You are an expert at summarizing long articles. Proceed to summarize the following text:
Labor and VA oversee six employment and training programs targeted to veterans (see table 1). Labor administers its programs through state workforce agencies in each state. Within Labor, VETS administers five employment programs targeted to veterans. VETS provides grants to states to support state workforce agency staff who serve veterans through the Disabled Veterans’ Outreach Program, Employment Representative Program, and TAP. Through the Homeless Veterans’ Reintegration Program and VWIP, VETS also provides funding to organizations that serve eligible veterans, including nonprofits. Labor oversees these programs through federal officials stationed in each region, as well as a Director of Veterans’ Employment and Training located in each state. Within VA, the Vocational Rehabilitation Program provides employment services to certain veterans with disabilities. VA offers the program in 56 regional offices and 169 satellite offices. The program has about 1,000 staff, including vocational rehabilitation counselors, employment coordinators, support staff, and managers. Rehabilitation counselors determine entitlement to services. In addition to its programs administered by VETS, Labor offers employment and training services to the general population—including veterans. These services are administered by the Employment and Training Administration (ETA). First, ETA administers the ES Program, which provides a national system of public employment services to all individuals seeking employment. ES provides services such as job search, labor market information, and job referrals to the public, including job seekers and employers. ETA carries out its ES Program through state workforce agencies. ETA also administers the WIA Adult and Dislocated Worker programs, which provide a broad range of services including job search assistance, skill assessment, and training for eligible individuals. When funds are limited, the WIA Adult Program is to give priority for intensive and training services to low income adults or those on public assistance. In program year 2010, 94,629 veterans exited from the WIA Adult Program. WIA’s Dislocated Worker Program generally targets adults who have been terminated or laid off from employment and meet other criteria. In program year 2010, 58,350 veterans exited from the WIA Dislocated Worker Program. Federal law requires VETS, ES, and WIA Adult and Dislocated Worker programs to offer their services through the one-stop system—which includes centers through which job seekers can access a range of employment and training programs. Two of VETS’ programs—Disabled Veterans’ Outreach and Employment Representative programs—have about 2,100 staff who work primarily in local one-stop centers. Federal law also requires other Labor-funded programs—including ES and WIA Adult and Dislocated Worker programs—to give veterans priority over the general population when they seek services (referred to as priority of service). VETS and ETA jointly monitor compliance with this requirement. Most ETA and VETS programs report the same performance measures, known as the common measures. They include percentage of program exiters who have obtained employment (entered employment rate), percentage retaining employment for 6 months after exiting the program (employment retention rate), and 6-month average earnings of program exiters (average earnings). For each of these, Labor establishes annual performance goals. 
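To make the common measures concrete, the following sketch computes the three rates from a handful of hypothetical exiter records. The record layout, field names, and figures are illustrative assumptions for demonstration only, not Labor's actual data definitions or reporting rules.

```python
# Illustrative computation of Labor's three common measures from
# hypothetical program-exiter records. Field names and values are
# assumptions for demonstration only.

exiters = [
    {"employed_after_exit": True,  "retained_6_months": True,  "earnings_6_months": 18000},
    {"employed_after_exit": True,  "retained_6_months": False, "earnings_6_months": 4000},
    {"employed_after_exit": False, "retained_6_months": False, "earnings_6_months": 0},
    {"employed_after_exit": True,  "retained_6_months": True,  "earnings_6_months": 22000},
]

entered = [e for e in exiters if e["employed_after_exit"]]
retained = [e for e in entered if e["retained_6_months"]]

# Entered employment rate: share of program exiters who obtained employment.
entered_employment_rate = len(entered) / len(exiters)

# Employment retention rate: share retaining employment for 6 months after exit.
employment_retention_rate = len(retained) / len(entered)

# Average earnings: 6-month average earnings of exiters who retained employment.
average_earnings = sum(e["earnings_6_months"] for e in retained) / len(retained)

print(f"Entered employment rate:   {entered_employment_rate:.0%}")   # 75%
print(f"Employment retention rate: {employment_retention_rate:.0%}") # 67%
print(f"Average earnings:          ${average_earnings:,.0f}")        # $20,000
```

Labor's actual denominators and look-back windows differ in detail, but the structure is the same: each measure is a ratio or average over a defined subset of program exiters, compared annually against a performance goal.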
VA reports an employment rehabilitation rate as a measure of performance for the Vocational Rehabilitation Program. A “rehabilitated” veteran is one who successfully completes a rehabilitation plan and is equipped with the required skills and tools needed to obtain and maintain suitable employment (i.e., employment that is consistent with the veteran’s skills, aptitudes, and interests). DOD works with Labor and VA to provide transition assistance workshops as a part of TAP. In addition, DOD helps Guard and Reserve members obtain civilian employment through its operation of several programs, including the Yellow Ribbon Program and ESGR. The Yellow Ribbon Program serves National Guard and Reserve members and their families by hosting events that provide information on employment opportunities, health care, education/training opportunities, finances, and legal benefits. The ESGR is a nationwide network of volunteers who address unemployment and underemployment of Guard and Reserve members through participation in employment-related events. As shown in figure 1, the six federal employment and training programs targeted to veterans offer similar types of employment and training services. These services include assessment, job search or job placement activities, and job readiness skills training. Other services available from more than one of these programs include the development of job opportunities, job referrals, and occupational and vocational training, among others. Labor and VA have established a framework to coordinate their employment and training programs. In 2005, Labor and VA signed an interagency memorandum of agreement that outlines how the agencies plan to coordinate the Vocational Rehabilitation Program and the Disabled Veterans’ Outreach and Employment Representative programs to serve disabled veterans. The agencies have also collaboratively created an interagency handbook that delineates roles and responsibilities and establishes a referral process between the Disabled Veterans’ Outreach and the Vocational Rehabilitation programs. To assist field staff, the interagency handbook also provides standard language and guidance for agreements between local Labor and VA offices. As a result, local offices from both agencies can tailor the standard agreement language to meet local situations. The handbook has not been updated since 2008. Labor and VA have provided staff with training on the handbook and formed a group to monitor coordination. Labor and VA conducted a national training webinar based on the interagency handbook for both agencies’ staff after it was published, have made virtual trainings available since 2009, and provided technical assistance to staff. To monitor the coordination activities outlined in the interagency handbook, Labor and VA created a Joint Work Group. According to Labor and VA officials, this group recently discussed and agreed on a plan to review one-third of local agreements made between Labor and VA field locations annually. Labor and VA have collected information that could be useful in updating the handbook. The Joint Work Group recently conducted its first in-depth review of states’ implementation of the handbook since it was established in 2008. According to Labor and VA officials, the Joint Work Group electronically surveyed the VA employment coordinators in all 56 VA Regional Offices and the 52 state-level directors of the Disabled Veterans’ Outreach and Employment Representative programs. 
While VA officials stated that they are currently reviewing the survey results to determine if the handbook needs to be updated, Labor officials told us they believe the handbook needs to be updated. We have reported previously that agencies need to create means to monitor agreements and related guidance periodically to keep them current and to identify areas for improvement. Our interviews with Labor and VA officials identified certain challenges with meeting desired program outcomes resulting, in part, from sections of the handbook that are subject to misunderstanding or provide insufficient guidance. They pertain to incorporating labor market information into rehabilitation plans and finding “suitable employment” for participants. The first challenge with referrals as outlined in the handbook involved ensuring that participants’ rehabilitation plans prepared them for jobs that existed in their local area. According to the referral process outlined in the interagency handbook and by agency officials (see fig. 3), there are two main referral points from the Vocational Rehabilitation Program to Labor’s Disabled Veterans’ Outreach Program staff: (1) before the participant’s rehabilitation plan is completed and (2) after the participant has completed a rehabilitation plan and been deemed job-ready, or ready for employment, by VA staff. Disabled Veterans’ Outreach Program staff may provide participants with labor market information or other employment assistance at the first referral point and are required to do so at the second referral point. While VA officials in four of the six states we reviewed reported that they connected participants with Labor staff to receive labor market information and other employment consulting, only three of these states reported that they did this early in the process before the rehabilitation plan is completed. In two other states, VA officials reported they understood that they were supposed to refer participants to Labor only after they had completed rehabilitation plans and were job-ready, essentially skipping the first step where labor market information may have been useful. VA officials reported that labor market information may be provided to participants through small group presentations with Disabled Veterans’ Outreach staff. For their part, state-level Labor officials noted that job placement was more challenging for Disabled Veterans’ Outreach Program staff when participants’ rehabilitation plans were developed without labor market information. In such cases, according to Labor officials, Disabled Veterans’ Outreach Program staff were sometimes working with plans focused on training in occupations not available in the local labor market—in effect using programs’ funds to prepare participants for jobs that do not exist in their local area. According to Labor officials, this made it more difficult for participants to have successful employment outcomes. The second challenge with referrals as outlined in the handbook involved ensuring that job-ready participants are directed to “suitable employment.” When veterans are referred to the Disabled Veterans’ Outreach Program at the job-ready stage, Disabled Veterans’ Outreach Program and VA staff are supposed to coordinate to find “suitable employment,” or employment that will not aggravate the participant’s disability and follows the participant’s rehabilitation plan. 
State-level Labor officials noted that, in some cases, veterans may choose to accept jobs they want or need but that do not fit in their employment or rehabilitation plan. Such jobs do not count as “suitable employment” for VA because the job may, in the long run, aggravate the veteran’s disability. While the handbook says agencies are to coordinate to achieve “suitable employment,” it does not explicitly say how Disabled Veterans’ Outreach Program and VA staff should deal with situations where a veteran’s financial need or preferences do not align with the goal of suitable employment. Absent guidance about how to navigate such situations, program staff may be working at cross purposes and program participants may be taking employment they cannot retain in the long run. This employment, in turn, may make a veteran’s disability worse and may make finding future employment more difficult. One official stressed that having labor market information incorporated into rehabilitation plans early may help veterans avoid taking a job that does not match their plans. DOD is expanding its employment assistance to National Guard and Reserve members, but does not have employment service agreements with Labor or VA beyond an agreement for TAP. In fiscal year 2011, DOD launched an employment assistance initiative under the Yellow Ribbon Program, known as the Employment Initiative Program, that provides job workshops and job fairs to connect Guard and Reserve members to employers. The Employment Initiative Program under the Yellow Ribbon Program has hired and placed 56 “Employment Transition Coordinators” covering all 50 states, the territories, and the District of Columbia, who provide service members with employment assistance, among other services. The Yellow Ribbon Program has also held 27 job fairs since the beginning of fiscal year 2012. To support the job fairs, the Yellow Ribbon Employment Initiative Program leverages the network of 4,900 volunteers who are affiliated with ESGR. These volunteers also provide resume-building workshops, mock interviews, and career counseling. DOD also recently testified that it is leading, per a White House directive, a new Credentialing and Licensing Task Force to address gaps between military occupational specialties and civilian licensing requirements. DOD reported that this additional employment assistance is needed to support Guard and Reserve members who may not meet veteran status requirements necessary for participating in Labor or VA programs. Most of the ESGR representatives we spoke with also anticipated the program would continue to provide employment-related services. Although DOD has established these employment assistance services, no agreement or formal mechanism has been established for coordinating them with Labor’s and VA’s veterans’ employment efforts. Specifically, there is no interagency agreement for coordinating employment services beyond DOD’s and Labor’s work on the Uniformed Services Employment and Reemployment Rights Act of 1994 (USERRA) and TAP. Although DOD and VA have an agreement, it focuses on connecting service members who are leaving the military for civilian life with vocational rehabilitation services. Further, ESGR has no formal mechanism for identifying and referring eligible veterans to the Disabled Veterans’ Outreach and Employment Representative programs. 
We have previously reported that agencies with common goals and programs can enhance and sustain collaboration by creating mutually agreed-upon strategies to help align agencies’ activities and leverage resources to meet their common goals. Currently, ESGR in the states we reviewed reported informal coordination—such as meetings and co-participation in job fairs—with Labor-funded programs. For example, a DOD official noted that the Washington ESGR used a grant to hire 13 employment transition counselors in areas that needed service not provided by the state workforce agencies. According to this official, this ultimately increased impact while saving funds. However, this informal coordination may be affecting Labor resources and confusing employers. According to Labor officials, Disabled Veterans’ Outreach Program staff participation at DOD job fairs reduces the amount of time available for their primary duties, such as providing intensive services to program participants. A variety of officials from the states we reviewed also said that some employers were confused regarding which agency was leading the initiatives to employ veterans. Employment outcomes for veterans’ programs have generally not regained levels attained prior to the recent recession. (See appendix II for performance outcomes for each veterans’ program as well as veteran participants served by WIA Adult and ES programs over a 5-year period.) From program years 2007 to 2009—which spanned July 2007 to June 2010—most Labor veterans’ programs that have outcome measures saw a decline in their entered employment rate and a slight decline in their 6-month job retention rates. In program year 2010, all programs except VWIP had lower entered employment and employment retention rates than in program year 2006, prior to the recession. The number of VA Vocational Rehabilitation Program participants who were rehabilitated to employment has also declined, from 9,225 participants in fiscal year 2006 to 7,975 participants in fiscal year 2011. Officials at both VA and Labor attributed the declines to various causes. For example, VA officials attribute some of the decline in the number of participants rehabilitated to the establishment of the Post-9/11 GI Bill Program. The Post-9/11 GI Bill Program is an education benefit administered by VA for individuals who served on active duty after September 10, 2001. According to VA officials, VA’s Vocational Rehabilitation Program lost some participants who had begun rehabilitation efforts but switched to the Post-9/11 GI Bill Program. They switched, according to VA officials, because the GI Bill Program provided a more generous living stipend than the Vocational Rehabilitation Program. At the same time, Labor officials identified national economic conditions as the primary reason for the drop in performance of its programs. In addition to the decline in outcomes for veterans’ programs, veterans participating in broader workforce programs also achieved somewhat lower outcomes than participants in the general population. (See app. II, figs. 7 and 8.) From program years 2007 to 2009, the WIA Adult and ES programs saw declines in measures for the percentage of participants who entered employment and the percentage who retained their employment for 6 months. These measures generally rebounded slightly in 2010, although they have not regained levels attained prior to the recent recession. 
Since 2006, veterans have had slightly lower entered employment outcomes than those for all participants using the ES Program. In the WIA Adult Program, veterans’ employment and retention outcomes have been slightly lower than outcomes for all participants since 2009. Further, between 2006 and 2010, employment and retention outcomes were similar but slightly lower for veterans who worked with Disabled Veterans’ Outreach and Employment Representative program staff, in comparison with outcomes for veterans in the WIA Adult Program. According to Labor officials, some of these differences in outcomes may be explained by differences in characteristics of the populations served. They noted that veteran participants in the WIA Adult Program are more likely to be over the age of 55 than nonveteran participants, and historically, older workers have achieved lower outcomes in both the WIA Adult and ES programs. In addition, Labor officials stated that because the Disabled Veterans’ Outreach and Employment Representative programs serve veterans who face barriers to employment, their outcomes are likely to be lower than outcomes for veterans who are job-ready and nonveterans served by the WIA Adult or ES programs. While Labor reports some data on veterans’ program outcomes, it does not report the extent to which each of these programs is achieving its established performance goals. Labor provides Congress an annual veterans’ program report that provides certain performance information, such as the number of disabled and recently separated veterans who received intensive services. For this annual report, however, Labor is not required to report program outcomes in relation to performance goals. Labor sets annual performance goals for its veterans’ programs, but it is not reporting the results relative to those goals. In previous fiscal years, Labor included some of this information for the Disabled Veterans’ Outreach Program, Employment Representative Program, Homeless Veterans’ Reintegration Program, and VWIP in its agencywide performance report. However, in fiscal year 2011, it only reported aggregate goals for three programs, rather than the separate outcomes and goals it maintains for each of these veterans’ programs. In contrast, Labor’s website on general employment programs—WIA Adult and ES—includes both performance goals and outcomes. This information includes a national average for each measure comparing goals against performance, as well as each state’s negotiated goals and performance against those goals. Further, VA reports both an employment outcome and associated goal for the Vocational Rehabilitation Program. We have previously reported that relevant performance information should be reported both internally and externally in order to maintain accountability and transparency for achieving results. Without information on how the outcomes for each veterans’ program compare against their annual performance goals, Congress and other key stakeholders lack essential information needed to assess the performance of the program. Labor is working to implement new performance measures which have been mandated by the VOW to Hire Heroes Act of 2011 (VOW Act). Specifically, the act requires Labor to measure participants’ median earnings 90 and 180 days after a participant stops using a veterans’ program. Prior to the VOW Act, Labor only measured participants’ average earnings over 6 months after participants stop using a veterans’ program, for those who retained employment. 
The VOW Act also requires Labor to track the percentage of participants obtaining a certificate, degree, diploma, licensure, or industry-recognized credential after participating in its veterans’ programs. VA also plans to collect additional information about its programs’ outcomes. VA officials said that they decided to track the number, in addition to the rate, of veterans rehabilitated to employment, because the employment rehabilitation rate can fluctuate based on a number of factors. For example, the rehabilitation rate can be negatively affected by veterans who choose to stop participating before completing a rehabilitation plan. VA has set a national goal to rehabilitate 9,000 veterans to employment in fiscal year 2012. VA officials said that this is the first fiscal year this goal has been used. Consequently, VA has not yet reported its performance against this goal. In addition, the Vocational Rehabilitation Program has established a working group to develop new national performance measures. According to VA officials, the new measures will focus on the middle of the rehabilitation process, because a veteran can be in the program from 1 to 6 years, with an average of 4 years. The measures that already exist focus on the front-end (e.g., timeliness of services) and back-end (e.g., outcomes). Although the new measures have not been finalized, VA plans to implement them in fiscal year 2014, contingent on resources to make changes to the program’s database structure to capture data and report on new measures. Although impact evaluations can determine the extent to which programs are responsible for participant outcomes, these studies can be difficult and potentially expensive to conduct. Impact evaluations can be designed in several ways, but fall into two basic design categories: experimental, using random assignment, and quasi-experimental. Quasi-experimental designs use a comparison group that is not created with random assignment. While Labor has not conducted impact evaluations, it has conducted research that examines veterans’ outcomes in relation to their characteristics and has other studies planned or under way (see table 2). These studies, though, have limitations. For example, Labor’s 2007 study of veterans’ outcomes covered five states, and its findings cannot be generalized to all states. In addition, in the study conducted on the Homeless Veterans’ Reintegration Program, researchers lacked access to participant-level data and consequently could not determine whether certain veterans’ characteristics were associated with positive or negative employment outcomes for the program as a whole. Labor is funding an evaluation of the pilot of the redesigned TAP, but has not conducted any studies or evaluations of VWIP in the last 10 years. While Labor has not conducted impact evaluations of its veterans’ employment and training programs, it is funding an impact evaluation of the WIA Adult and Dislocated Worker programs, which is planned to be completed in 2015. This study will include a supplemental study of veterans using the public workforce system, but this portion of the study, as described in the draft research plan, is not an impact evaluation and cannot determine the extent to which veterans’ outcomes are due to the services they receive in the public workforce system. Similar to Labor, VA has not conducted evaluations that allow it to determine if veterans’ employment outcomes result from program services or if they are the result of other factors. 
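As a rough illustration of the quasi-experimental design described above, the sketch below estimates a program's impact as the simple difference in entered-employment rates between participants and a comparison group formed without random assignment. All data are hypothetical, and the bare difference estimator is an assumption for illustration; real evaluations would adjust for differences between the groups.

```python
# Minimal sketch of a quasi-experimental impact estimate: compare
# outcomes of program participants with a comparison group of similar
# nonparticipants selected without random assignment. All data are
# hypothetical.

participants = [1, 1, 0, 1, 1, 0, 1, 1]  # 1 = entered employment after exit
comparison   = [1, 0, 0, 1, 0, 1, 0, 1]  # matched nonparticipants

def employment_rate(group):
    return sum(group) / len(group)

impact = employment_rate(participants) - employment_rate(comparison)

print(f"Participant rate: {employment_rate(participants):.0%}")  # 75%
print(f"Comparison rate:  {employment_rate(comparison):.0%}")    # 50%
print(f"Estimated impact: {impact:+.0%}")                        # +25%

# Caveat: absent random assignment, this difference can reflect
# preexisting differences between the groups (e.g., age or disability
# status) rather than program effects -- which is why outcome data
# alone cannot establish program effectiveness.
```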
As shown in table 3, VA has funded research that examines data related to the completion of veterans’ rehabilitation plans and participant outcomes. For example, VA is funding a longitudinal study of Vocational Rehabilitation Program participants and has issued two reports on the study. The most recent report begins to analyze VA administrative data to determine characteristics associated with completing rehabilitation or discontinuing the program within the first 2 years. However, the report states that its findings thus far are only descriptive and may have little or no predictive value. VA plans further study of emerging trends. VA also plans additional follow-up of program participants in its case management process. Specifically, the agency plans to send a questionnaire to collect information on whether former participants are employed and whether they need additional services. Given that the number of service members transitioning to civilian employment is expected to increase and the number of veterans with service-connected disabilities is on the rise, Labor’s Disabled Veterans’ Outreach Program is likely to see an increased demand for its services. Labor attempts to maximize the employment services for those veterans who need them most. However, we found that there is a need for clearer guidance to states on how to prioritize services and additional monitoring of their implementation of such guidance. Labor said it is developing such guidance but has not completed it, and has tested new monitoring protocols in six states but has not finalized them. It is encouraging that Labor has these efforts under way, and it will be important for the department to complete both efforts. Labor and VA both provide employment and training programs targeted to veterans. Although Labor and VA have a handbook governing their coordination with respect to employment and training for veterans, it has not been updated since 2008. Our work identified sections of the handbook that provided insufficient guidance, resulting in situations where the practices of one department presented difficulties for the other in meeting desired program outcomes. At the same time, DOD has begun expanding employment assistance initiatives to segments of the veteran population, such as National Guard and Reserve members, some of whom may also meet Labor and VA veterans’ programs eligibility requirements. However, Labor and VA’s agreement does not govern their coordination with DOD’s programs. Without an agreement that includes all three departments, efforts to help veterans find employment are at greater risk of being fragmented or overlapping, and may not leverage federal resources. Finally, the federal investment in veterans’ employment and training programs warrants greater transparency with regard to the extent to which these programs are meeting their performance goals and whether outcomes are attributable to program participation and not other factors. Labor reports substantial information on outcomes for these programs. However, Labor is not consistently reporting the extent to which outcomes for each of its veterans’ programs are achieving the specific performance goals that were established for these programs. This stands in contrast to the level of performance reporting by Labor for its WIA Adult and ES programs, which identifies the extent to which outcomes in these programs are achieving performance goals. 
In addition, while the federal government makes a substantial investment in Labor and VA programs to achieve employment outcomes for veterans, neither agency has conducted studies to see if these outcomes can be attributed to the programs’ services, instead of other factors. As a result, Congress and other stakeholders lack essential information to assess how well these programs are performing and hold federal agencies accountable for achieving results. We are making the following four recommendations based on our review: To increase the effectiveness of coordination efforts, the Secretaries of Labor and VA should incorporate additional guidance to address the two problem areas we identified into any update to the interagency handbook that governs their coordination for veterans’ employment and training programs. To ensure government resources are used efficiently, the Secretaries of Labor, VA, and DOD should incorporate DOD’s employment assistance initiatives into the agreements that guide interagency coordination. To enhance transparency and accountability for achieving results, the Secretary of Labor should consistently report both performance goals and associated performance outcomes for each of its veterans’ employment and training programs. To assess veterans’ employment programs’ effectiveness, the Secretaries of Labor and VA should, to the extent possible, determine the extent to which veterans’ employment outcomes result from program participation or are the result of other factors. We provided a draft of this report to the Department of Labor, the Department of Veterans Affairs, and the Department of Defense for review and comment. Written comments from Labor, VA, and DOD appear in appendixes III, IV, and V, respectively. In addition to the comments discussed below, Labor, VA, and DOD provided technical comments that we incorporated where appropriate. All three agencies generally concurred with our recommendations. Both Labor and VA said they would work to enhance coordination with each other with respect to the guidance in their interagency handbook. All three agencies said they would work to ensure interagency coordination included DOD. In response to our recommendation on reporting program performance, Labor said it will explore ways to increase consistency and transparency of the information it reports. In response to our recommendation to Labor and VA regarding assessing program effectiveness, VA concurred and Labor did not specify whether it agreed. Labor said that it is committed to robust program evaluation and that each agency, including VETS, develops an annual evaluation agenda and sets priorities. Labor said it has a multi-component agenda for evaluating services to veterans and cited some current studies, such as a study of the TAP program and a statistical analysis of services received by veterans and their outcomes using the public workforce system. We think obtaining information about the effectiveness of veterans’ programs is important because such information can assist Congress in assessing program results and identifying areas where adjustments may be needed. As Labor and VA conduct research on program outcomes, it will be important for them to consider approaches that would enable them to separate the impact of their programs from other factors that might influence participants’ outcomes. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Labor, the Secretary of Veterans Affairs, the Secretary of Defense, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. In a January 2011 report, we identified six employment and training programs administered by the Department of Labor (Labor) and the Department of Veterans Affairs (VA) targeted toward veterans as a part of a larger review of all federal employment and training programs. We defined an employment and training program as one specifically designed to enhance the job skills of individuals in order to increase their employability, identify job opportunities, or help job seekers obtain employment. Labor oversees five of these programs for veterans: (1) the Disabled Veterans’ Outreach Program, (2) the Homeless Veterans’ Reintegration Program, (3) the Local Veterans’ Employment Representative Program (Employment Representative Program), (4) the Transition Assistance Program (TAP), and (5) the Veterans’ Workforce Investment Program (VWIP). VA oversees the sixth program, the Vocational Rehabilitation & Employment Program (Vocational Rehabilitation Program). Our 2011 report identified services, eligibility requirements, and outcome measures that these programs had in common. For this report, we focused on the six programs identified in our January 2011 report in more detail and examined (1) the extent to which federal veterans’ employment and training programs vary in terms of the services they deliver and the veterans who receive them; (2) the extent to which federal agencies coordinate these programs; and (3) what is known about the performance of these programs. Overall, our approach involved reviewing relevant literature and federal laws and regulations and analyzing Labor and VA data on veteran participants, services provided, and performance. We also interviewed federal Labor, VA, and Department of Defense (DOD) agency officials who govern agency policy at the national level and key stakeholder associations. To determine the extent to which these programs vary in terms of the services they deliver and the veterans who receive them, we analyzed Labor and VA data on services provided and veteran participants, agency annual reports, budget justifications, and other agency policy documents. To assess the reliability of Labor’s data on services to veterans in one-stops, we reviewed available information about the data and conducted interviews with officials knowledgeable about the data. We determined that these data were sufficiently reliable for our purposes. We reviewed relevant federal laws and regulations to determine program eligibility requirements. We interviewed agency officials, veterans’ service organizations, and workforce associations to better understand why programs may serve similar populations with similar services. We reviewed state plans and interviewed state-level Labor, VA, and DOD staff for our case study states. In addition, we conducted six case studies at the state level. 
In each state, we reviewed state plans and interviewed Labor and VA officials assigned to the state or the region. We also interviewed the Employer Support of the Guard and Reserve (ESGR) chairperson operating in the state. These chairpersons are representatives of DOD, but they are volunteers, not federal employees. In addition, we interviewed the directors of state workforce agencies, which carry out veterans’ employment and training programs using federal funds. Finally, within three states, we interviewed the Director of Veterans Affairs in each state, a state government official responsible for veterans’ programs and services. To select states, we determined whether each of the 50 states was high, medium, or low on the following characteristics: percentage of veteran population, amount of program expenditures, program performance (veterans’ entered employment rate), and veterans’ unemployment rate. We selected one state from each of Labor’s six regions to achieve variation on the above characteristics, as well as diversity in terms of geography and state size. These states were Florida, Massachusetts, Ohio, Oregon, Texas, and Virginia. To determine the extent to which federal agencies coordinate these programs, we reviewed key agency agreements and guidance, used the same six case studies at the state level, and interviewed federal and state agency officials and associations representing the interests of veterans. In examining coordination, we included not only the six programs indicated above but also three Labor programs available to the general population: the Workforce Investment Act (WIA) Adult and Dislocated Worker and the Employment Service (ES) programs. We also included programs recently begun by DOD: the Yellow Ribbon and ESGR programs. We reviewed memoranda of understanding, agency guidance, and other policy documents related to collaborative efforts among federal agencies. We also interviewed National Veterans’ Training Institute officials to discuss the extent to which required training for outreach specialists and employment representatives includes instruction on how to foster inter- and intra-agency coordination. In our case studies in six states, we interviewed state-level officials from the Veterans’ Employment and Training Service, as well as VA officials. We also interviewed ESGR officials and Directors of State Offices of Veterans Affairs. To understand stakeholders’ views on coordination, we interviewed officials from workforce associations and veterans’ service organizations. We also used data from the Defense Manpower Data Center to determine the number of Guard and Reserve members who may meet the eligibility requirements for Labor veterans’ programs and VA’s Vocational Rehabilitation Program. We assessed the reliability of information on Guard and Reserve members’ length of service and disability status, and determined that these data were sufficiently reliable for the purposes of this report. To determine what is known about program performance, we analyzed relevant federal laws and regulations and agency documents, and interviewed agency officials and stakeholders. We reviewed agency reports on veterans’ programs containing information on program outcomes and agency goals established for these programs, such as Labor’s annual report to Congress on veterans’ programs and agencywide performance reports. 
We assessed Labor and VA data on participant employment outcomes by reviewing available information about the data and conducting interviews with officials knowledgeable about the data. We determined that these data were sufficiently reliable for our purposes. We reviewed the design and methodology of relevant agency-sponsored program evaluations using GAO criteria on program evaluation design. We also interviewed Labor and VA national and regional officials. In addition to the individual named above, Patrick Dibattista (Assistant Director), Sheranda Campbell, Maria Gaona, and Dana Hopings made key contributions to this report. In addition, key support was provided by James Bennett, David Chrisinger, Holly Dye, Rachel Frisk, David Forgosh, Alexander Galuten, Kathy Leslie, Ashley McCall, and Walter Vance. | In fiscal year 2011, the federal government spent an estimated $1.2 billion on six veterans' employment and training programs, serving about 880,000 participants. Labor administers five of these programs and VA administers one. Despite these efforts, the unemployment rate for veterans who have recently separated from the military is higher than that for the civilian population. The number of service members transitioning to the civilian workforce is expected to increase. In response to a request, this report examines (1) the extent to which federal veterans' employment and training programs vary in services they deliver and veterans who receive them; (2) the extent to which federal agencies coordinate programs; and (3) what is known about the performance of these programs. To address these objectives, GAO reviewed agency data, policy documents, and relevant federal laws and regulations, reports, and studies, and interviewed federal and regional officials and state officials in six states selected to achieve geographic and demographic diversity. In examining coordination, GAO included in its review employment assistance DOD provides to Guard and Reserve members. The six federal veterans' employment and training programs offer similar employment services, but largely target different groups. Among these programs, the Department of Labor's (Labor) Disabled Veterans' Outreach Program has the greatest potential for overlap with other veterans' programs and Labor's employment programs for the general population. Federal law governing the Disabled Veterans' Outreach Program makes all veterans who meet the broad definition of "eligible veteran" eligible for its services, but gives disabled veterans and economically and educationally disadvantaged veterans the highest priority for services. However, Labor's guidance does not provide states—which administer the program using federal funds—criteria for prioritizing services. The law also generally requires that program staff provide participants with intensive services (e.g., individual employment plans), but Labor's data indicate that nationally 28 percent of participants received such services in 2011. Labor officials said one possible explanation for this statistic was that staff are enrolling people who do not need intensive services. Labor said it plans to develop guidance on prioritizing services, and it also has a six-state pilot to improve monitoring, but neither of these efforts has been completed. 
In 2008, Labor and the Department of Veterans Affairs (VA) compiled a handbook intended to guide the roles of their respective staff in coordinating services to disabled veterans; however, they have not updated the handbook nor included related Department of Defense (DOD) employment initiatives in their interagency agreements. GAO's interviews with VA and Labor officials identified certain challenges with meeting desired program outcomes resulting, in part, from sections of the handbook that provide insufficient guidance or are subject to misunderstanding. For example, the handbook says Labor and VA are to coordinate to achieve "suitable employment"—employment that follows the veteran's rehabilitation plan and does not aggravate the disability. However, it does not explicitly say how staff should navigate situations where a veteran's financial need or preferences do not align with this goal. In such instances, program staff may work at cross purposes and veterans may accept jobs that do not count as suitable employment. Further, DOD is expanding its employment assistance, but does not have an interagency agreement to coordinate with Labor and VA efforts. Absent an updated handbook and integration of DOD into the coordination framework, there is increased risk for poor coordination and program overlap. While available performance information shows that most programs' outcomes are below pre-2007 levels, the information Labor reports and the research it has conducted make it difficult to know the extent to which each program is achieving its annual performance goals. Veterans' employment outcomes for programs administered by both Labor and VA have generally not regained levels seen before the recession that began in 2007, which is similar to employment programs for the general population. In reporting performance, Labor does not relate employment outcomes to individual program goals. In contrast, Labor reports outcomes and goals for its other workforce programs aimed at the general population. Moreover, while both agencies have studies completed or under way, neither has conducted impact evaluations that assess program effectiveness to determine whether outcomes are attributable to program participation and not other factors. As a result, Congress and other key stakeholders lack essential information needed to assess each program's performance. GAO is making four recommendations aimed at improving the guidance provided to staff in the coordination handbook, integrating DOD into the interagency coordination framework, improving agency reporting on achievement of program performance goals, and assessing program effectiveness. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Department of the Navy (DON) is a major component of DOD, consisting of two uniformed services: the Navy and the Marine Corps. The Marine Corps’ primary mission is to serve as a “total force in readiness” by responding quickly in a wide spectrum of responsibilities, such as attacks from sea to land in support of naval operations, air combat, and security of naval bases. As the only service that operates in three dimensions—in the air, on land, and at sea—the Marine Corps must be equipped to provide rapid and precise logistics support to operating forces in any environment. The Marine Corps’ many dispersed organizational components rely heavily on IT to perform their respective mission-critical operations and related business functions, such as logistics and financial management. For fiscal year 2008, the Marine Corps budget for IT business systems is about $1.3 billion, of which $746 million (57 percent) is for operations and maintenance of existing systems and $553 million (43 percent) is for systems development and modernization. Of the approximately 904 systems in DON’s current inventory, the Marine Corps accounts for 81, or about 9 percent, of the total. The GCSS-MC is one such system investment. According to DOD, it is intended to address the Marine Corps’ long-standing problem of stove-piped logistics systems that collectively provide limited data visibility and access, are unable to present a common, integrated logistics picture in support of the warfighter, and do not provide important decision support tools. In September 2003, the Marine Corps initiated GCSS-MC to (1) deliver integrated functionality across the logistics areas (e.g., supply and maintenance), (2) provide timely and complete logistics information to authorized users for decision making, and (3) provide access to logistics information and applications regardless of location. The system is intended to function in three operational environments—deployed operations (i.e., in theater of war or exercise environment on land or at sea), in-transit, and in garrison. When GCSS-MC is fully implemented, it is to support about 33,000 users located around the world. GCSS-MC is being developed in a series of large and complex increments using commercially available enterprise resource planning (ERP) software and hardware components. The first increment is currently the only funded portion of the program and is to provide a range of asset management capabilities, including planning inventory requirements to support current and future demands; requesting and tracking the status of products (e.g., supplies and personnel) and services (e.g., maintenance and engineering); allocating resources (e.g., inventory, warehouse capacity, and personnel) to support unit demands for specific products; and scheduling maintenance resources (e.g., manpower, equipment, and supplies) for specific assets, such as vehicles. Additionally, the first increment is to replace four legacy systems scheduled for retirement in 2010. Table 1 describes these four systems. Future increments are to provide additional functionality (e.g., transportation and wholesale inventory management), enhance existing functionality, and potentially replace up to 44 additional legacy systems. The program office estimates the total life cycle cost for the first increment to be about $442 million, including $169 million for acquisition and $273 million for operations and maintenance. 
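As a quick arithmetic check, the figures cited above are internally consistent; the sketch below reproduces the first-increment life cycle total and the fiscal year 2008 budget split (all amounts in millions of dollars, as stated in this section).

```python
# Checking the GCSS-MC and Marine Corps figures cited above
# (all amounts in millions of dollars).

# First-increment life cycle cost: acquisition plus operations and
# maintenance should equal the ~$442 million estimate.
acquisition = 169
operations_and_maintenance = 273
print(acquisition + operations_and_maintenance)  # 442

# Fiscal year 2008 IT business systems budget: the two categories sum
# to about $1.3 billion, split roughly 57/43 percent.
o_and_m_budget = 746
dev_and_mod_budget = 553
total = o_and_m_budget + dev_and_mod_budget
print(total)  # 1299, i.e., about $1.3 billion
print(f"{o_and_m_budget / total:.0%} / {dev_and_mod_budget / total:.0%}")  # 57% / 43%
```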
The total life cycle cost of the entire program has not yet been determined because future increments are currently in the planning stages and have not been defined. As of April 2008, the program office reported that approximately $125 million has been spent on the first increment. To manage the acquisition and deployment of GCSS-MC, the Marine Corps established a program management office within the Program Executive Office for Executive Information Systems. The program office is led by the Program Manager who is responsible for managing the program’s scope and funding and ensuring that the program meets its objectives. To accomplish this, the program office is responsible for key acquisition management controls, such as architectural alignment, economic justification, EVM, requirements management, risk management, and system quality measurement. In addition, various DOD and DON organizations share program oversight and review activities relative to these and other acquisition management controls. A listing of key entities and their roles and responsibilities is in table 2. The program office reports that the first increment of GCSS-MC is currently in the system development and demonstration phase of the defense acquisition system (DAS). The DAS consists of five key program life cycle phases and three related milestone decision points. These five phases and related milestones are described below, along with a summary of key program activities completed during, or planned for, each phase: 1. Concept refinement: The purpose of this phase is to refine the initial system solution (concept) and create a strategy for acquiring the investment solution. During this phase, the program office defined the acquisition strategy and analyzed alternative solutions. The first increment completed this phase on July 23, 2004, which was 1 month later than planned, and the MDA approved a Milestone A decision to move to the next phase. 2. Technology development: The purpose of this phase is to determine the appropriate set of technologies to be integrated into the investment solution by iteratively assessing the viability of various technologies while simultaneously refining user requirements. During this phase, the program office selected Oracle’s E-Business Suite as the commercial off-the-shelf ERP software. In addition, the program office awarded Accenture the system integration contract to, among other things, configure the software, establish system interfaces, and implement the new system. This system integration contract was divided into two phases—Part 1 for the planning, analysis, and conceptual design of the solution and Part 2 for detailed design, build, test, and deployment of the solution. The program office did not exercise the option for Part 2 of the Accenture contract and shortly thereafter established a new program baseline in June 2006. In November 2006, it awarded Oracle a time-and-materials system integration contract, valued at $28.4 million, for solution design. The first increment completed this phase on June 8, 2007, which was 25 months later than planned due in part to contractual performance shortfalls, and the MDA approved a Milestone B decision to move to the next phase. 3. System development and demonstration: The purpose of this phase is to develop the system and demonstrate through developer testing that the system can function in its target environment. 
During this phase, the program office extended the solution design contract and increased funding to $67.5 million due, in part, to delays in completing the detailed design activities. As a result, the program office has not yet awarded the next contract (which includes both firm-fixed-price and time-and-materials task orders) for build and testing activities, originally planned for July 2007. Instead, it entered what it termed a “transition period” to complete detailed design activities. According to the program’s baseline, the MDA is expected to approve a Milestone C decision to move to the next phase in October 2008. However, program officials stated that Milestone C is now scheduled for April 2009, which is 35 months later than originally planned. 4. Production and deployment: The purpose of this phase is to achieve an operational capability that satisfies the mission needs, as verified through independent operational test and evaluation, and implement the system at all applicable locations. The program office plans to award a separate firm-fixed-price plus award fee contract for these activities with estimated costs yet to be determined. 5. Operations and support: The purpose of this phase is to operationally sustain the system in the most cost-effective manner over its life cycle. The details of this phase have not yet been defined. Overall, GCSS-MC was originally planned to reach full operational capability (FOC) in fiscal year 2007 at an estimated cost of about $126 million over a 7-year life cycle. This cost estimate was later revised in 2005 to about $249 million over a 13-year life cycle. However, the program now expects to reach FOC in fiscal year 2010 at a cost of about $442 million over a 12-year life cycle. Figures 1 and 2 show the program’s current status against original milestones and original, revised, and current cost estimates. Acquisition best practices are tried and proven methods, processes, techniques, and activities that organizations define and use to minimize program risks and maximize the chances of a program’s success. Using best practices can result in better outcomes, including cost savings, improved service and product quality, and a better return on investment. For example, two software engineering analyses of nearly 200 systems acquisition projects indicate that teams using systems acquisition best practices produced cost savings of at least 11 percent over similar projects conducted by teams that did not employ the kind of rigor and discipline embedded in these practices. In addition, our research shows that best practices are a significant factor in successful acquisition outcomes and increase the likelihood that programs and projects will be executed within cost and schedule estimates. We and others have identified and promoted the use of a number of best practices associated with acquiring IT systems. See table 3 for a description of several of these activities. We have previously reported that DOD has not effectively managed a number of business system investments. Among other things, our reviews of individual system investments have identified weaknesses in such areas as architectural alignment and informed investment decision making, which are also the focus areas of the Fiscal Year 2005 National Defense Authorization Act business system provisions. 
Our reviews have also identified weaknesses in other system acquisition and investment management areas—such as EVM, economic justification, requirements management, risk management, and test management. Most recently, for example, we reported that the Army’s approach to investing about $5 billion over the next several years in its General Fund Enterprise Business System, Global Combat Support System-Army Field/Tactical, and Logistics Modernization Program did not include alignment with Army enterprise architecture or use a portfolio-based business system investment review process. Moreover, we reported that the Army did not have reliable analyses, such as economic analyses, to support its management of these programs. We concluded that until the Army adopts a business system investment management approach that provides for reviewing groups of systems and making enterprise decisions on how these groups will collectively interoperate to provide a desired capability, it runs the risk of investing significant resources in business systems that do not provide the desired functionality and efficiency. Accordingly, we made recommendations aimed at improving the department’s efforts to achieve total asset visibility and enhancing its efforts to improve its control and accountability over business system investments. The department agreed with our recommendations. We also reported that DON had not, among other things, economically justified its ongoing and planned investment in the Naval Tactical Command Support System (NTCSS) and had not invested in NTCSS within the context of a well-defined DOD or DON enterprise architecture. In addition, we reported that DON had not effectively performed key measurement, reporting, budgeting, and oversight activities and had not adequately conducted requirements management and testing activities. We concluded that, without this information, DON could not determine whether NTCSS, as defined, and as being developed, is the right solution to meet its strategic business and technological needs. Accordingly, we recommended that the department develop the analytical basis to determine if continued investment in the NTCSS represents prudent use of limited resources and to strengthen management of the program, conditional upon a decision to proceed with further investment in the program. The department largely agreed with these recommendations. In addition, we reported that the Army had not defined and developed its Transportation Coordinators’ Automated Information for Movements System II (TC-AIMS II)—a joint services system with the goal of helping to manage the movement of forces and equipment within the United States and abroad—in the context of a DOD enterprise architecture. We also reported that the Army had not economically justified the program on the basis of reliable estimates of life cycle costs and benefits and had not effectively implemented risk management. As a result, we concluded that the Army did not know if its investment in TC-AIMS II, as planned, is warranted or represents a prudent use of limited DOD resources. Accordingly, we recommended that DOD, among other things, develop the analytical basis needed to determine if continued investment in TC-AIMS II, as planned, represents prudent use of limited defense resources. In response, the department largely agreed with our recommendations and has since reduced the program’s scope by canceling planned investments. 
DOD IT-related acquisition policies and guidance, along with other relevant guidance, provide an acquisition management control framework within which to manage business system programs like GCSS-MC. Effective implementation of this framework can minimize program risks and better ensure that system investments are defined in a way to optimally support mission operations and performance, as well as deliver promised system capabilities and benefits on time and within budget. Thus far, GCSS-MC has not been managed in accordance with key aspects of this framework, which has already contributed to more than 3 years in program schedule delays and about $193 million in cost increases. These IT acquisition management control weaknesses include compliance with DOD’s federated BEA not being sufficiently demonstrated; expected costs not being reliably estimated; earned value management not being adequately implemented; system requirements not always being effectively managed, although this has recently improved; key program risks not being effectively managed; and key system quality measures not being used. The reasons that these key practices have not been sufficiently executed include limitations in the applicable DOD guidance and tools, and not collecting relevant data, each of which is described in the applicable sections of this report. By not effectively implementing these key IT acquisition management controls, the program has already experienced sizeable schedule and cost increases, and it is at increased risk of (1) not being defined in a way that best meets corporate mission needs and enhances performance and (2) costing more and taking longer than necessary to complete.

DOD and federal guidance recognize the importance of investing in business systems within the context of an enterprise architecture. Moreover, the 2005 National Defense Authorization Act requires that defense business systems be compliant with DOD’s federated BEA. Our research and experience in reviewing federal agencies show that not making investments within the context of a well-defined enterprise architecture often results in systems that are duplicative, are not well integrated, are unnecessarily costly to interface and maintain, and do not optimally support mission outcomes. To its credit, the program office has followed DOD’s BEA compliance guidance. However, this guidance does not adequately provide for addressing all relevant aspects of BEA compliance. Moreover, DON’s enterprise architecture, which is a major component of DOD’s federated BEA, as well as key aspects of DOD’s corporate BEA, have yet to be sufficiently defined to permit thorough compliance determinations. In addition, current policies and guidance do not require DON investments to comply with its enterprise architecture. This means that the department does not have a sufficient basis for knowing if GCSS-MC has been defined to optimize DON and DOD business operations. Each of these architecture alignment limitations is discussed as follows:

The program’s compliance assessments did not include all relevant architecture products. In particular, the program did not assess compliance with the BEA’s technical standards profile, which outlines, for example, the standards governing how systems physically communicate with other systems and how they secure data from unauthorized access. This is particularly important because systems, like GCSS-MC, need to employ common standards in order to effectively and efficiently share information with other systems.
A case in point is GCSS-MC and the Navy Enterprise Resource Planning program. Specifically, GCSS-MC has identified 13 technical standards that are not in the BEA technical standards profile, and Navy Enterprise Resource Planning has identified 25 technical standards that are not in the profile. Of these, some relate to networking protocols, which could limit information sharing between these and other systems.

In addition, the program office did not assess compliance with the BEA products that describe system characteristics. This is important because doing so would create a body of information about programs that could be used to identify common system components and services that could potentially be shared by the programs, thus avoiding wasteful duplication. For example, our analysis of GCSS-MC program documentation shows that it contains such system functions as receiving goods, taking physical inventories, and returning goods, which are also system functions cited by the Navy Enterprise Resource Planning program. However, because compliance with the BEA system products was not assessed, the extent to which these functions are potentially duplicative was not considered.

Furthermore, the program office did not assess compliance with BEA system products that describe data exchanges among systems. As we previously reported, establishing and using standard system interfaces is a critical enabler to sharing data. For example, GCSS-MC program documentation indicates that it is to exchange order and status data with other systems. However, the program office has not fully developed its architecture product describing these exchanges and thus does not have the basis for understanding how its approach to exchanging information differs from that of other systems that it is to interface with. Compliance against each of these BEA products was not assessed because DOD’s compliance guidance does not provide for doing so and, according to BTA and program officials, some BEA and program-level architecture products are not sufficiently defined. According to these officials, BTA plans to continue to define these products as the BEA evolves.

The compliance assessment was not used to identify potential areas of duplication across programs, which DOD has stated is an explicit goal of its federated BEA and associated investment review and decision-making processes. More specifically, even though the compliance guidance provides for assessing programs’ compliance with the BEA product that defines DOD operational activities, and GCSS-MC was assessed for compliance with this product, the results were not used to identify programs that support the same operational activities and related business processes. Given that the federated BEA is intended to identify and avoid not only duplications within DOD components, but also between DOD components, it is important that such commonality be addressed. For example, program-level architecture products for GCSS-MC and Navy Enterprise Resource Planning, as well as two Air Force programs (Defense Enterprise Accounting and Management System-Air Force and the Air Force Expeditionary Combat Support System), show that each supports at least six of the same BEA operational activities (e.g., conducting physical inventory and delivering property and services), and three of these four programs support at least 18 additional operational activities (e.g., performing budgeting and managing receipt and acceptance). As a result, these programs may be investing in duplicative functionality.
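The cross-program analysis described above amounts to intersecting the sets of BEA operational activities that each program's architecture products map to. A minimal sketch in Python, using hypothetical activity lists in place of the programs' actual compliance assessment data:

```python
from itertools import combinations

# Hypothetical mappings of programs to BEA operational activities; a real
# analysis would draw these from each program's compliance assessment products.
programs = {
    "GCSS-MC": {"Conduct Physical Inventory", "Deliver Property and Services",
                "Perform Budgeting", "Manage Receipt and Acceptance"},
    "Navy ERP": {"Conduct Physical Inventory", "Deliver Property and Services",
                 "Perform Budgeting"},
    "AF ECSS": {"Deliver Property and Services", "Manage Receipt and Acceptance"},
}

# Flag every pair of programs that supports the same operational activities,
# i.e., candidates for potentially duplicative functionality.
for (a, acts_a), (b, acts_b) in combinations(programs.items(), 2):
    shared = acts_a & acts_b
    if shared:
        print(f"{a} / {b}: {len(shared)} shared: {sorted(shared)}")
```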
Reasons for not doing so were that compliance guidance does not provide for such analyses to be conducted, and programs have not been granted access rights to use this functionality.

The program’s compliance assessment did not address compliance with DON’s enterprise architecture, which is one of the largest members of the federated BEA. This is particularly important given that DOD’s approach to fully satisfying the architecture requirements of the 2005 National Defense Authorization Act is to develop and use a federated architecture in which component architectures are to provide the additional details needed to supplement the thin layer of corporate policies, rules, and standards included in the corporate BEA. As we recently reported, the DON’s enterprise architecture is not mature because, among other things, it is missing a sufficient description of its current and future environments in terms of business and information/data. However, certain aspects of an architecture nevertheless exist and, according to DON, these aspects will be leveraged in its efforts to develop a complete enterprise architecture. For example, the FORCEnet architecture documents DON’s technical infrastructure. Therefore, opportunities exist for DON to assess its programs in relation to these architecture products and to understand where its programs are exposed to risks because products do not exist, are not mature, or are at odds with other DON programs. According to DOD officials, compliance with the DON architecture was not assessed because DOD compliance policy is limited to compliance with the corporate BEA, and the DON enterprise architecture has yet to be sufficiently developed.

The program’s compliance assessment was not validated by DOD or DON investment oversight and decision-making authorities. More specifically, neither the DOD IRBs nor the DBSMC, nor the BTA in supporting both of these investment oversight and decision-making authorities, reviewed the program’s assessments. According to BTA officials, under DOD’s tiered approach to investment accountability, these entities are not responsible for validating programs’ compliance assessments. Rather, this is a component responsibility, and thus they rely on the military departments and defense agencies to validate the assessments. However, the DON Office of the CIO, which is responsible for precertifying investments as compliant before they are reviewed by the IRB, did not evaluate any of the programs’ compliance assessments. According to CIO officials, they rely on Functional Area Managers to validate a program’s compliance assessments. However, no DON policy or guidance exists that describes how the Functional Area Managers should conduct such validations. Validation of program assessments is further complicated by the absence of information captured in the assessment tool about what program documentation or other source materials were used by the program office in making its compliance determinations. Specifically, the tool is only configured, and thus was only used, to capture the results of a program’s comparison of program architecture products to BEA products; it was not used to capture the system products used in making these determinations.

In addition, the program office did not develop certain program-level architecture products that are needed to support and validate the program’s compliance assessment and assertions.
According to the compliance guidance, program-level architecture products, such as those defining information exchanges and system data requirements, are not required to be used until after the system has been deployed. This is important because waiting until the system is deployed is too late to avoid the costly rework required to address areas of noncompliance. Moreover, it is not consistent with other DOD guidance, which states that program-level architecture products that describe, for example, information exchanges, should be developed before a program begins system development.

The limitations in existing BEA compliance-related policy and guidance, the supporting compliance assessment tool, and the federated BEA put programs like GCSS-MC at increased risk of being defined and implemented in a way that does not sufficiently ensure interoperability and avoid duplication and overlap. We currently have a review under way for the Senate Armed Services Committee, Subcommittee on Readiness and Management Support, which is examining multiple programs’ compliance with the federated BEA.

The investment in the first increment of GCSS-MC has not been economically justified on the basis of reliable analyses of estimated system costs over the life of the program. According to the program’s economic analysis, the first increment will have an estimated life cycle cost of about $442 million and deliver an estimated $1.04 billion in risk-adjusted estimated benefits during this same life cycle. This equates to a net present value of about $688 million. While the most recent cost estimate was derived using some effective estimating practices, it did not make use of other practices that are essential to having an accurate and credible estimate. As a result, the Marine Corps does not have a sufficient basis for deciding whether GCSS-MC, as defined, is the most cost-effective solution to meeting its mission needs, and it does not have a reliable basis against which to measure cost performance.

A reliable cost estimate is critical to the success of any IT program, as it provides the basis for informed investment decision making, realistic budget formulation and program resourcing, meaningful progress measurement, proactive course correction, and accountability for results. According to the Office of Management and Budget (OMB), programs must maintain current and well-documented cost estimates, and these estimates must encompass the full life cycle of the program. OMB states that generating reliable cost estimates is a critical function necessary to support OMB’s capital programming process. Without reliable estimates, programs are at increased risk of experiencing cost overruns, missed deadlines, and performance shortfalls. Our research has identified a number of practices for effective program cost estimating. We have issued guidance that associates these practices with four characteristics of a reliable cost estimate. These four characteristics are specifically defined as follows:

Comprehensive: The cost estimates should include both government and contractor costs over the program’s full life cycle, from the inception of the program through design, development, deployment, and operation and maintenance, to retirement. They should also provide a level of detail appropriate to ensure that cost elements are neither omitted nor double counted and include documentation of all cost-influencing ground rules and assumptions.
Well-documented: The cost estimates should have clearly defined purposes and be supported by documented descriptions of key program or system characteristics (e.g., relationships with other systems, performance parameters). Additionally, they should capture in writing such things as the source data used and their significance, the calculations performed and their results, and the rationale for choosing a particular estimating method or reference. Moreover, this information should be captured in such a way that the data used to derive the estimate can be traced back to, and verified against, their sources. The final cost estimate should be reviewed and accepted by management on the basis of confidence in the estimating process and the estimate produced by the process.

Accurate: The cost estimates should provide for results that are unbiased and should not be overly conservative or optimistic (i.e., should represent the most likely costs). In addition, the estimates should be updated regularly to reflect material changes in the program, and steps should be taken to minimize mathematical mistakes and their significance. The estimates should also be grounded in a historical record of cost estimating and actual experiences on comparable programs.

Credible: The cost estimates should discuss any limitations in the analysis performed that are due to uncertainty or biases surrounding data or assumptions. Further, the estimates’ derivation should provide for varying any major assumptions and recalculating outcomes based on sensitivity analyses, and the estimates’ associated risks and inherent uncertainty should be disclosed. Also, the estimates should be verified based on cross-checks using other estimating methods and by comparing the results with independent cost estimates.

The $442 million life cycle cost estimate for the first increment reflects many of the practices associated with a reliable cost estimate, including all practices associated with being comprehensive and well-documented, and several related to being accurate and credible. (See table 4.) However, several important accuracy and credibility practices were not satisfied.

The cost estimate is comprehensive because it includes both the government and contractor costs specific to development, acquisition (nondevelopment), implementation, and operations and support over the program’s 12-year life cycle. Moreover, the estimate clearly describes how the various subelements are summed to produce the amounts for each cost category, thereby ensuring that all pertinent costs are included, and no costs are double counted. Lastly, cost-influencing ground rules and assumptions, such as the program’s schedule, labor rates, and inflation rates, are documented.

The cost estimate is also well-documented in that the purpose of the cost estimate was clearly defined, and a technical baseline has been documented that includes, among other things, the relationships with other systems and planned performance parameters. Furthermore, the calculations and results used to derive the estimate are documented, including descriptions of the methodologies used and traceability back to source data (e.g., vendor quotes, salary tables). Also, the cost estimate was reviewed both by the Naval Center for Cost Analysis and the Office of the Secretary of Defense, Director for Program Analysis and Evaluation, which ensures a level of confidence in the estimating process and the estimate produced.
However, the estimate lacks accuracy because not all important practices related to this characteristic were satisfied. Specifically, while the estimate is grounded in documented assumptions (e.g., hardware refreshment every 5 years), and periodically updated to reflect changes to the program, it is not grounded in historical experience with comparable programs. As stated in our guide, estimates should be based on historical records of cost and schedule estimates from comparable programs, and such historical data should be maintained and used for evaluation purposes and future estimates on other comparable programs. The importance of doing so is evident from the fact that GCSS-MC’s cost estimate has increased by about $193 million since July 2005, which program officials attributed to, among other things, schedule delays, software development complexity, and the lack of historical data from similar ERP programs. While the program office did leverage historical cost data from other ERP programs, including the Navy’s Enterprise Resource Planning Pilot programs and programs at the Bureau of Prisons and the Department of Agriculture, program officials told us that these programs’ scopes were not comparable. For example, none of the programs had to utilize a communication architecture as complex as the Marine Corps’, which officials cited as a significant factor in the cost increases and a challenge in estimating costs.

The absence of analogous cost data for large-scale ERP programs is due in part to the fact that DOD has not established a standardized cost element structure for ERP programs that can be used to capture actual cost data. According to officials with the Defense Cost and Resource Center, such cost element structures are needed, along with a requirement for programs to report on their costs, but approval and resources have yet to be gained for either these structures or the reporting of their costs. Until a standardized data structure exists, programs like GCSS-MC will continue to lack a historical database containing cost estimates and actual cost experiences of comparable ERP programs. This means that the current and future GCSS-MC cost estimates will lack sufficient accuracy for effective investment decision making and performance measurement.

Compounding the estimate’s limited accuracy are limitations in its credibility. Specifically, while the estimate satisfies some of the key practices for a credible cost estimate (e.g., confirming key cost drivers, performing sensitivity analyses, having an independent cost estimate prepared by the Naval Center for Cost Analysis that was within 4 percent of the program’s estimate, and conducting a risk analysis that showed a range of estimated costs of $411 million to $523 million), no risk analysis was performed to determine the program schedule’s risks and associated impact on the cost estimate. As described earlier in this report, the program has experienced about 3 years in schedule delays and recently experienced delays in completing the solution design phase. Therefore, conducting a schedule risk analysis and using the results to assess the variability in the cost estimate is critical for ensuring a credible cost estimate. Program officials agreed that the program’s schedule is aggressive and risky and that this risk was not assessed in determining the cost estimate’s variability.
Without doing so, the program’s cost estimate is not credible, and thus the program is at risk of cost overruns as a result of schedule delays.

Forecasting expected benefits over the life of a program is also a key aspect of economically justifying an investment. OMB guidance advocates economically justifying investments on the basis of net present value. If net present value is positive, then the corresponding benefit-to-cost ratio will be greater than 1 (and vice versa). This guidance also advocates updating the analyses over the life of the program to reflect material changes in expected benefits, costs, and risks. Since estimates of benefits can be uncertain because of the imprecision in both the underlying data and modeling assumptions used, the effects of this uncertainty should be analyzed and reported. By doing this, informed investment decision making can occur through the life of the program, and a baseline can be established against which to compare the accrual of actual benefits from deployed system capabilities.

The original benefit estimate for the first increment was based on questionable assumptions and insufficient data from comparable programs. The most recent economic analysis, dated January 2007, includes monetized, yearly benefit estimates for fiscal years 2010–2019 in three key areas—inventory reductions, reductions in inventory carrying costs, and improvements in maintenance processes. Collectively, these benefits totaled about $2.89 billion (not risk-adjusted). However, these calculations were made using questionable assumptions and limited data. For example:

The total value of the Marine Corps inventory needed to calculate inventory reductions and reductions in carrying costs could not be determined because of limitations with existing logistic systems.

The cost savings resulting from improvements in maintenance processes were calculated based on assumptions from an ERP implementation in the commercial sector that, according to program officials, is not comparable in scope to GCSS-MC.

To account for the uncertainty inherent in the benefits estimate, the program office performed a Monte Carlo simulation. According to the program office, this risk analysis generated a discounted and risk-adjusted benefits estimate of $1.04 billion. As a result of the $1.85 billion adjustment to estimated benefits, the program office has a more realistic benefit baseline against which to compare the accrual of actual benefits from deployed system capabilities.

The program office has elected to implement EVM, which is a proven means for measuring program progress and thereby identifying potential cost overruns and schedule delays early, when they can be minimized. In doing so, it has adopted a tailored EVM approach that focuses on schedule. However, this schedule-focused approach has not been effectively implemented because it is based on a baseline schedule that was not derived using key schedule estimating practices. According to program officials, the schedule was driven by an aggressive program completion date established in response to direction from oversight entities to complete the program as soon as possible. As a result, they said that following these practices would have delayed this completion date. Regardless, this means that the schedule baseline is not reliable, and progress will likely not track to the schedule. The program office has adopted a tailored approach to performing EVM because of the contract type being used.
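For context, conventional EVM derives schedule and cost variances from three standard quantities; a minimal sketch of the textbook calculations, using illustrative numbers only (the program's schedule-focused approach forgoes the cost side of these measures):

```python
# Textbook earned value quantities for one reporting period (illustrative
# values, in millions of dollars).
pv = 10.0  # planned value: budgeted cost of work scheduled
ev = 8.0   # earned value: budgeted cost of work actually performed
ac = 9.5   # actual cost of work performed

sv = ev - pv    # schedule variance: negative means behind schedule
cv = ev - ac    # cost variance: negative means over cost
spi = ev / pv   # schedule performance index (below 1.0 is unfavorable)
cpi = ev / ac   # cost performance index (below 1.0 is unfavorable)

print(f"SV={sv:+.1f}, CV={cv:+.1f}, SPI={spi:.2f}, CPI={cpi:.2f}")
```

A schedule-only tailored approach, by contrast, measures progress solely against the dates in the integrated master schedule, which is why the reliability of that schedule is decisive.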
As noted earlier, the contract types associated with GCSS-MC integration and implementation vary, and include, for example, firm-fixed-price contracts and time-and-materials contracts. Under a firm-fixed-price contract, the price is not subject to any adjustment on the basis of the contractor’s cost experience in performing the contract. For a time-and-materials contract, supplies or services are acquired on the basis of (1) an undefined number of direct labor hours that are paid at specified fixed hourly rates and (2) actual cost for materials. According to DOD guidance, EVM is generally not encouraged for firm-fixed-price, level of effort, and time-and-material contracts. In these situations, the guidance states that programs can use a tailored EVM approach in which an integrated master schedule (IMS) is exclusively used to provide visibility into program performance. DON has chosen to implement this tailored EVM approach on GCSS-MC. In doing so, it is measuring progress against schedule commitments, and not cost commitments, using an IMS for each program phase. According to program officials, the IMS describes and guides the execution of program activities. Regardless of the approach used, effective implementation depends on having a reliable IMS.

The success of any program depends in part on having a reliable schedule specifying when the program’s set of work activities will occur, how long they will take, and how they are related to one another. As such, the schedule not only provides a road map for the systematic execution of a program, but it also provides the means by which to gauge progress, identify and address potential problems, and promote accountability. Our research has identified nine practices associated with effective schedule estimating. These practices are (1) capturing key activities, (2) sequencing key activities, (3) assigning resources to key activities, (4) integrating key activities horizontally and vertically, (5) establishing the duration of key activities, (6) establishing the critical path for key activities, (7) identifying “float time” between key activities, (8) distributing reserves to high-risk activities, and (9) performing a schedule risk analysis.

The current IMS for the solution design and transition-to-build phase of the first increment was developed using some of these practices. However, it does not reflect several practices that are fundamental to having a schedule baseline that provides a sufficiently reliable basis for measuring progress and forecasting slippages. To the program office’s credit, its IMS captures and sequences key activities required to complete the project, integrates the tasks horizontally, and identifies the program’s critical path. However, the program office is not monitoring the actual durations of scheduled activities, and thus it cannot address the impact of any deviations on later scheduled activities. Moreover, the schedule does not adequately identify the resources needed to complete the tasks and is not integrated vertically, meaning that multiple teams executing different aspects of the program cannot effectively work to the same master schedule. Further, the IMS does not adequately mitigate schedule risk by identifying float time between key activities, introducing schedule reserve for high-risk activities, or including the results of a schedule risk analysis. See table 5 for the results of our analyses relative to each of the nine practices.
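Practices (6) and (7) reduce to a forward and backward pass over the activity network; a minimal sketch, assuming a small hypothetical set of activities rather than the program's actual IMS:

```python
# Hypothetical activity network: name -> (duration in days, predecessors).
# Names are listed in a valid topological order; a real scheduler would sort.
activities = {
    "design": (20, []),
    "build":  (30, ["design"]),
    "test":   (15, ["build"]),
    "train":  (10, ["design"]),
    "deploy": (5,  ["test", "train"]),
}

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for name, (dur, preds) in activities.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + dur

# Backward pass: latest finish (LF) and latest start (LS), then total float.
project_end = max(ef.values())
ls, lf = {}, {}
for name in reversed(list(activities)):
    succs = [s for s, (_, preds) in activities.items() if name in preds]
    lf[name] = min((ls[s] for s in succs), default=project_end)
    ls[name] = lf[name] - activities[name][0]

for name in activities:
    slack = ls[name] - es[name]  # float: slippage absorbable without delay
    tag = "on critical path" if slack == 0 else f"float={slack}d"
    print(f"{name:6s} ES={es[name]:2d} EF={ef[name]:2d} {tag}")
```

Activities with zero float form the critical path; any slippage there pushes out the completion date, which is why omitting float identification and schedule reserve leaves an aggressive schedule with no visible cushion.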
According to program officials, they intend to begin monitoring actual activity start and completion dates so that they can proactively adjust later scheduled activities that are affected by deviations. However, they do not plan to perform the three practices related to understanding and managing schedule risk because doing so would likely extend the program’s completion date, and they set this date to be responsive to direction from DOD and DON oversight entities to complete the program as soon as possible. In our view, not performing these practices means that the inherent risks in meeting this imposed completion date cannot be proactively understood and addressed. The consequence of omitting these practices is a schedule that does not provide a reliable basis for performing EVM.

Well-defined and managed requirements are recognized by DOD guidance as essential and can be viewed as a cornerstone of effective system acquisition. One aspect of effective requirements management is requirements traceability. By tracing requirements both backward from system requirements to higher level business or operational requirements and forward to system design specifications and test plans, the chances of the deployed product satisfying requirements are increased, and the ability to understand the impact of any requirement changes, and thus make informed decisions about such changes, is enhanced.

The program office recently strengthened its requirements traceability. In November 2007, and again in February 2008, the program office was unable to demonstrate for us that it could adequately trace its 1,375 system requirements to both design specifications and test documentation. Specifically, the program office was at that time using a tool called DOORS®, which, if implemented properly, allows each requirement to be linked from its most conceptual definition to its most detailed definition, as well as to design specifications and test cases. In effect, the tool maintains the linkages among requirement documents, design documents, and test cases even if requirements change. However, the system integration contractor was not using the tool. Instead, the contractor was submitting its 244 work products, accompanied by spreadsheets that linked each work product to one or more system requirements and test cases. The program office then had to verify and validate the spreadsheets and import and link each work product to the corresponding requirement and test case in DOORS. Because of the sheer number of requirements and work products and its potential to impact cost, schedule, and performance, the program designated this approach as a medium risk. It later closed the risk after the proposed mitigation strategy failed to mitigate it and the risk was realized as a high-priority program issue (i.e., an actual problem).

According to program officials, this requirements traceability approach resulted in time-consuming delays in approving the design work products and importing and establishing links between these products and the requirements in DOORS, in part because the work products were not accompanied by complete spreadsheets that established the traceability. As a result, about 30 percent of the contractor’s work products had yet to be validated, approved, and linked to requirements when the design phase was originally scheduled to be complete.
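The linkage that a tool such as DOORS maintains can be pictured as a small graph of backward and forward links. The following minimal sketch uses hypothetical requirement (SR-), work product (WP-), and test case (TC-) identifiers, not the program's actual data:

```python
# Hypothetical traceability links: system requirement -> design work
# products and test cases. A tool like DOORS maintains these links even
# as requirements change.
forward = {
    "SR-0012": {"designs": ["WP-044"], "tests": ["TC-201", "TC-202"]},
    "SR-0013": {"designs": ["WP-044", "WP-051"], "tests": ["TC-203"]},
}
# Backward links: each system requirement traces to a higher-level
# business or operational requirement.
backward = {"SR-0012": "BR-03", "SR-0013": "BR-03"}

def trace(req_id):
    """Trace one requirement backward and forward; flag broken links."""
    links = forward.get(req_id)
    parent = backward.get(req_id)
    if not links or not parent or not links["designs"] or not links["tests"]:
        return f"{req_id}: traceability INCOMPLETE"
    return (f"{req_id}: backward -> {parent}; "
            f"forward -> designs {links['designs']}, tests {links['tests']}")

for r in ("SR-0012", "SR-0013", "SR-0014"):  # SR-0014 is untraced
    print(trace(r))
```

The spreadsheet approach described above effectively rebuilt these links by hand for every work product delivery, which is what made validation so labor intensive.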
Officials stated that the contractor was not required to use DOORS because it was not experienced with this tool and becoming proficient with it would have required time and resources, thereby increasing both the program’s cost and schedule. Ironically, however, not investing the time and resources to address the limitations in the program’s traceability approach contributed to recent delays in completing the solution design activities, and additional resources had to be invested to address its requirements traceability problems. The program office now reports that it can trace requirements backward and forward. In April 2008, we verified this by tracing 60 out of 61 randomly sampled requirements backward to system requirements and forward to approved design specifications and test plans. Program officials explained that the reason that we could not trace the one requirement was that the related work products had not yet been approved. In addition, they stated that there were additional work products that had yet to be finalized and traced. Without adequate traceability, the risk of a system not performing as intended and requiring expensive rework is increased. To address its requirements traceability weakness, program officials told us that they now intend to require the contractor to use DOORS during the next phase of the program (build and test). If implemented effectively, the new process should address previous requirements traceability weaknesses and thereby avoid a repeat of past problems.

Proactively managing program risks is a key acquisition management control and, if done properly, can greatly increase the chances of programs delivering promised capabilities and benefits on time and within budget. To the program office’s credit, it has defined a risk management process that meets relevant guidance. However, it has not effectively implemented the process for all identified risks. As a result, these risks have become actual program problems that have impacted the program’s cost, schedule, and performance commitments.

DOD acquisition management guidance, as well as other relevant guidance, advocates identifying facts and circumstances that can increase the probability of an acquisition’s failing to meet cost, schedule, and performance commitments and then taking steps to reduce the probability of their occurrence and impact. In brief, effective risk management consists of (1) establishing a written plan for managing risks; (2) designating responsibility for risk management activities; (3) encouraging project-wide participation in the identification and mitigation of risks; (4) defining and implementing a process that provides for the identification, analysis, and mitigation of risks; and (5) examining the status of identified risks in program milestone reviews.

The program office has developed a written plan for managing risks and established a process that together provide for the above-cited risk management practices, and it has followed many key aspects of its plan and process. For example:

The Program Manager has been assigned overall responsibility for managing risks. Also, individuals have been assigned ownership of each risk, to include conducting risk analyses, implementing mitigation strategies, and working with the risk support team.
The plan and process encourage project-wide participation in the identification and mitigation of risks by allowing program staff to submit a risk for inclusion in a risk database and take ownership of the risk and the strategy for mitigating it. In addition, stakeholders can bring potential risks to the Program Manager’s attention through interviews, where potential risks are considered and evaluated.

The program office has thus far identified and categorized individual risks. As of December 2007, the risk database contained 27 active risks—2 high, 15 medium, and 10 low.

Program risks are considered during program milestone reviews. Specifically, our review of documentation for the Design Readiness Review, a key decision point during the system development and demonstration phase leading up to a Milestone C decision, showed that key risks were discussed. Furthermore, the Program Manager reviews program risks’ status through a risk watch list and bimonthly risk briefings.

However, the program office has not consistently followed other aspects of its process. For example, it did not perform key practices for identifying and managing schedule risks, such as conducting a schedule risk assessment and building reserve time into its schedule. In addition, mitigation steps for several key risks were either not performed in accordance with the risk management strategy, or risks that were closed as having been mitigated were later found to be actual program issues (i.e., problems). Of the 25 medium risks in the closed risk database as of February 2008, 4 were closed even though mitigation steps were not performed in accordance with the strategy, and these risks ultimately became actual issues. Examples from these medium risks are as follows:

In one case, the mitigation strategy was for the contractor to deliver certain design documents that were traced to system requirements and to do so before beginning the solution build phase. The design documents, however, were not received in accordance with the mitigation strategy. Specifically, program officials told us that the design documents contained inaccuracies or misinterpretations of the requirements and were not completed on time because of the lack of resources to correct these problems. As a result, the program experienced delays in completing its solution design activities.

In another case, the mitigation strategy included creating the documentation needed to execute the contract for monitoring the build phase activities. However, the mitigation steps were not performed due to, among other things, delays in approving the contractual approach. As a result, the risk became a high-priority issue in February 2008. According to a program issue report, the lack of a contract to monitor system development progress may result in unnecessary rework and thus additional program cost overruns, schedule delays, and performance shortfalls.

Four of the same 25 medium risks were retired because key mitigation steps for each one were implemented, but the strategies proved ineffective, and the risks became actual program issues. Included in these 4 risks were the following:

In one case, the program office closed a risk regarding data exchange with another DON system because key mitigation steps to establish exchange requirements were fully implemented. However, in February 2008, a high-priority issue was identified regarding the exchange of data with this system.
According to program officials, the risk was mitigated to the fullest extent possible and closed based on the understanding that continued evaluation of data exchange requirements would be needed. However, because the risk was retired, this evaluation did not occur.

In another case, a requirements management risk was closed on the basis of having implemented mitigation steps, which involved establishing a requirements management process, including having complete requirements traceability spreadsheets. In fact, several of the mitigation steps were not fully implemented, and the risk was instead closed on the basis of what program officials described as an understanding reached with the contractor regarding the requirements management process. Several months later, a high-priority issue concerning requirements traceability was identified because the program office discovered that the contractor was not adhering to the understanding.

Unless risk mitigation strategies are monitored to ensure that they are fully implemented and that they produce the intended outcomes, and additional mitigation steps are taken when they are not, the program office will continue to be challenged in preventing risks from developing into actual cost, schedule, and performance problems.

Effective management of programs like GCSS-MC depends in part on the ability to measure the quality of the system being acquired and implemented. Two measures of system quality are trends in (1) the number of unresolved severe system defects and (2) the number of unaddressed high-priority system change requests. GCSS-MC documentation recognizes the importance of monitoring such trends. Moreover, the program office has established processes for (1) collecting and tracking data on the status of program issues, including problems discovered during early test events, and (2) capturing data on the status of requests for changes to the system. However, its processes do not provide the full complement of data that are needed to generate a reliable and meaningful picture of trends in these areas. In particular, data on problems and change request priority levels and closure dates are either not captured or not consistently maintained. Further, program office oversight of contractor-identified issues or defects is limited. Program officials acknowledged these data limitations, but they stated that oversight of contractor-identified issues is not their responsibility. Without tracking trends in key indicators, the program office cannot adequately understand and report to DOD decision makers whether GCSS-MC’s quality and stability are moving in the right direction.

Program guidance and related best practices encourage trend analysis and the reporting of system defects and program problems as measures or indicators of system quality and program maturity. As we have previously reported, these indicators include trends in the number of unresolved problems according to their significance or priority. To the program office’s credit, it collects and tracks what it calls program issues, which are problems identified by program office staff or the system integrator that are process, procedure, or management related. These issues are contained in the program’s Issues-Risk Management Information System (I-RMIS). Among other things, each issue in I-RMIS is to have an opened and closed date and an assigned priority level of high, medium, or low.
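Producing the trend indicators described above is straightforward once every record carries a priority and open/close dates; a minimal sketch with hypothetical issue records (missing closure dates, as noted below for MCITS, are exactly what breaks this calculation):

```python
from datetime import date

# Hypothetical issue records: (priority, opened, closed or None if open).
issues = [
    ("high",   date(2007, 11, 1), date(2008, 1, 15)),
    ("high",   date(2008, 1, 10), None),
    ("medium", date(2007, 12, 5), None),
    ("high",   date(2008, 2, 20), None),
]

def open_count(priority, as_of):
    """Count unresolved issues of a given priority as of a given date."""
    return sum(1 for p, opened, closed in issues
               if p == priority and opened <= as_of
               and (closed is None or closed > as_of))

# A rising count of unresolved high-priority issues is a warning sign.
for snapshot in (date(2007, 12, 1), date(2008, 1, 31), date(2008, 3, 1)):
    print(snapshot, "open high-priority issues:", open_count("high", snapshot))
```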
In addition, the integration contractor tracks issues that its staff identifies related to such areas as system test defects. These issues are contained in the contractor’s Marine Corps Issue Tracking System (MCITS). Each issue in MCITS is to have a date when it was opened and is to be assigned a priority on a scale of 1-5. According to program officials, the priority levels are based on guidance from the Institute of Electrical and Electronics Engineers (IEEE). (See table 6 for a description of each priority level.) However, neither I-RMIS nor MCITS contains all the data needed to reliably produce key measures or indicators of system quality and program maturity. Examples of these limitations are as follows:

For I-RMIS, the program office has not established a standard definition of the priority levels used. Rather, according to program officials, each issue owner is allowed to assign a priority based on the owner’s definition of what high, medium, and low mean. By not using standard priority definitions for categorizing issues, the program office cannot ensure that it has an accurate and useful understanding of the problems it is facing at any given time, and it will not know if it is addressing the highest priority issues first.

For MCITS, the integration contractor does not track closure dates for all issues. For example, as of April 2008, over 30 percent of the closed issues did not have closure dates. This is important because it limits the contractor’s ability to understand trends in the number of high-priority issues that are unresolved. Program officials acknowledged the need to have closure dates for all closed issues and stated that they intend to correct this. If it is not corrected, the program office will not be able to create a reliable measure of system quality and program maturity.

Compounding the above limitations in MCITS data is the program office’s decision not to use contractor-generated reports that are based on MCITS data. Specifically, reports summarizing MCITS issues are posted to a SharePoint site for the program office to review. However, program officials stated that they do not review these reports because the MCITS issues are not their responsibility, but the contractor’s. Yet without tracking and monitoring contractor-identified issues, which include such things as having the right skill sets and having the resources to track and monitor issues captured in separate databases, the program office is missing an opportunity to understand whether proactive action is needed to address emerging quality shortfalls in a timely manner.

Program guidance and related best practices encourage trend reporting of change requests as measures or indicators of system stability and quality. These indicators include trends in the number and priority of approved changes to the system’s baseline functional and performance capabilities that have yet to be resolved. To its credit, the program office collects and tracks changes to the system, which can range from minor or administrative changes to more significant changes that propose or impact important system functionality. These changes can be identified by either the program office or the contractor, and they are captured in a master change request spreadsheet. Further, the changes are to be prioritized according to the level described in table 7, and the dates that change requests are opened and closed are to be recorded.
However, the change request master spreadsheet does not contain the data needed to reliably produce key measures or indicators of system stability and quality. Examples of these limitations are as follows:

The program office has not prioritized proposed changes or managed these changes according to their priorities. For example, of the 572 change requests as of April 2008, 171 were assigned a priority level, and 401 were not. Of these 171, 132 were categorized as priority 1. Since then, the program office has temporarily recategorized the 401 change requests to priority 3 until each one’s priority can be evaluated. The program office has yet to establish a time frame for doing so.

The dates that change requests are resolved are not captured in the master spreadsheet. Rather, program officials said that these dates are in the program’s IMS and are shown there as target implementation dates. While the IMS does include the dates changes will be implemented, these dates are not actual dates, and they are not used to establish trends in unresolved change requests.

Without the full complement of data needed to monitor and measure change requests, the program office cannot know and disclose to DOD decision makers whether the quality and stability of the system are moving in the right direction.

DOD’s success in delivering large-scale business systems, such as GCSS-MC, is in large part determined by the extent to which it employs the kind of rigorous and disciplined IT management controls that are reflected in DOD policies and related guidance. While implementing these controls does not guarantee a successful program, it does minimize a program’s exposure to risk and thus the likelihood that it will fall short of expectations. In the case of GCSS-MC, living up to expectations is important because the program is large, complex, and critical to supporting the department’s warfighting mission. The department has not effectively implemented a number of essential IT management controls on GCSS-MC, which has already contributed to significant cost overruns and schedule delays, and has increased the program’s risk going forward of not delivering a cost-effective system solution and not meeting future cost, schedule, capability, and benefit commitments.

Moreover, GCSS-MC could be duplicating the functionality of related systems and may be challenged in interoperating with these systems because compliance with key aspects of DOD’s federated BEA has not been demonstrated. Also, the program’s estimated return on investment, and thus the economic basis for pursuing the proposed system solution, is uncertain because of limitations in how the program’s cost estimate was derived, raising questions as to whether the nature and level of future investment in the program need to be adjusted. In addition, the program’s schedule was not derived using several key schedule estimating practices, which impacts the integrity of the cost estimate and precludes effective implementation of EVM. Without effective EVM, the program cannot reliably gauge progress of the work being performed so that shortfalls can be known and addressed early, when they require less time and fewer resources to overcome. Another related indicator of progress, trends in system problems and change requests, also cannot be gauged because the data needed to do so are not being collected.
Collectively, these weaknesses have already helped to push back the completion of the program’s first increment by more than 3 years and added about $193 million in costs, and they are introducing a number of risks that, if not effectively managed, could further impact the program. However, whether these risks will be effectively managed is uncertain because the program has not always followed its defined risk management process and, as a result, has allowed yesterday’s potential problems to become today’s actual cost, schedule, and performance problems.

While the program office is primarily responsible for ensuring that effective IT management controls are implemented on GCSS-MC, other oversight and stakeholder organizations share some responsibility. In particular, even though the program office has not demonstrated its alignment with the federated BEA, it nevertheless followed established DOD architecture compliance guidance and used the related compliance assessment tool in assessing and asserting its compliance. The root cause for not demonstrating compliance thus is not traceable to the program office, but rather is due to, among other things, the compliance guidance and tool being limited, and the program’s oversight entities not validating the compliance assessment and assertion. Also, even though the program’s cost estimate was not informed by the cost experiences of other ERP programs of the same scope, the program office is not to blame because the root cause for this is that the Defense Cost and Resource Center has not maintained a standardized cost element structure for its ERP programs and a historical database of ERP program costs for programs like GCSS-MC to use. In contrast, other weaknesses are within the program office’s control, as evidenced by its positive actions to address the requirements traceability shortcomings that we brought to its attention during the course of our work and its well-defined risk management process. All told, this means that addressing the GCSS-MC IT management control weaknesses requires the combined efforts of the various DOD organizations that share responsibility for defining, justifying, managing, and overseeing the program. By doing so, the department can better assure itself that GCSS-MC will optimally support its mission operations and performance goals and will deliver promised capabilities and benefits, on time and within budget.

To ensure that each GCSS-MC system increment is economically justified on the basis of a full and reliable understanding of costs, benefits, and risks, we recommend that the Secretary of Defense direct the Secretary of the Navy to ensure that investment in the next acquisition phase of the program’s first increment is conditional upon fully disclosing to program oversight and approval entities the steps under way or planned to address each of the risks discussed in this report, including the risks of not being architecturally compliant and being duplicative of related programs, not producing expected mission benefits commensurate with reliably estimated costs, not effectively implementing EVM, not mitigating known program risks, and not knowing whether the system is becoming more or less mature and stable. We further recommend that investment in all future GCSS-MC increments be limited if the management control weaknesses that are the source of these risks, and which are discussed in this report, have not been fully addressed.
To address each of the IT management control weaknesses discussed in this report, we are also making a number of additional recommendations. However, we are not making recommendations for the architecture compliance weaknesses discussed in this report because we have a broader review of DON program compliance with the BEA and DON enterprise architecture that will be issued shortly and will contain appropriate recommendations.

To improve the accuracy of the GCSS-MC cost estimate, as well as other cost estimates for the department’s ERP programs, we recommend that the Secretary of Defense direct the appropriate organization within DOD to collaborate with relevant organizations to standardize the cost element structure for the department’s ERP programs, to use this standard structure to maintain cost data for its ERP programs, including GCSS-MC, and to use this cost data in developing future cost estimates.

To improve the credibility of the GCSS-MC cost estimate, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the program’s current economic analysis is adjusted to reflect the risks associated with its not reflecting cost data for comparable ERP programs and otherwise not having been derived according to other key cost estimating practices, and that future updates to the GCSS-MC economic analysis similarly do so.

To enhance GCSS-MC’s use of EVM, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the program office (1) monitors the actual start and completion dates of work activities performed so that the impact of deviations on downstream scheduled work can be proactively addressed; (2) allocates resources, such as labor hours and material, to all key activities on the schedule; (3) integrates key activities and supporting tasks and subtasks; (4) identifies and allocates the amount of float time needed for key activities to account for potential problems that might occur along or near the schedule’s critical path; (5) performs a schedule risk analysis to determine the level of confidence in meeting the program’s activities and completion date; (6) allocates schedule reserve for high-risk activities on the critical path; and (7) discloses the inherent risks and limitations associated with any future use of the program’s EVM reports until the schedule has been risk-adjusted.

To improve GCSS-MC management of program risks, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the program office (1) adds each of the risks discussed in this report to its active inventory of risks, (2) tracks and evaluates the implementation of mitigation plans for all risks, (3) discloses to appropriate program oversight and approval authorities whether mitigation plans have been fully executed and have produced the intended outcome(s), and (4) only closes a risk if its mitigation plan has been fully executed and produced the intended outcome(s).

To strengthen GCSS-MC system quality measurement, we recommend that the Secretary of Defense direct the Secretary of the Navy, through the appropriate chain of command, to ensure that the program office (1) collects the data needed to develop trends in unresolved system defects and change requests according to their priority and severity and (2) discloses these trends to appropriate program oversight and approval authorities.
In written comments on a draft of this report, signed by the Deputy Under Secretary of Defense (Business Transformation) and reprinted in appendix II, the department stated that it concurred with two of our recommendations and partially concurred with the remaining five. In general, the department partially concurred because it said that efforts were either under way or planned that will address some of the weaknesses that these recommendations are aimed at correcting. For example, the department stated that GCSS-MC will begin to use a recently developed risk assessment tool that is expected to assist programs in identifying and mitigating internal and external risks. Further, it said that these risks will be reported to appropriate department decision makers. We support the efforts that DOD described in its comments because they are generally consistent with the intent of our recommendations and believe that if they are fully and properly implemented, they will go a long way in addressing the management control weaknesses that our recommendations are aimed at correcting. In addition, we have made a slight modification to one of these five recommendations to provide the department with greater flexibility in determining which organizations should provide for the recommendation’s implementation. We are sending copies of this report to interested congressional committees; the Director, Office of Management and Budget; the Congressional Budget Office; the Secretary of Defense; and the Department of Defense Office of the Inspector General. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3439 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Our objective was to determine whether the Department of the Navy is effectively implementing information technology management controls on the Global Combat Support System-Marine Corps (GCSS-MC). To accomplish this, we focused on the first increment of GCSS-MC relative to the following management areas: architectural alignment, economic justification, earned value management, requirements management, risk management, and system quality measurement. In doing so, we analyzed a range of program documentation, such as the acquisition strategy, program management plan, and Acquisition Program Baseline, and interviewed cognizant program officials. To determine whether GCSS-MC was aligned with the Department of Defense’s (DOD) federated business enterprise architecture (BEA), we reviewed the program’s BEA compliance assessments and system architecture products, as well as versions 4.0, 4.1, and 5.0 of the BEA and compared them with the BEA compliance requirements described in the Fiscal Year 2005 National Defense Authorization Act and DOD’s BEA compliance guidance and evaluated the extent to which the compliance assessments addressed all relevant BEA products. We also determined the extent to which the program-level architecture documentation supported the BEA compliance assessments. 
We obtained documentation, such as the BEA compliance assessments from the GCSS-MC and Navy Enterprise Resource Planning programs, as well as the Air Force's Defense Enterprise Accounting and Management System and Air Force Expeditionary Combat Support System programs. We then compared these assessments to identify potential redundancies or opportunities for reuse and determined whether the compliance assessments examined duplication across programs and whether the tool that supports these assessments is being used to identify such duplication. In doing so, we interviewed program officials and officials from the Department of the Navy, Office of the Chief Information Officer, and reviewed recent GAO reports to determine the extent to which the programs were assessed for compliance against the Department of the Navy enterprise architecture. We also interviewed program officials and officials from the Business Transformation Agency and the Department of the Navy, including the logistics Functional Area Manager, and obtained guidance documentation from these officials to determine the extent to which the compliance assessments were subject to oversight or validation. To determine whether the program had economically justified its investment in GCSS-MC, we reviewed the latest economic analysis to determine the basis for the cost and benefit estimates. This included evaluating the analysis against Office of Management and Budget guidance and GAO's Cost Assessment Guide. In doing so, we interviewed cognizant program officials, including the Program Manager and cost analysis team, regarding their respective roles, responsibilities, and actual efforts in developing and/or reviewing the economic analysis. We also interviewed officials at the Office of Program Analysis and Evaluation and the Naval Center for Cost Analysis about their respective roles, responsibilities, and actual efforts in developing and/or reviewing the economic analysis. To determine the extent to which the program had effectively implemented earned value management, we reviewed relevant documentation, such as the contractor's monthly status reports, Acquisition Program Baselines, and schedule estimates, and compared them with DOD policies and guidance. We also reviewed the program's schedule estimates and compared them with relevant best practices to determine the extent to which they reflect key estimating practices that are fundamental to having a reliable schedule. In doing so, we interviewed cognizant program officials to discuss their use of best practices in creating the program's current schedule. To determine the extent to which the program implemented requirements management, we reviewed relevant program documentation, such as the baseline list of requirements and system specifications, and evaluated it against relevant best practices to determine the extent to which the program has effectively managed the system's requirements and maintained traceability backward to high-level business operation and system requirements, and forward to system design specifications and test plans. To determine the extent to which the requirements were traceable, we randomly selected 61 program requirements and traced them both backward and forward. This sample was designed with a 5 percent tolerable error rate at the 95 percent level of confidence, so that, if we found 0 problems in our sample, we could conclude statistically that the error rate was less than 5 percent.
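The sample-size logic just described follows a standard zero-failure binomial argument. A minimal sketch of that calculation, assuming independent draws and the 5 percent parameters stated above (GAO's exact attribute-sampling computation may differ in detail):

```latex
% Probability of observing zero errors in a sample of n requirements
% when the true error rate is p:  Pr(0 errors) = (1 - p)^n.
% Requiring this probability to be at most alpha = 0.05 when p = 0.05:
\[
(1-p)^n \le \alpha
\quad\Longrightarrow\quad
n \ge \frac{\ln \alpha}{\ln(1-p)} = \frac{\ln 0.05}{\ln 0.95} \approx 58.4,
\]
% so a sample of 59 or more suffices; with n = 61, (0.95)^{61} \approx 0.044 < 0.05.
```

Under this design, finding zero failures among the 61 sampled requirements would support the stated conclusion; as discussed next, one requirement could not be verified, so the traceability conclusion also rested on the weight of other evidence.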
Based on the weight of all other factors included in our evaluation, our verification of 60 out of 61 requirements was sufficient to demonstrate traceability. In addition, we interviewed program officials involved in the requirements management process to discuss their roles and responsibilities for managing requirements. To determine the extent to which the program implemented risk management, we reviewed relevant risk management documentation, such as risk plans and risk database reports demonstrating the status of the program's major risks, and compared the program office's activities with DOD acquisition management guidance and related best practices. We also reviewed the program's mitigation process with respect to key risks, including 25 medium risks in the retired risk database that were actively addressed by the program office, to determine the extent to which these risks were effectively managed. In doing so, we interviewed responsible program officials, such as the Program Manager, Risk Manager, and subject matter experts, to discuss their roles and responsibilities and to obtain clarification on the program's approach to managing risks associated with acquiring and implementing GCSS-MC. To determine the extent to which the program is collecting the data and monitoring trends in the number of unresolved system defects and the number of unaddressed change requests, we reviewed program documentation, such as the testing strategy, configuration management policy, test defect reports, change request logs, and issue data logs. We compared the program's data collection and analysis practices in these areas with program guidance and best practices to determine the extent to which the program is measuring important aspects of system quality. We also interviewed program officials, such as system developers, relevant program management staff, and change control managers, to discuss their roles and responsibilities for system quality measurement. We conducted our work at DOD offices and contractor facilities in the Washington, D.C., metropolitan area, and Triangle, Va., from June 2007 to July 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. In addition to the individual named above, key contributors to this report were Neelaxi Lakhmani, Assistant Director; Monica Anatalio; Harold Brumm; Neil Doherty; Cheryl Dottermusch; Nancy Glover; Mustafa Hassan; Michael Holland; Ethan Iczkovitz; Anh Le; Josh Leiling; Emily Longcore; Lee McCracken; Madhav Panwar; Karen Richey; Melissa Schermerhorn; Karl Seifert; Sushmita Srikanth; Jonathan Ticehurst; Christy Tyson; and Adam Vodraska. | GAO has designated the Department of Defense's (DOD) business systems modernization as a high-risk program because, among other things, it has been challenged in implementing key information technology (IT) management controls on its thousands of business systems. The Global Combat Support System-Marine Corps program is one such system. Initiated in 2003, the program is to modernize the Marine Corps logistics systems. The first increment is to cost about $442 million and be deployed in fiscal year 2010.
GAO was asked to determine whether the Department of the Navy is effectively implementing IT management controls on this program. To accomplish this, GAO analyzed the program's implementation of several key IT management disciplines, including economic justification, earned value management, risk management, and system quality measurement. DOD has not effectively implemented key IT management controls provided for in DOD and related acquisition guidance on this program. If implemented effectively, these and other IT management disciplines increase the likelihood that a given system investment will produce the right solution to fill a mission need and that this system solution will be acquired and deployed in a manner that maximizes the chances of delivering promised system capabilities and benefits on time and within budget. Neither of these outcomes is being fully realized on this program, as evidenced by the fact that its first increment has already slipped more than 3 years and is expected to cost about $193 million more than envisioned. These slippages and cost overruns can be attributed in part to the management control weaknesses discussed in this report and summarized below. Moreover, additional slippages and overruns are likely if these and other IT management weaknesses are not addressed. Investment in the system has not been economically justified on the basis of reliable estimates of both benefits and costs. Specifically, while projected benefits were risk-adjusted to compensate for limited data and questionable assumptions, the cost side of the benefit/cost equation is not sufficiently reliable because it was not derived in accordance with key cost estimating practices. In particular, it was not based on historical data from similar programs and it did not account for schedule risks, both of which are needed for the estimate to be considered accurate and credible. Earned value management that the program uses to measure progress has not been adequately implemented. Specifically, the schedule baseline against which the program gauges progress is not based on key estimating practices provided for in federal guidance, such as assessing schedule risks and allocating schedule reserves to address these risks. As a result, program progress cannot be adequately measured, and likely program completion dates cannot be projected based on actual work performed. Some significant program risks have not been adequately managed. While a well-defined risk management plan and supporting process have been put in place, the process has not always been followed. Specifically, mitigation steps for significant risks either have not been implemented or proved ineffective, allowing the risks to become actual problems. The data needed to produce key indicators of system quality, such as trends in the volume of significant and unresolved problems and change requests, are not being collected. Without such data, it is unclear whether the system is becoming more or less mature and stable. The reasons for these weaknesses range from limitations of DOD guidance and tools, to not collecting relevant data. Until they are addressed, DOD is at risk of delivering a solution that does not cost-effectively support mission operations and falls short of cost, schedule, and capability expectations. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The term "day trading" has various definitions. In 1999, day trading was commonly described as a trading strategy that involved making multiple purchases and sales of the same securities throughout the day in an attempt to profit from short-term price movements. Since that time, the definition has evolved. For example, NASDR and NYSE use two definitions of day trading in the recent amendments to their margin rules. First, NYSE Rule 431(f)(8)(B)(I) and NASDR Rule 2520(f)(8)(b) generally define day trading as "the purchasing and selling or the selling and purchasing of the same security in the same day in a margin account." There are two exceptions to this definition: first, a long security position held overnight and sold the next day prior to any new purchase of the same security; and second, a short security position held overnight and purchased the next day prior to any new sale of the same security. Second, both NYSE and NASD define a "pattern" day trader as a customer who executes four or more day trades within 5 business days, unless the number of day trades does not exceed 6 percent of the customer's total trading activity for that period (a simple restatement of this counting test appears in the sketch below). Additionally, NASDR's rule on approval procedures for day trading accounts defines a day trading strategy as "an overall trading strategy characterized by the regular transmission by a customer of intra-day orders to effect both purchase and sale transactions in the same security or securities." In this report, we define day trading as consistently both buying and selling the same securities intraday via direct access technology to take advantage of short-term price movements. Day trading firms use sophisticated order routing and execution technology that allows traders to monitor and access the market on a real-time basis. This technology allows traders direct access to stock markets through Nasdaq Level II screens that display real-time best bid (buy) and ask (sell) quotes for any Nasdaq or over-the-counter security, including quotes between market makers trading for their own inventories. Day traders also conduct transactions through electronic communications networks (ECNs), which allow customers' orders to be displayed to other customers and allow customers' orders to be paired. As a result of this technology, day traders have the tools to trade from their own accounts without an intermediary such as a stock broker and can employ techniques that were previously available only to market makers and professional traders. Day trading firms register with SEC and become members of one of the SROs, such as NASD or the Philadelphia Stock Exchange; they are therefore subject to regulation by SEC and an SRO. As registered broker-dealers, day trading firms are required to comply with all pertinent federal securities laws and SRO rules. SROs generally examine every broker-dealer every one to four years, depending on the type of firm. Day trading firms are also subject to the securities laws and oversight of the states in which they are registered. In 1999, state and federal regulators began to identify concerns about certain day trading firms' activities. That year, state regulators examined and initiated disciplinary action against several day trading firms, identifying several areas of concern.
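As noted above, the pattern day trader definition reduces to a counting test over a 5-business-day window. The following is a minimal sketch of that test, assuming a pre-assembled list of trade records for the window; it illustrates the rule text as quoted in this report, not an official NASD or NYSE implementation.

```python
from dataclasses import dataclass

@dataclass
class Trade:
    is_day_trade: bool  # True if part of a same-day buy/sell round trip

def is_pattern_day_trader(window_trades: list[Trade]) -> bool:
    """Pattern day trader test as described above: four or more day trades
    within 5 business days, unless day trades do not exceed 6 percent of
    total trading activity for that period."""
    day_trades = sum(1 for t in window_trades if t.is_day_trade)
    if day_trades < 4:
        return False
    # Exception: day trades that are no more than 6 percent of all trades.
    return day_trades > 0.06 * len(window_trades)

# Example: 4 day trades out of 10 total trades in the window.
window = [Trade(is_day_trade=(i < 4)) for i in range(10)]
print(is_pattern_day_trader(window))  # True (4 day trades exceed 6% of 10 trades)
```

A customer with the same four day trades spread among 100 total trades would fall within the 6 percent exception and would not be classified as a pattern day trader.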
SEC completed an examination sweep of 47 day trading firms in 1999 and subsequently issued a report. According to SEC's report, the examinations did not reveal widespread fraud, but examiners found indications of serious violations of securities laws related to net capital, margin, and customer lending. However, most of the examinations revealed less serious violations, and the report concluded that many firms needed to take steps to improve their compliance with net capital, short sale, and supervision rules. NASDR also initiated a series of examinations of day trading firms that focused on the firms' advertising and risk disclosures, among other areas. SEC and NASDR also initiated several enforcement actions against day trading firms and individuals in early 2000. In our 2000 report, we found that day trading among less-experienced traders was an evolving segment of the securities industry. Day traders represented less than one-tenth of 1 percent, or about 1 out of 1,000, of all individuals who bought or sold securities. However, day trading was estimated by some to account for about 15 percent of Nasdaq's trading volume. Although no firm estimates exist for the number of active day traders, many regulatory and industry officials we spoke with generally thought 5,000 was a reasonable estimate and believed the number was stable or had gone down slightly. However, the number of open accounts at day trading firms is likely much higher. We also noted in our 2000 report that before 1997, day traders submitted most of their orders through the Small Order Execution System (SOES). We concluded that the effects of day trading in an environment that depends less on SOES and more on ECNs were uncertain. Because of these findings and our work in this area, we recommended that, once decimal trading was implemented, SEC evaluate the implications of day traders' growing use of ECNs for the integrity of the markets. We also recommended that SEC do an additional cycle of targeted examinations of day trading firms to ensure that the firms take the necessary corrective actions proposed in response to previous examination findings. Concerns about day trading culminated in hearings before the Permanent Subcommittee on February 24 and 25, 2000, and the ultimate issuance of a report by the Permanent Subcommittee in July 2000. The Permanent Subcommittee expressed its concerns about certain industry practices at the hearing and made several recommendations in its subsequent report. In general, the recommendations suggested changes to NASDR's proposed disclosure rules and margin rule amendments and summarized comments Permanent Subcommittee Members had submitted to SEC when those rules were published for comment in the Federal Register. In addition, the Permanent Subcommittee recommended that NASDR prohibit firms from arranging loans between customers to meet margin requirements and that firms be required to develop policies to ensure that individual day traders acting as investment advisors are properly registered. Since 1999, day traders as a group and firms that offer day trading capability have continued to evolve. Most regulators and industry officials we spoke with said that day traders are generally more experienced and that fewer customers are quitting their jobs to become day traders. We also found that many day trading firms now market to institutional customers, such as hedge funds and money market managers, rather than focusing on retail customers.
In addition, more day trading firms are likely to engage in proprietary trading through professional traders who trade the firms' capital rather than their own and earn a percentage of the profits. Finally, we found that traditional and on-line brokers and other entities that want to offer their customers direct access to securities markets are acquiring day trading firms. A concern raised in 1999 was that day trading firms were marketing to inexperienced traders who did not fully understand the risks of day trading and therefore lost substantial amounts of money. Some industry and regulatory officials said that the combination of intense regulatory scrutiny and adverse market conditions in late 2000 and into 2001 has driven many unsophisticated traders out of day trading. Traders currently engaged in day trading are more likely to be experienced and to have a greater knowledge of the risks involved than traders in 1999. Industry officials said that many traders gained their experience by day trading for several years, while others were professional traders who became day traders. During our first review, regulatory and government officials were particularly concerned that day trading firms were attracting customers who were ill-suited for day trading because they lacked either the capital or the knowledge to engage in such a risky activity. Since 1999, day trading firms have begun to focus on institutional as well as retail customers, including hedge funds and small investment management companies. According to press reports, All-Tech Direct, Inc., a day trading firm, announced in August 2001 that it planned to get out of the retail business completely and was severing its relationship with all of its retail branches. Overall, institutional investors are increasingly interested in the kind of high-speed order execution that day traders get from direct access systems and the relatively low fees day traders pay to execute trades. In addition, some day trading firms that focused solely on retail customers in 1999 have since hired professional traders who trade the firms' capital (proprietary traders). For some, this move reflects a departure from their retail customer focus. A few officials said many of their retail customers started as proprietary traders and learned to trade by using the firm's capital rather than their own. Another change involves the growing number of day trading firms being acquired by other brokerages and by market participants that want the direct access technology. For example, since 1999 on-line brokers Charles Schwab and Ameritrade have purchased CyberCorp (later renamed CyberTrader) and Tradecast, respectively. Likewise, in August 2001 T.D. Waterhouse Group Inc. announced plans to purchase one of the smaller day trading firms, R.J. Thompson Holdings. In addition, Instinet, an ECN, purchased ProTrader as a way to offer direct access technology to its customers. Moreover, financial conglomerates are moving toward offering fully integrated services, which include all aspects of a securities purchase, from direct access to securities markets to clearing capabilities. In September 2000, Goldman Sachs announced its planned acquisition of Spear Leeds & Kellogg, which offers such fully integrated services. Other firms with fully integrated capabilities include on-line brokerages such as Ameritrade and Datek, as well as an ECN, Instinet.
Some regulatory and industry officials said that they expect traditional and discount brokerages to continue to acquire day trading firms, as these brokerages face increased pressure to provide direct market access to their more active traders (estimated at between 50,000 and 75,000). Some analysts also said that the growing trend toward direct access has been driven not only by competitive pressure but also by SEC's new disclosure rules on order handling and trade execution, which require ECNs, market makers, and specialists to report execution data on order sizes, speed, and unfilled orders. In addition, by the end of November 2001, brokers are required to disclose the identity of the market centers to which they route a significant percentage of their orders and the nature of the broker's relationships with these market centers, including any payment for order flow. When brokers offer customers direct access to markets, the customer rather than the broker determines where trades are executed. Since our 2000 review, SEC and the SROs have taken various actions involving day trading activities. Specifically, NASDR has adopted rules that require firms to provide customers with a risk disclosure statement and to approve the customer's account for day trading. In addition, NASDR and NYSE have amended their margin rules for day traders to impose more restrictive requirements for pattern day traders. NASDR's margin rule amendments became effective on September 28, 2001, and NYSE's became effective on August 27, 2001. SEC and the SROs have also continued to monitor and examine day trading firms and their activities to ensure compliance with securities laws. Finally, SEC and NASDR have settled several pending enforcement cases involving day trading securities firms and their principals. In 2000 and 2001, the SROs adopted day trading rules related to improved risk disclosure and stricter margin requirements. On July 10, 2000, SEC approved NASDR Rule 2360, Approval Procedures for Day-Trading Accounts, which requires firms that promote a day trading strategy either to (1) approve the customer's account for a day trading strategy or (2) obtain from the customer a written agreement that the customer does not intend to use the account for day trading purposes. SEC also approved NASDR Rule 2361, Day-Trading Risk Disclosure Statement, which requires firms that promote a day trading strategy to furnish a risk-disclosure statement that discusses the unique risks of day trading to customers prior to opening an account. The new rules became effective on October 16, 2000. NASDR Rule 2361 provides a disclosure statement that, among other things, warns investors that day trading can be risky and is generally not appropriate for someone with limited resources, little investment or trading experience, or low tolerance for risk (see table 1). The statement further maintains that evidence suggests that an investment of less than $50,000 significantly impairs the ability of a day trader to make a profit. The disclosure statement contained in NASDR Rule 2361 incorporated many of the recommendations the Permanent Subcommittee Members made in a comment letter to SEC and subsequently summarized in its July 27, 2000, report. The italicized text in table 1 generally represents the Permanent Subcommittee's recommended changes that NASDR adopted. Although many of the Permanent Subcommittee's recommendations were incorporated into the final disclosure statement, NASDR did not adopt all of them.
For example, NASDR did not directly adopt the Permanent Subcommittee's recommendations that firms presume that customers who open accounts with less than $50,000 are generally inappropriate candidates for day trading or that firms be required to prepare and maintain records setting forth the reasons why customers with less than $50,000 are considered appropriate for day trading. Instead, NASDR incorporated the Permanent Subcommittee's concern about the significance of the $50,000 threshold into the disclosure statement. NASDR decided not to directly incorporate these recommendations for several reasons. First, it believed that a $50,000 threshold might make sense for some investors but could be too high or too low for others. Second, NASDR was concerned that such a requirement could encourage investors to inflate the value of their assets. Lastly, NASDR's rule (as proposed) already required a firm to document the basis on which it approved an account for day trading. In February 2001, SEC approved substantially similar amendments to NASDR and NYSE rules proposing more restrictive margin requirements for day traders. Prior to the adoption of the NASDR and NYSE amendments, margin requirements were calculated on the basis of a customer's open positions at the end of the trading day. A day trader often has no open positions at the end of the day on which a margin calculation can be based. However, the day trader and the firm are at financial risk throughout the day if credit is extended. To address that risk, the NASDR and NYSE rule amendments require "pattern day traders" to demonstrate that they have the ability to meet a special maintenance margin requirement for at least their largest open position during the day. Customers who meet the definition of pattern day trader under the rules must generally deposit 25 percent of the largest open position into their accounts. Both rule amendments require customers who meet the definition of a pattern day trader to have minimum equity of $25,000 in their accounts. Funds deposited into these accounts to meet the minimum equity requirement must remain there for a minimum of 2 business days following the close of business on the day a deposit was required. In addition, the rule amendments permit day trading buying power of up to four times excess margin and impose a day trading margin call on customers who exceed their day trading buying power. Further, until the margin call is met, day trading accounts are restricted to day trading buying power of two times excess margin, calculated on the basis of the cost of all day trades made during the day. If the margin call is not met by the 5th business day, day traders are limited to trading on a cash-available basis for 90 days or until the call is met. Funds deposited in an account to meet a day trading margin call must also remain in the account for 2 business days. The rule amendments also prohibit cross-guarantees to meet day trading minimum equity requirements or day trading margin calls. These more stringent margin requirements respond to concerns raised about the risks day trading can pose to traders, firms, and securities markets in general. The amendments as finalized do not fully incorporate the Permanent Subcommittee's recommendation that the minimum equity requirement be raised from $2,000 to $50,000. Instead, SEC approved a $25,000 minimum.
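A worked illustration of this buying-power arithmetic may help. The sketch below applies the $25,000 minimum, the 4:1 day trading buying power, and the 2:1 restriction while a day trading margin call is outstanding; it simplifies by treating maintenance margin excess as given (for a trader with no overnight positions, this is roughly the account equity) and illustrates the rules as summarized in this report, not any broker's actual margin system.

```python
MIN_EQUITY = 25_000  # pattern day trader minimum equity under the amendments

def day_trading_buying_power(maintenance_excess: float,
                             unmet_margin_call: bool = False) -> float:
    """Illustrative buying-power calculation under the amended margin rules:
    up to 4x maintenance margin excess normally, and 2x while a day trading
    margin call remains unmet."""
    if maintenance_excess < MIN_EQUITY:
        return 0.0  # below the pattern day trader minimum equity
    multiplier = 2 if unmet_margin_call else 4
    return float(multiplier * maintenance_excess)

# A pattern day trader with $40,000 of excess margin and no overnight positions:
print(day_trading_buying_power(40_000))                          # 160000.0 (4:1)
print(day_trading_buying_power(40_000, unmet_margin_call=True))  # 80000.0 (2:1)
print(day_trading_buying_power(20_000))            # 0.0 (below the $25,000 minimum)
```

Exceeding the 4:1 figure triggers the day trading margin call described above, and failing to meet that call by the 5th business day limits the trader to a cash-available basis.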
NASDR believes that a $25,000 minimum equity requirement will provide “protection against continued losses in day trading accounts, while refraining from excessive restrictions on day traders with limited capital.” Moreover, both NASDR and NYSE said that broker-dealers have the option of increasing the minimum requirement based on their own policies and procedures. The Permanent Subcommittee also recommended that the margin ratio not be increased to four times excess equity from its previous level of two times. NASDR and NYSE disagreed with this proposed change, because allowing day traders to trade at a 4:1 ratio brings day trading accounts into parity with ordinary NASDR and NYSE maintenance margin account requirements, which are 25 percent, or 4:1. Moreover, officials said the change was appropriate when considered in conjunction with the other changes to the margin rules, such as the increased minimum equity requirement, the immediate consequences imposed if day trading buying power is exceeded, and the 2-day holding period for funds used to meet day trading margin requirements. The Permanent Subcommittee also recommended that NASDR propose a rule prohibiting firms from arranging loans between customers to meet margin calls. NASDR is continuing to review this issue but has not proposed rules that directly address firms’ involvement in arranging such loans. However, industry officials believe that the new margin rules indirectly address this issue because the amendments will make such lending arrangements less attractive to lenders. For example, as mentioned previously, funds deposited to meet a margin call must be left in a trader’s account for two full business days following the close of business on any day when a deposit is required, substantially increasing the risks to the lender. Previously, funds could be held in an account overnight to meet the margin call requirement. Consistent with our 2000 report recommendation, SEC has continued to examine the activities of day trading firms. Specifically, since SEC’s initial sweep of 47 day trading firms from October 1998 to September 1999 and subsequent report, SEC, NASDR, and Philadelphia Stock Exchange staff have conducted examinations of all the 133 day trading firms that were identified in 2000. In addition, SEC and the SROs have done follow-up examinations to determine whether the previous violations have been corrected. Moreover, NASDR officials said they prepared a special examination module for these follow-up examinations that focused on identified problem areas. According to SEC, in 2001 and 2002, SRO staff will continue to conduct routine examinations of existing day trading firms and of newly registered firms to determine compliance with applicable rules. For example, NASDR officials said that they are no longer prioritizing day trading firms for review; instead, these firms are now examined during the routine broker-dealer examination cycle or when they first register. As of August 2001, NASDR had completed about 62 such examinations. In addition, SEC said that it would continue to initiate cause examinations when appropriate. From late 1999 to early 2001, almost half of the day trading firm examinations completed by SEC were cause examinations. According to SEC and NASDR officials, day trading firms’ overall compliance with rules has improved since the 1999 sweep. 
Officials said that while the examinations revealed margin rule violations, short sale rule violations, misleading advertisements, and net capital deficiencies, these types of violations were occurring less frequently. SEC also identified violations of SRO and SEC rules related to supervision, maintenance of books and records, and the net capital calculation. SEC and NASDR officials said that net capital and supervision violations are not uncommon among broker-dealers in general. We reviewed 42 SEC and 62 NASDR examination reports completed between the end of the 1999 sweep and August 2001 that looked at broker-dealers and their branches offering day trading as a strategy. Overall, written supervisory procedure failures were the most frequent violation, followed by net capital rule miscalculations. Table 2 shows the number of examinations that included violations in each area. However, many of the violations cited in the examination reports were violations that are often cited at all types of broker-dealers and were not directly related to the firm's day trading activity, which in some cases was a small part of the firm's overall operation. Common supervisory procedure violations involved failure to have adequate written procedures that reflect the types of business in which the firm engages. For example, some broker-dealers had added day trading to their offered services but had not changed their written supervisory procedures to address this new activity. Other firms were cited for failure to follow their internal supervisory procedures. Many of the net capital rule violations involved calculation and reporting errors. Compared with the written supervisory procedure and net capital rule violations, fewer examinations had short sale, advertising, and margin and customer-lending rule violations. The short sale rule violations included failing to properly indicate trades as "short" (sale) or "long" (purchase), effecting short sales below the price at which the last sale was reported or on a zero-minus tick, and improperly marking short orders as long without first making an affirmative determination that the securities were in the trader's account or ready to be delivered prior to settlement. Although examiners continued to find some advertising violations involving omissions of fact and misstatements, many of the violations involved failure to properly maintain advertising files and other documentation requirements. For example, firms were cited for failure to document advertising approvals and make required submissions to NASDR. The customer lending and margin violations involved failure to secure additional funds to cover margin calls and allowing traders to trade when the Regulation T margin requirement had not been met. Numerous other deficiencies were also cited, including failure to inform customers who access SelectNet that NASD monitors trading activity and that the customers can be subject to prosecution for violations of securities laws, failure to properly register branches, and improper registration of traders. Of the SEC examinations reviewed, 34 resulted in deficiency or violation letters, 3 indicated that no violations had been found, and 7 resulted in a referral to an SRO or to SEC's Division of Enforcement.
Of the NASDR examinations we reviewed, 39 resulted in a letter of caution, 5 resulted in a compliance conference, 12 were filed without action, and at least 2 resulted in formal complaints or referrals to SEC or NASDR Enforcement. Since the enforcement actions announced in February 2000, NASDR and SEC have settled several disciplinary actions against day trading firms and their principals, imposing sanctions that included fines, civil money penalties, censures, and the expulsion of one firm from the business. SEC brought several enforcement actions related to day trading in June 2001. First, SEC instituted and settled proceedings against JPR Capital Corporation and several of the firm's current and former executives. SEC found that the firm had violated federal margin lending rules, among other things. All of the respondents to the proceedings consented to SEC's order without admitting or denying the allegations, agreed to pay civil money penalties, and consented to other relief. The firm was censured and ordered to pay a $55,000 civil penalty, to cease and desist from committing or causing any violations of specified laws and rules, and to comply with initiatives designed to improve its own compliance department. Second, SEC settled its previously instituted proceeding against All-Tech Direct, Inc. and certain of its employees for extending loans to customers in excess of limits allowed under federal margin rules. SEC censured All-Tech Direct and ordered the firm to cease and desist from committing or causing any violations of the federal margin lending rules, to pay a $225,000 civil penalty, and to retain an independent consultant selected by SEC to review and recommend improvements to All-Tech Direct's margin lending practices. As shown in table 3, NASDR also announced enforcement actions in June 2001 against six firms and several individuals that addressed violations of federal securities laws and NASDR rules in the following areas: advertising, registration, improper loans to customers, improper sharing of commissions, short sale rules, trade reporting, and deficient supervisory procedures. Without admitting or denying the allegations, the firms and individuals agreed to the sanctions, which included censures, the expulsion of one firm, suspensions, and fines against the firms and individuals ranging from $5,000 to $250,000. According to NASDR officials, these settlements resulted from violations that occurred in prior years. While any violation is a serious issue, regulatory officials said that many of these issues have been addressed and that compliance among day trading firms is generally improving. For example, NASDR officials said that they are seeing far fewer misleading advertisements than in 1999. In August 2001, All-Tech Direct also lost an arbitration proceeding involving allegations of misleading advertising. Four traders filed arbitration proceedings against All-Tech Direct for losses incurred in their day trading accounts. Although firm officials said that the traders lost money when they held open positions overnight—a practice day trading firms usually do not recommend—the arbitration panel ruled in favor of the plaintiffs and awarded them a total of over $456,000. All-Tech Direct officials said they plan to appeal the ruling. As mentioned previously, All-Tech Direct has announced plans to sever its relationship with all of its retail branches. In October 2001, All-Tech Direct filed the necessary paperwork to withdraw its registration as a broker-dealer.
In addition to the ongoing changes in day trading and in regulatory oversight of the activity, many day trading firms have responded to changing market conditions and regulatory scrutiny. According to some industry and regulatory officials, day trading firms are generally viewed as more knowledgeable and sophisticated in terms of regulatory compliance and management than they were in 1999. We found that most Web sites of day trading firms prominently highlighted the risks associated with day trading or provided easy-to-access risk disclosures or disclaimers. In addition, the sites focused on the speed of trade executions and lower fees rather than on profits. We interviewed officials from seven day trading firms and found that many of these firms no longer actively advertise for retail customers, relying instead on personal referrals. However, other day trading firms continue to advertise, and many allow customers to open an account online via their Web site. Day trading firms have adjusted the way they operate in response to changing market conditions and regulatory scrutiny. Firm management is generally viewed as more seasoned and sophisticated than it was in 1999. Industry officials said that in general most firms have matured and provide more vigorous oversight than in the past. In addition to the downturn in the securities markets, particularly in the technology sector, day traders and the firms in which they trade have had to adjust to certain market changes. The first of these was decimalization, which resulted in smaller spreads between bid and ask prices. Some industry officials said that the change has made it more difficult for day traders to make profits. As a result, these officials said that they have advised their traders to trade less frequently and in smaller lot sizes. The second change, the movement to SuperSoes and ultimately SuperMontage, is also expected to result in changes to how day traders operate. However, SuperMontage is not expected to be fully implemented until 2002. Given these ongoing changes in the markets, SEC has not yet evaluated the effect of day traders' growing use of ECNs on the integrity of the markets. Regulators and industry officials also said that firms now have more sophisticated monitoring systems, an area of concern identified by regulators in 1999. The firms we visited all had systems that allowed them to monitor the activity of each of their traders (retail and proprietary). In addition, many had set preestablished loss limits for traders. For example, one firm halted trading for customers who lost 30 percent of their equity in a single day. Further, some had systems that allowed them to prevent short sale violations by keeping traders from shorting ineligible stocks. These firms also had compliance departments that were responsible for monitoring the activities of the traders, and some provided regular reports to traders that detailed each trader's daily activity and positions. Consistent with the findings of SEC and the SROs, we found that the Web sites of firms identified as offering day trading services provided prominent, easy-to-find risk disclosures or disclaimers about day trading. Specifically, 122 of 133, or about 92 percent, of the Web sites we were able to access between July and November 2001 had risk disclosures or disclaimers. Many of the firms (and branches) used the NASDR risk disclosure statement or some similar variation. In addition, some provided links to SEC and NASDR Web sites for additional information about the risks of day trading.
Rather than on claims of easy profitability, many of the sites now focus on trade execution speed and low fees and commissions. Of the 125 firms accepting customers, some 57 firms and their branches allowed customers to file applications online, while 67 required that account applications be faxed or mailed. Some 40 offered training opportunities or links to other providers, and 20 had employment opportunities for traders. Since 1999, day trading has continued to evolve. In general, today's day traders appear to be more experienced and knowledgeable about securities markets than many day traders in the late 1990s. Likewise, many day trading firms have begun to focus on institutional traders as well as retail customers, and more firms are likely to engage in proprietary trading. Finally, other market participants are seeking the direct access technology offered by day trading firms in order to be able to offer fully integrated services. Regulators have taken various actions in response to concerns raised about day trading. Implementation of disclosure rules and amendments to margin rules have directly or indirectly addressed many of the concerns raised by the Permanent Subcommittee. Moreover, SEC and the SROs have continued to scrutinize the activities of day trading firms since our 2000 report. We recommended that SEC conduct another sweep of day trading firms, given their growing portion of Nasdaq trading volume and the fact that day trading is an evolving part of the industry. SEC addressed this recommendation through follow-up examinations of the firms included in the previous day trading sweep and ongoing examinations of day trading firms. The SROs have performed and plan to continue to perform routine examinations of broker-dealers offering day trading as a strategy. Moreover, SEC plans to continue to conduct cause examinations as needed to maintain a certain degree of scrutiny of these firms' activities. Given the recent move to decimals and ongoing changes in the securities markets, SEC has not yet formally evaluated day trading's effect on markets, but officials generally believe that many of the initial problems surrounding these firms have been addressed. Finally, the firms themselves have adjusted their behavior in response to market changes and regulatory scrutiny. The most noticeable changes appear in their advertising and Web site information, which in many cases now generally highlight the risks associated with day trading and the fact that day trading is not for everyone. Changes in market conditions appear to have driven many unsophisticated traders out of day trading, and increased disclosure about risks and continued regulatory oversight should help keep such traders from being lured into day trading by prospects of easy profits when market conditions improve. We requested comments on a draft of this report from the Chairman, SEC, and the President, NASDR. The Director, Office of Compliance Inspections and Examinations, SEC, and the President, NASDR, responded in writing and agreed with the report's findings and conclusions. We also received technical comments and suggestions from SEC and NASDR that have been incorporated where appropriate. To determine how day traders and day trading firms' operations have changed since 1999, we collected data from day trading firms, SEC, NASDR, and other relevant parties.
To determine what types of changes have occurred in day trading, we reviewed available research on the subject and interviewed state and federal regulators, as well as several knowledgeable industry officials from seven of the larger day trading firms (including six of the seven we had interviewed previously). We compared these responses with the information we obtained in our 2000 report. Specifically, we obtained insights from regulatory and industry officials on overall changes in day trading and in the number of day traders. We discussed changes in the markets, such as decimalization, and how the move to decimals has impacted day traders. We also discussed common trends among day traders and day trading firms. In addition, we collected information on changes specific to individual firm operations. Finally, we also discussed the concerns raised and recommendations made by the Permanent Subcommittee and GAO in the respective 2000 reports. To identify the actions regulators have taken to address the Permanent Subcommittee’s concerns about day trading and our report recommendations, we met with officials from SEC and NASDR to discuss their actions involving day trading oversight. We also reviewed 104 examination reports that had been completed since 1999. We determined the frequency of the violations and the actions taken by SEC and NASDR in response to those violations. We spoke with a state regulatory official from Massachusetts and an official of the North American Securities Administrators Association about day trading and state regulatory oversight activities. Finally, we reviewed newly implemented or amended rules affecting day trading to determine whether they addressed the Permanent Subcommittee’s recommendations. To identify any actions taken by day trading firms in response to concerns raised about day trading, we interviewed officials from six of the seven day trading firms we identified in our 2000 report and from one additional firm about the initiatives the firms were taking pertaining to issues raised by the regulators and Congress. These issues included advertising, risk disclosure, margin issues, and determinations of appropriateness. We also discussed how the firms’ operations had changed over the previous 2 years. In addition, we reviewed the Web sites of over 200 firms that we identified as day trading firms (some were actually branches of other firms). We reviewed the sites and obtained information on the account opening process, training offers, proprietary trading opportunities, and risk disclosures, among other things. We conducted our work in Jersey City and Montvale, NJ; New York, NY; Austin and Houston, TX; and Washington, D.C., between April and November 2001 in accordance with generally accepted government auditing standards. As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issuance date. 
At that time, we will send copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing and Urban Affairs; the Chairmen and Ranking Minority Members of the Senate Committee on Governmental Affairs and Permanent Subcommittee on Investigations; Chairmen of the House Committee on Financial Services and its Subcommittee on Capital Markets, Insurance and Government Sponsored Enterprises; Chairmen of the House Energy and Commerce Committee and its Subcommittees on Commerce, Trade and Consumer Protection and on Telecommunications and the Internet; and other congressional committees. We will also send copies to the Chairman of SEC and the Presidents of NASDR and NYSE. Copies will also be made available to others upon request. If you or your staff have any questions regarding this report, please contact Orice M. Williams or me at (202) 512-8678. Key contributors to this report were Toayoa Aldridge, Robert F. Pollard, and Sindy Udell. | Concerns arose in the late 1990s about day trading, particularly the use of questionable advertising to attract customers without fully disclosing or by downplaying the risks involved. Concerns were also raised that traders were losing large amounts of money. Day traders as a group and day trading firms have continued to evolve and are generally more experienced and sophisticated about securities markets and investing than was the case several years ago. Likewise, day trading firms' operations have evolved, and many have shifted their primary focus away from retail customers and toward attracting institutional customers, such as hedge funds and money market managers. Furthermore, more firms are likely to engage in proprietary trading activities through professional traders that trade the firms' own capital.
Finally, although the number of day trading firms appears to have remained constant, several day trading firms have been acquired by other brokerages and market participants whose customers want the direct access to securities markets and market information that technology used by day trading firms provides. Since GAO's 2000 review, the Securities and Exchange Commission and the self-regulatory organizations have addressed many of the concerns raised about day trading. In addition to the ongoing changes in the industry and regulatory action, day trading firms have responded to changing market conditions and regulatory scrutiny by changing their behavior. The most noticeable changes appear in their advertising and website information, which now generally highlight the risks associated with day trading and the fact that day trading is not for everyone. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The three stamps issued thus far in the nation’s semipostal program have all been authorized through separate congressional acts pertaining solely to those stamps. The Stamp Out Breast Cancer Act required that the Service issue a Breast Cancer Research stamp, the nation’s first semipostal. Two additional semipostals—the Heroes of 2001 and Stop Family Violence stamps—were mandated by Congress in the 9/11 Heroes Stamp Act of 2001 and the Stamp Out Domestic Violence Act of 2001. Figure 2 shows the three semipostals. Following the authorization of these semipostals by Congress, a number of stakeholders became involved with the semipostals, including the Service, designated federal agencies, and advocacy groups. For example, after Congress mandated the semipostals, the Service issued the stamps and then transferred semipostal proceeds to the designated federal agencies, which then directed the funds toward the identified causes. Additionally, advocacy groups involved with the charitable causes have assisted in promoting the semipostals. Table 1 identifies the various stakeholders and summarizes their primary roles related to the semipostals. Authorized for 2 years in 1998, the Breast Cancer Research stamp has subsequently been reauthorized three times, and there are proposals in Congress to further extend the sales period through December 31, 2007. The Breast Cancer Research stamp raises money for breast cancer research programs at NIH and DOD, with the former receiving 70 percent of the funds raised and the latter receiving the remaining 30 percent. The Heroes of 2001 stamp was offered for sale from June 7, 2002, to December 31, 2004, and funds raised were transferred to FEMA to provide assistance to the families of emergency relief personnel who were killed or permanently disabled in the line of duty in connection with the terrorist attacks against the United States on September 11, 2001. The Service started selling the Stop Family Violence stamp on October 8, 2003, and it is scheduled to expire on December 31, 2006. Proceeds from the Stop Family Violence stamp are being transferred to HHS for domestic violence programs. For a period of just over 1 year, between October 8, 2003, and December 31, 2004, all three semipostals were on sale simultaneously. Figure 3 shows the authorized sales periods for each of the semipostals. Separately from the provisions that authorized the three semipostals, the Semipostal Authorization Act gave the Service the authority to issue semipostals that it considers to be appropriate and in the national public interest; however, the Service has not yet exercised this authority. Further, the Service has indicated that it does not plan to issue any semipostals under its own authority until sales of the Breast Cancer Research stamp and other congressionally authorized semipostals have concluded. However, legislative proposals to establish new semipostals continue to be made. In the 109th Congress, for example, a bill has been introduced to establish a new semipostal to benefit the Peace Corps. In February 2005, the House Committee on Government Reform, the oversight committee for the Service, adopted a rule that stated that the Committee will not give consideration to legislative proposals specifying the subject matter of new semipostals. That rule also stated that the Service should determine the subject matter of new semipostals. 
In September 2005, a bill was introduced to establish a semipostal to provide disaster relief for residents of Louisiana, Mississippi, and Alabama who were affected by Hurricane Katrina. The proceeds are to be transferred to the American Red Cross Disaster Relief Fund for Hurricane Katrina, which is not a government entity. This contrasts with the existing semipostals, which transfer their proceeds to designated federal agencies. In our previous work, we reported that the Breast Cancer Research stamp has been an effective fund-raiser and that funds raised through sales of the stamp had contributed to key insights and approaches for the treatment of breast cancer. Most of the key stakeholders we spoke with and, according to our survey, members of the public viewed the stamp as an appropriate way of raising funds for a nonpostal purpose. We expressed some concerns, however, about the Service's identification and recovery of costs associated with carrying out the act. We recommended that the Service reexamine and, as necessary, revise its Breast Cancer Research stamp cost-recovery regulations. We also suggested that Congress consider establishing annual reporting requirements for NIH and DOD. Semipostals have raised over $56 million to date, and sales were likely affected by several factors. In addition to variations in the amounts raised by each of the semipostals, sales patterns were also different, and on the basis of our discussions with Service officials, advocacy groups, and other stakeholders, we identified four key factors that affected sales: (1) the fund-raising cause, (2) the support of advocacy groups, (3) stamp design, and (4) the Service's promotional activities. The funds raised vary by semipostal: $44 million for the Breast Cancer Research stamp, over $10.5 million for the Heroes of 2001 stamp, and nearly $2 million for the Stop Family Violence stamp, for a total of over $56 million. The length of time that each semipostal has been sold affected the amounts raised: the Breast Cancer Research stamp has been available for 7 years, the Heroes of 2001 stamp was available for just over 2½ years, and the Stop Family Violence stamp has been available for under 2 years. Semipostal sales patterns reveal marked differences. Breast Cancer Research stamp sales have fluctuated since the semipostal's issuance in 1998 but have remained at a comparatively high level over time (see fig. 4). The Heroes of 2001 and Stop Family Violence stamps each had initial sales surges—although at much different levels—with subsequent declines. Sales of the Breast Cancer Research stamp have averaged over 22 million semipostals per quarter since it was issued in 1998, with total sales of 606.8 million semipostals by May 31, 2005. Sales of the Heroes of 2001 stamp averaged over 13 million semipostals per quarter throughout its sales period and totaled 132.9 million, although over 50 percent of total sales occurred in the first two quarters after issuance in 2002. Finally, as of May 31, 2005, sales of the Stop Family Violence stamp have averaged over 4 million semipostals per quarter and total 25.3 million since issuance. Public awareness about the fund-raising causes represented by the semipostals likely affected sales levels.
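The reported per-quarter averages can be roughly reconciled with these totals. The short check below does that arithmetic; the quarter counts are our own approximations inferred from each stamp's sales window, not figures from the report.

```python
# (total semipostals sold, approximate full sales quarters through May 31, 2005)
sales = {
    "Breast Cancer Research": (606.8e6, 27),  # issued 1998
    "Heroes of 2001":         (132.9e6, 10),  # June 2002 through Dec. 2004
    "Stop Family Violence":   (25.3e6, 6),    # issued Oct. 2003
}
for stamp, (total, quarters) in sales.items():
    print(f"{stamp}: ~{total / quarters / 1e6:.1f} million per quarter")
# Roughly 22.5, 13.3, and 4.2 million per quarter -- in line with the
# "over 22 million," "over 13 million," and "over 4 million" figures above.
```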
The two semipostals addressing causes with high levels of public awareness—finding a cure for breast cancer and supporting the families of September 11 emergency personnel—had higher sales than the Stop Family Violence stamp, which raises funds for domestic violence programs, a cause that, while well known, has a lower profile. An official with the Komen Foundation pointed out that in the case of the Breast Cancer Research stamp, the fact that about one in eight women is affected by breast cancer keeps the subject in the public spotlight. Likewise, the national significance of the events surrounding the September 11 terrorist attacks ensured a high level of public awareness regarding the cause represented by the Heroes of 2001 stamp. In contrast, Service officials pointed to the lack of general media coverage of domestic violence, which may have limited sales of the Stop Family Violence stamp. The appeal of the particular fund-raising cause was also a factor affecting semipostal sales. While the Breast Cancer Research and Heroes of 2001 stamps were associated with causes that generate a strong and supportive response, the Stop Family Violence stamp deals with a cause that may evoke a more complex response. Officials with the Association of Fundraising Professionals noted that certain causes generate a greater response than others, regardless of fund-raising methods. According to an official with the BBB Wise Giving Alliance, for example, four popular fund-raising causes currently are cancer, children’s issues, relief efforts, and animals, although the popularity of fund-raising causes fluctuates over time. The effect of a cause’s popularity can be particularly acute for campaigns that use affinity fund-raising, whereby donors demonstrate their support for a specific cause with a public sign of their commitment. Fund-raising experts we spoke with at the Association of Fundraising Professionals stated that semipostals are examples of this kind of effort, and officials with the American Red Cross noted that other well-known examples of such marketing include the Lance Armstrong Foundation’s LiveStrong yellow bracelets and pink breast cancer awareness ribbons. Such branding can be problematic, however, for causes that, for a variety of reasons, may be more difficult to embrace. For example, officials with the National Coalition Against Domestic Violence and the Service mentioned that consumers may be reluctant to use the Stop Family Violence stamp given that the fund-raising cause is particularly sensitive. Service officials noted that some consumers pay close attention to the ways in which stamps can send intended or unintended messages about the sender or receiver of letters. The difference in appeal between fund-raising causes can also be seen in the degree to which they readily attract support or promotion by businesses or organizations. In the case of the semipostals, American Express and NASCAR approached the Service about partnership promotions for the Breast Cancer Research and Heroes of 2001 stamps, respectively. The partnerships resulted in promotion for the semipostals, paid for largely by the Service’s partners, who in turn were able to affiliate themselves with these popular causes. American Express advertised the Breast Cancer Research stamp in print and inserts, while NASCAR placed an image of the Heroes of 2001 stamp prominently on a stock car at very little cost to the Service (see fig. 5). The Service did not receive any comparable offers in support of the Stop Family Violence stamp.
While awareness and appeal may affect the size of the response, the length of the response may be related to another characteristic: whether the fund-raising cause is an episodic event, such as a disaster, or an ongoing concern, such as finding a cure for a disease. Sales of the Heroes of 2001 stamp reflected the dramatic emotional spike typically associated with episodic events, with fund-raising efforts building quickly and then declining as the event retreats from the public spotlight or is overtaken by subsequent developments, according to officials with the American Red Cross and the BBB Wise Giving Alliance. These organizations pointed to the fund-raising efforts generated by the December 2004 tsunami as an example of another episodic event, noting that the tsunami fund-raising surge lasted about 30 days. Officials with the Association of Fundraising Professionals told us that such fund-raising spikes are common for one-time events. More specifically, many September 11 fund-raising efforts experienced the same initial surge and subsequent decline as the Heroes of 2001 stamp, according to representatives of the New York City Police Foundation, the September 11th Families Association, and the National Fallen Firefighters Foundation. By contrast, ongoing causes, such as finding a cure for breast cancer, are more likely to have staying power over time, according to fund-raising experts. Sales of the semipostals were likely affected by the capacity of advocacy groups working to promote them. Several of the breast cancer advocacy groups supporting the Breast Cancer Research stamp have large networks of members and have promoted the semipostal at events involving thousands of participants. For example, the Komen Foundation, an active supporter of the semipostal, has more than 80,000 individuals in an online advocacy group involved in lobbying to extend sale of the semipostal. The foundation also conducts “Race for the Cure” events around the world, with more than 1 million walkers or runners participating each year since 2000, and a partnership effort between the Komen Foundation and Yoplait (and its parent company General Mills) has contributed over $14 million to the breast cancer cause over 7 years. In contrast, family violence prevention groups tend to be smaller, according to officials with the Association of Fundraising Professionals. The National Resource Center on Domestic Violence noted that it has a mailing list of about 5,000 to which it has sent information about the Stop Family Violence stamp, and another group, the National Domestic Violence Hotline, provided information about the semipostal to over 100 local domestic violence programs. Further, an official with the National Coalition Against Domestic Violence described a cell phone donation program that earned about $2 million over 6 or 7 years. Finally, Service officials noted that there were no organized groups to coordinate with when the Heroes of 2001 stamp was developed. Beyond the capacity of advocacy groups, the specific efforts such groups undertook in support of the semipostals over time likely affected sales. Several breast cancer advocacy groups have actively supported the Breast Cancer Research stamp since its issuance, while advocacy groups did comparatively less to support the Heroes of 2001 or Stop Family Violence stamps, which may account for those stamps’ declining sales trends. Service officials link semipostal sales to the support of advocacy groups.
Several breast cancer advocacy groups that we spoke with mentioned carrying out activities to promote the Breast Cancer Research stamp. (Table 2 provides examples of these activities.) Likewise, Service officials stated that grassroots support helps to explain the Breast Cancer Research stamp’s long-term success, pointing to the organized support of the semipostal by breast cancer advocacy groups and individuals, which has included use by doctors’ offices, sponsored walks and runs, and activities surrounding Breast Cancer Awareness Month. In contrast, none of the advocacy groups we spoke with regarding the Heroes of 2001 stamp, groups affiliated with emergency personnel affected by the September 11 terrorist attacks, had engaged in promotional activities for that semipostal. These groups were aware that the funds raised through sales of the semipostal were to be directed to September 11 emergency responders in some capacity, but they were unaware of the specifics of how the proceeds would be used. Like the Stop Family Violence stamp, the Heroes of 2001 stamp did not have the periodic boosts in sales seen with the Breast Cancer Research stamp, although its initial sales were higher. The semipostal’s limited staying power may have reflected the lack of advocacy group activity on its behalf. Several domestic and family violence advocacy groups mentioned that while they had intended to support the Stop Family Violence stamp with promotional activities, they have done less than originally planned. Confusion about how Stop Family Violence stamp proceeds would be used led some of these groups to limit their promotional activities on behalf of the semipostal. As a result, although some local advocacy groups carried out promotional activities with local post offices, such as semipostal unveiling ceremonies, the national domestic or family violence groups that we spoke with typically limited their promotional activities to articles in newsletters or features on group Web sites. Some domestic and family violence advocacy groups acknowledged that they could have done more to promote the Stop Family Violence stamp and that the semipostal’s sales were likely adversely affected by this lack of promotion. Stakeholders lauded the designs of both the Breast Cancer Research and Heroes of 2001 stamps as having inspiring images that conveyed some information about where proceeds would be directed; however, there was concern that the design of the Stop Family Violence stamp may have negatively affected sales of that semipostal. Consumers could assume that funds would go to breast cancer research or September 11 emergency personnel in some capacity, according to officials with the American Red Cross. However, officials with the Association of Fundraising Professionals noted that the exact use of the funds was not clearly spelled out on either semipostal. Further, in-store messaging also provided limited information. (See fig. 6 for an example of an in-store counter card featuring the semipostals.)
In contrast, although the design of the Stop Family Violence stamp won an international award, and the story behind the design was described as inspiring by some advocacy groups, advocates with such organizations as the Family Violence Prevention Fund and the National Network to End Domestic Violence questioned how likely postal customers would be to buy the stamp to use on their mail, given the image of a crying child. In addition, the semipostal’s design and the information provided by the Service on in-store materials are less clear about how semipostal proceeds are to be used, referring to both domestic and family violence, which some view as separate causes. Both the Breast Cancer Research and Heroes of 2001 stamps had extensive Service advertising campaigns. The Service spent nearly $900,000 to advertise the Breast Cancer Research stamp and more than $1.1 million for the Heroes of 2001 stamp. This advertising included a billboard in Times Square for the Breast Cancer Research stamp and a national print advertising campaign for the Heroes of 2001 stamp. The Service also received the Gold “Reggie” award from the Promotion Marketing Association for its efforts in promoting the Breast Cancer Research stamp. As a result of an overall reduction in the Service’s budget, advertising for all stamps, including semipostals, has been limited to in-store messaging since 2003. As a consequence, Service officials determined that all funds spent to advertise semipostals would be deducted from the totals raised through their sales. This policy change had a marked impact on promotional activities for the Stop Family Violence stamp, which was issued in October 2003. While advertising costs associated with the Breast Cancer Research and Heroes of 2001 stamps had been paid by the Service, all advertising costs for the Stop Family Violence stamp were to be deducted from the stamp’s proceeds. In light of these limitations, the Service met with HHS before the Stop Family Violence stamp was issued. At this meeting the Service proposed spending $1.5 million or more on an advertising campaign that would be funded by future semipostal proceeds. Because of uncertainty about how much money would be raised through sales of the Stop Family Violence stamp, HHS decided not to pursue the proposed advertising campaign. In lieu of such a campaign, the Service and HHS looked to the advocacy groups to promote the semipostal. Service and HHS officials met with advocacy group representatives and provided them with examples of the types of promotional activities that breast cancer advocacy groups had undertaken to help publicize the Breast Cancer Research stamp, as well as a poster for use in promotional activities. Through March 31, 2005, the Service spent about $77,000 to advertise the Stop Family Violence stamp, and this amount was recovered from semipostal proceeds. Table 3 provides examples of Service promotional efforts and partnerships in support of the semipostals. Service officials said that differences in sales among the three semipostals were not the result of differing levels of Service effort; rather, they said, a semipostal’s success depends on the support provided by external groups or individuals. Service officials pointed out that for each semipostal, the Service issued a field and press kit and met with officials from the agencies receiving semipostal proceeds.
In addition, the Service initiated kickoff events for each of the semipostals at the White House, with involvement from either the President or First Lady (see fig. 7). Finally, Service officials noted that local post offices are available to sponsor local events at the discretion of the postmaster. For example, the Service’s South Georgia District employees established the “Circle of Hope” campaign to promote and raise funds for the Breast Cancer Research stamp; in 2004, the campaign raised an estimated $21,000 in proceeds through stamp sales. Likewise, the Cardiss Collins Postal Facility in Chicago held a rededication ceremony for the Stop Family Violence stamp on August 2, 2005, in collaboration with the Illinois Secretary of State and officials from the Chicago Abused Women Coalition and the Chicago Police Department. The federal agencies receiving semipostal proceeds currently award or plan to award these funds using grants, and although each agency has collected and maintained information on semipostal proceeds, none has reported specifically on its use of proceeds thus far. NIH and DOD use Breast Cancer Research stamp proceeds to award research grants under existing programs. HHS has not distributed any proceeds from the Stop Family Violence stamp, but officials reported that they have established new grants within an existing program to direct the proceeds to domestic violence programs. While the other semipostals address ongoing causes, the Heroes of 2001 stamp raised funds for an episodic event without an established mechanism for distributing such funds. As a result, FEMA is establishing a new program and accompanying regulations for distributing Heroes of 2001 stamp proceeds to families of emergency relief personnel who were killed or permanently disabled in the line of duty in connection with the September 11 terrorist attacks. The laws authorizing these three specific semipostals do not include reporting requirements such as those of the Semipostal Authorization Act. Of the four agencies, FEMA and HHS plan to report specifically on the use of semipostal proceeds. Both NIH and DOD reported that they began receiving Breast Cancer Research stamp proceeds from the Service in November 1998, and breast cancer research grants have been awarded using established programs at the two agencies since June 2000 and June 1999, respectively. NIH initially directed these proceeds to the National Cancer Institute (NCI) to award high-risk research grants through the “Insight Awards to Stamp Out Breast Cancer” initiative. This initiative was specifically designed for the Breast Cancer Research stamp proceeds but exists within NCI’s grants program. One of these grants, for example, funded research related to the development of a potential antitumor drug. In 2003, NIH approved new Breast Cancer Research stamp grants through the “Exceptional Opportunities in Breast Cancer Research” initiative, also administered by NCI, which uses semipostal proceeds to fund more traditional research. According to NIH officials, this change was made when it was determined that there were highly meritorious research applications outside the funding ability of NCI, and they noted that many outstanding grant applications would remain unfunded without the use of semipostal proceeds. Exceptional Opportunities awards have covered breast cancer research areas that include prevention, diagnosis, biology, and treatment.
DOD uses Breast Cancer Research stamp proceeds to fund innovative approaches to breast cancer research through “Idea Award” grants under its existing Breast Cancer Research Program, which is administered by the U.S. Army Medical Research and Materiel Command. The scope of the grants has not changed since DOD began awarding them in 1999. Table 4 contains additional information about these initiatives and the size and number of grants awarded with Breast Cancer Research stamp proceeds. Since NIH and DOD both apply Breast Cancer Research stamp proceeds to established grant programs, the agencies have used existing procedures and regulations for awarding grants funded with the proceeds. For example, both agencies use existing review procedures to evaluate grant applications with input from advocacy groups. NIH and DOD officials stated that advocacy groups play an important role, and both agencies involve advocacy groups in their grants processes. Grants funded by NIH and DOD using Breast Cancer Research stamp proceeds have produced significant findings in breast cancer research. The first NIH Exceptional Opportunities Awards funded with Breast Cancer Research stamp proceeds were distributed in fiscal year 2003 and are awarded for a maximum of 4 years; therefore, it is still too early to report results from these awards. Both NIH and DOD use existing programs and processes, such as monitoring grantees and requiring annual grantee reporting, which has made measuring grant performance and tracking grant outcomes relatively straightforward. Officials at each agency were pleased to gain new sources of funding and noted that these awards have produced some significant findings in the field of breast cancer research. Table 5 provides select examples of research findings from NIH Insight Awards and DOD Idea Awards funded with Breast Cancer Research stamp proceeds. HHS began receiving Stop Family Violence stamp proceeds from the Service in May 2004, and, as of July 2005, HHS had not yet awarded any grants using semipostal proceeds. HHS is using an established grant program, the Family Violence Prevention and Services Program, to make the proceeds available at the end of fiscal year 2005 for grants aimed at enhancing services to children exposed to domestic violence. As of June 30, 2005, the Service had transferred about $1.8 million to HHS, and the agency has directed these proceeds to ACF, which is responsible for distributing the funds. In June 2005, ACF released an announcement for the grants, and ACF officials stated that they expect the first grants to be awarded near the end of fiscal year 2005. The purpose of the grants is to provide enhanced services and support to children who have been exposed to domestic violence in order to mitigate the impact of such exposure and increase the opportunities for these children to lead healthy lives as adults. Grant applicants are required to collaborate with a state’s domestic violence coalition and the state agency responsible for administering family violence programs. According to agency officials, it has always been ACF’s intention to use Stop Family Violence stamp proceeds for enhanced services to children. Table 6 provides additional information about the ACF grants to be awarded, including the size and number of awards. According to ACF officials, the agency used an established program to develop its grants for awarding Stop Family Violence stamp proceeds.
The officials stated that ACF is using existing competitive review procedures to evaluate grant applications. These review procedures are described in the grant announcement, which was developed through ACF’s existing grant application process and made available on ACF’s Web site. ACF also plans to use its existing project grant reporting system to monitor grantee performance (see table 6). ACF consulted with domestic violence advocacy groups, state agencies, and state domestic violence coalitions on the current distribution of children’s services offered by domestic violence organizations and solicited their input on a fair and equitable method for grant participation. Although ACF initially involved advocacy groups in developing how the semipostal funds could be used, many groups that we spoke with in the spring of 2005 expressed concern about how the Stop Family Violence stamp proceeds would be spent. Some national domestic violence groups reported that they were unaware of ACF’s intentions for semipostal proceeds because no semipostal grants had been announced and no funds had been spent. FEMA started receiving Heroes of 2001 stamp proceeds from the Service in November 2002 and has not yet distributed any of the semipostal proceeds. FEMA officials stated that, to determine the total amount of funds available, the agency decided to wait until the Service had transferred all semipostal proceeds—in May 2005—before finalizing its grants program. Following the final transfer, FEMA had received over $10.5 million in semipostal proceeds. FEMA is establishing a program to make grants available to eligible emergency relief personnel who are permanently disabled or to the families of emergency relief personnel who were killed as a result of the terrorist attacks of September 11. According to FEMA officials, while distributing funds to disaster victims is within the scope of FEMA’s mission, distributing the semipostal proceeds is not within the scope of its disaster authority. As a result, FEMA has had to establish a new program with new regulations for semipostal proceeds, which includes establishing the mechanism through which the funds would be distributed. After undergoing regulatory review at the Office of Management and Budget (OMB), FEMA’s interim rule for its assistance program under the 9/11 Heroes Stamp Act of 2001 was made publicly available on July 26, 2005. The interim rule states that FEMA intends to distribute all Heroes of 2001 stamp proceeds equally among all eligible claimants. Table 7 provides additional information about the FEMA grants. FEMA officials stated that, in designing its program and regulations, the agency considered findings from the Department of Justice’s September 11th Victim Compensation Fund of 2001, which provided over $7 billion in compensation to victims of the terrorist attacks. One of the observations detailed in the Final Report of the Special Master for the September 11th Victim Compensation Fund of 2001 is that a statutory approach mandating individualized awards for each eligible claimant poses serious problems and that a better approach might be to provide the same amount for all eligible claimants. Prior to publicizing its interim rule, FEMA had informal discussions with stakeholder groups, and FEMA officials also stated that the program regulation would be available for public comment.
Representatives of New York City police, firefighter, and victims’ foundations whom we spoke with expressed some concern regarding FEMA’s use of the proceeds because they did not know whether FEMA planned to allocate the Heroes of 2001 stamp proceeds through assistance programs or through grants to individual families. These groups also noted that since the September 11 terrorist attacks, there has been an evolving set of needs with little funding support, including long-term programs such as counseling and health care for emergency relief personnel involved in the September 11 recovery and clean-up efforts. None of the designated federal agencies receiving semipostal proceeds is required to issue a report to Congress detailing how these funds are used or any accomplishments resulting from semipostal-funded grants. The agencies would face such a reporting requirement if the three existing semipostals had been authorized under the Semipostal Authorization Act. Specifically, the act contains an accountability mechanism consisting of annual reports that are to include (1) the total amount of funding received by the agency, (2) an accounting of how proceeds were allocated or otherwise used, and (3) a description of any significant advances or accomplishments during the year that were funded—in whole or in part—with funds received. However, the laws that created the three semipostals did not specify any reporting requirements, and the agencies themselves have decided to take varying actions in this regard. NIH and DOD do not report specifically on the use of semipostal proceeds, though the agencies do collect information that, if necessary, could be assembled for such a report. To help manage their respective grant programs, NIH and DOD require award recipients to provide periodic reports on research progress and any breakthroughs achieved. Research findings from grants funded by Breast Cancer Research stamp proceeds can be found in some NIH publications, but the agency does not report specifically on its use of these funds. DOD provides limited information on its Idea Awards through annual reports on its Congressionally Directed Medical Research Programs; this reporting is limited to the number of Idea Awards and does not identify which awards are funded with Breast Cancer Research stamp proceeds. ACF plans to monitor grantee performance and to report on its use of semipostal proceeds through HHS’ grants system and will make an additional report available to Congress. Although FEMA initially indicated to us that the agency was not required to report on its use of semipostal proceeds, FEMA recently provided information to Congress—in part as a result of our work—on the total proceeds received from sales of the Heroes of 2001 stamp. FEMA officials have indicated that once proceeds have been distributed, a report will be provided to Congress on the status of the 9/11 Heroes Stamp Act of 2001. According to FEMA officials, the report will summarize the agency’s Heroes of 2001 stamp program, including information on its development, the process undertaken, and who is receiving the semipostal proceeds. Various fund-raising organizations that we spoke with indicated that program reporting is a useful accountability tool and may lead to greater fund-raising success.
For example, the BBB Wise Giving Alliance, a charity watchdog group, recommends reporting requirements, in the form of annual reports, for charitable organizations to ensure that representations to the public are accurate, complete, and respectful. These reports should be made available to all on request and should include the organization’s mission, a summary of the past year’s accomplishments, and financial information. Further, officials with the American Red Cross stated that disclosure provides transparency, allowing consumers to determine if the cause is the best use of their money, and Association of Fundraising Professionals officials noted that such reporting can even secure additional support by encouraging more people to contribute to the effort. While many of the agency officials, fund-raising groups, and charitable organizations that we contacted believe that the semipostals have been good fund-raisers, nearly all of them also believe that there were lessons learned. For the past several years, there have been multiple proposals introduced in Congress to establish new semipostals. For example, in the 108th Congress, proposals were introduced for semipostals promoting childhood literacy, the Peace Corps, and prevention of childhood drinking. Each of these proposals expired in committee, and—so far—the Peace Corps semipostal proposal has been reintroduced in the 109th Congress. Any lessons learned from the existing semipostals may be especially relevant for any future semipostals, whether congressionally mandated or issued under the Service’s authority. The lessons we identified from these three semipostals related primarily to five areas: the nature of the fund-raising cause, support of advocacy groups, stamp design and promotion, use of funds raised, and agency reporting. The existing semipostals have been issued for a minimum 2-year sales period, and one—the Breast Cancer Research stamp—has been extended three times. The experience with the three existing semipostals indicates that the particular nature of the charitable causes may be important in how much money is raised, how long consumers continue to purchase the semipostal, and other results achieved. Among these differences are the following: One-time charitable causes, such as response to a major disaster, may generate a substantial immediate response but may also have limited staying power as ongoing fund-raisers. The Heroes of 2001 stamp was issued in 2002, while various national organizations were still raising funds for the families of emergency relief personnel killed or disabled in the line of duty. Sales were highest for the initial two quarters, followed by a dramatic drop. By contrast, the Breast Cancer Research stamp, which raises funds for an ongoing health issue, has had sales that have remained at a high level over its entire sales period. Considering a cause’s appeal in drawing affinity support is important in setting fund-raising expectations. Some charitable causes are simply less popular than others, and recognition of these differences can aid in forming assumptions about how much money will be raised through semipostal sales. For some consumers, applying a postage stamp serves as a symbol of loyalty to a particular charitable cause; therefore, it can be anticipated that the magnitude of a particular cause’s base of support will be reflected in semipostal sales. As noted earlier, Association of Fundraising Professionals officials observed that certain causes generate a greater response than others, regardless of fund-raising methods.
That is, breast cancer is a pervasive and ongoing concern; the September 11 terrorist attacks generated intense public concern but, as a one-time event, were likely to fade in intensity over time; and family violence, while an ongoing concern, is likely to engender less appeal. According to Association of Fundraising Professionals officials, the amounts raised by each semipostal are consistent with the popularity of the type of fund-raising cause represented on the stamps. In some cases, a growth in cause awareness may be a success that transcends the amount of money raised. In addition to raising funds, the semipostal program provides an avenue for increased exposure for particular charitable causes. While the amount of funds raised may not be as high for some causes, there are additional benefits of having a semipostal representing a particular cause visible and for sale in post offices throughout the country. Organizations and individuals whom we spoke with agreed that for all of the semipostals, heightened awareness of the cause was one benefit of having a semipostal. One Breast Cancer Research stamp supporter commented that the contribution that the semipostal has made to breast cancer awareness is priceless and more precious than the funds raised. Likewise, an official from the National Fallen Firefighters Foundation stated that the Heroes of 2001 stamp has helped raise public awareness about the fire service. Support of advocacy groups is an important marketing device for semipostals. American Red Cross and BBB Wise Giving Alliance officials told us that advocacy groups are the most useful tool for getting the word out about charitable causes and fund-raising efforts, and Service officials agreed. Broad supportive networks of private organizations that are willing and able to assist in local and national marketing help sustain semipostal awareness and sales. Where it is not possible to do aggressive private-sector-style marketing, as is the case with semipostals, advocacy groups can fill this gap. In the case of the Breast Cancer Research stamp, for example, the Service no longer has a budget to advertise stamps, including semipostals, but numerous advocacy groups publicize the stamp on their Web sites, at events they sponsor, and through letters to members and legislators. The Service must cultivate support from advocacy groups, and the agency receiving the semipostal proceeds must sustain that support. Organizations involved with charitable causes told us that, because they juggle many priorities, their support for a semipostal will wane if their input and support are not solicited and they are not kept informed about issues related to the semipostal, including fund usage and program outcomes. For example, several advocacy groups associated with the domestic violence cause told us that immediately following the launch of the Stop Family Violence stamp, there was uncertainty as to how HHS was going to use the proceeds because the public announcement at the stamp’s kickoff event differed from the groups’ expectations. These advocacy groups told us that as a result of this confusion, they did not aggressively promote the semipostal. Semipostal design is one of the variables that can affect whether consumers are willing to signal their support for a cause.
We received comments from numerous stakeholders, for example, that the design of the Stop Family Violence stamp, while certainly drawing attention, may not create a positive response—or affinity—because of its tone. A semipostal’s design can evoke emotion, and the emotional reaction to the image may be important in a consumer’s decision to purchase a semipostal and use it on a letter to make a statement. For example, the Heroes of 2001 stamp provided an image that was not only recognizable but inspiring. By contrast, the image on the Stop Family Violence stamp may create a more complex reaction and result in a consumer’s decision not to buy the semipostal. The extent of promotion and advertising of a semipostal can also greatly affect sales. Fund-raising organizations that we spoke with agreed that in most cases, there is a connection between the amount invested in a fund-raising effort and the amounts raised. Although a direct correlation has not been determined, it should be noted that as a result of a Service budget reduction, which eliminated stamp advertising, the Stop Family Violence stamp did not benefit from a million-dollar promotional campaign as the two other semipostals did, and its sales have remained comparatively lower. Support may be further enhanced if the semipostal or the available marketing information clearly indicates how the proceeds will be used. Transparency is critical to fund-raising efforts, and semipostals are no exception. According to the BBB Wise Giving Alliance, one of the standards for charity accountability is to clearly disclose how the charity benefits from the sale of products or services. American Red Cross officials also emphasized that providing this information to consumers is critical to fund-raising efforts like semipostals. We found widespread confusion among advocacy groups about specifically how the Stop Family Violence stamp proceeds would be used. Officials added that disclosure of where funding is to be directed is particularly important, given that consumers are increasingly savvy and have become increasingly skeptical about the distribution of charitable funds. The time lag between when funds are first raised and when they are distributed can be considerable, depending on the type of program that the agency implements for distributing semipostal proceeds. Semipostals generate revenues immediately upon going on sale at post offices, and the Service distributes semipostal revenues to designated agencies twice a year, after the Service’s reasonable costs are deducted. However, it can then take an additional 2 years, or longer, for the funds to be used. For example, the Breast Cancer Research stamp, which was authorized in August 1997, was first sold in July 1998, and the initial grants resulting from the proceeds were awarded by DOD in June 1999 and by NIH in June 2000 (nearly 1 and 2 years after issuance); the Heroes of 2001 stamp was first sold in June 2002, and the proceeds raised have not yet been awarded by FEMA (3 years after the stamp was issued); and the Stop Family Violence stamp was first available in October 2003, and no funds have yet been awarded by ACF (nearly 2 years after issuance). When semipostals are used as a fund-raising vehicle, this time lag is a consideration: agencies awarding semipostal proceeds may need to account for it in deciding how to apply the funds, particularly for episodic events that may involve a fund-raising surge and short-term or evolving needs.
Program and funding priorities, for instance, may change from the time that a semipostal is launched to the time proceeds are actually distributed, and the time lag can result in consumer skepticism of, or disagreement with, the original program selection. For example, FEMA’s plan for distributing the Heroes of 2001 stamp proceeds has taken about 3 years to finalize, and while it is clear that the initial intent of the semipostal was to “provide financial assistance to the families of emergency relief personnel killed or permanently disabled in the terrorist attacks of September 11,” other organizations working with these families suggested that currently, the most prevalent needs of this group are programs and services directed at addressing the long-term effects of the terrorist attacks. The amounts raised by semipostals vary, and it is difficult to determine how much money will be raised by semipostal sales. For example, FEMA and ACF, which receive proceeds from the Heroes of 2001 and Stop Family Violence stamps, respectively, reported to us that they delayed spending in these programs due to the uncertainty of how much money would be raised. ACF officials told us they initially expected the Stop Family Violence stamp to raise considerably more than it has. Once ACF officials realized that the amounts raised might not be sufficient to cover the planned programs, they revisited their plans for the proceeds. Further, FEMA waited until all semipostal proceeds were received from the Service before pursuing its grant program. Due to the uncertainties surrounding how much money will be raised by semipostals, establishing a program that will be funded solely by semipostal proceeds may present challenges. In addition, attaching funds to already established mechanisms, such as existing grant guidelines or programs, may ease administration and allow for additional flexibility. For example, both the Breast Cancer Research and Stop Family Violence stamp proceeds are being used to distribute new grants within existing programs, which has allowed the agencies to make grants available using semipostal proceeds without developing and establishing the rules and regulations for new programs. Program reporting is an important standard for ensuring accountability. In general, the organizations we spoke with were unclear about how semipostal proceeds were being or would be used, and none knew of any outcomes resulting from these funds. The Semipostal Authorization Act, which does not specifically apply to these three existing semipostals, requires that the agencies receiving funds under the act report to the congressional committees with jurisdiction over the Service about the semipostal funds received and used. Fund-raising organizations we spoke with, including the American Red Cross and the BBB Wise Giving Alliance, also recommend such reporting, pointing to the need to inform consumers about how proceeds have been used. Additionally, annual reporting may make information about program goals, plans, or funding mechanisms available to Congress, advocacy groups, and others earlier, thereby addressing some of the uncertainty that may arise between the initial issuance of the semipostal and the actual distribution of funds. Currently, none of the agencies administering the three semipostals is providing this degree of disclosure for semipostal programs.
Agency reporting for these semipostals is either subsumed in reports about the larger programs to which the proceeds are applied or has not yet been produced. However, these agencies do collect and track this information and could report it with little difficulty. We found widespread agreement among the parties involved that the Breast Cancer Research, Heroes of 2001, and Stop Family Violence stamps were a success. Success can be measured in terms of funds raised, but also in less tangible ways, such as increased public awareness of an important issue. If the definition of semipostal success is narrowed specifically to the funds raised, however, the differences among these three make it all the more important to pay attention to the lessons learned, which can help in setting expectations for future semipostal sales. Given that new semipostals have been proposed in Congress and that the Service is authorized to issue additional ones, the lessons learned may be helpful in any future considerations. One of these lessons—the need for accountability—involves actions that can still be taken for the existing semipostals, not just applied to future ones. Through the Semipostal Authorization Act and its related regulations, Congress and the Service have taken measures to develop criteria for the selection of semipostal subjects, identification of recipient agencies, and reporting of program operations, but these criteria have thus far been largely bypassed due to the provisions that authorized these three semipostals. Because these three semipostals lie outside the Semipostal Authorization Act, they may benefit from application of its reporting requirement. Additionally, if any future semipostals are authorized by Congress separately from this act, this type of requirement could be included as part of the legislation in order to ensure greater accountability and greater support for the semipostals. To enhance accountability for semipostal proceeds, we recommend that the Secretary of Defense, Secretary of Homeland Security, and Secretary of Health and Human Services annually issue reports to the congressional committees with jurisdiction over the Service, as is currently required for agencies that are to receive semipostal proceeds under the Semipostal Authorization Act. Reports should include information on the amount of funding received, an accounting of how the funds were allocated or otherwise used, and any significant advances or accomplishments funded, in whole or in part, with funds received through the semipostal program. We requested comments on a draft of this report from the Service, ACF, DOD, FEMA, HHS, and NIH. The Service and DOD provided written comments, which are summarized below and reprinted in appendixes VI and VII, respectively. ACF, FEMA, HHS, and NIH did not provide comments on this report. The Service stated in its comments on the draft report that it generally agreed with the four key factors that we cited as affecting stamp sales. The Service agreed that the fund-raising cause and support of advocacy groups were key factors in the stamps’ success. However, the Service suggested that stamp design and its promotion of the stamps were of less importance to a semipostal’s success as a fund-raiser. The Service said that its experience indicates that a semipostal’s design plays little role in its effectiveness as a fund-raiser.
We based our conclusion that stamp design affects the extent to which consumers support a semipostal on our discussions with advocacy groups and fund-raising experts, who expressed concern that the design of the Stop Family Violence stamp—an image of a crying child—may have negatively affected the sales of that semipostal. Therefore, we continue to believe that the design was a factor in the stamp’s sales. Regarding promotional activities for specific semipostals, the Service correctly noted that its current policy requires that promotional costs be deducted from the funds raised, which can reduce the proceeds that federal agencies receive. We acknowledge that HHS chose not to have the Service develop an extensive advertising campaign after the Service changed its policy on semipostal promotional costs, and our finding is not meant as a criticism of the Service. Nevertheless, the striking differences in results lead us to conclude that the Service’s promotional efforts can make a difference: the Service spent about $1 million to promote the Breast Cancer Research stamp, which raised $44 million in 7 years; it spent about $1 million to promote the Heroes of 2001 stamp, which raised over $10.5 million in 2.5 years; and it spent about $77,000 to promote the Stop Family Violence stamp, which has raised nearly $2 million in 1.6 years. Our conclusion was reinforced by the fund-raising experts we spoke with, who agreed that in most cases there is a connection between the amount invested in a fund-raising effort and the amounts raised. DOD concurred with our recommendation to improve reporting of how semipostal proceeds are used. DOD explained that the Army will include in its annual report to Congress on “Congressionally Directed Medical Research Programs” a section on DOD’s use of Breast Cancer Research stamp proceeds. It noted that this report will highlight significant advances or accomplishments that were funded, in whole or in part, through these proceeds. We are sending copies of this report to Senators Dianne Feinstein and Kay Bailey Hutchison and Representative Joe Baca because of their interest in the Breast Cancer Research stamp; Senators Hillary Rodham Clinton and Charles E. Schumer because of their interest in the Heroes of 2001 stamp; the Postmaster General; the Chairman of the Postal Rate Commission; and other interested parties. We will make copies available to others upon request. This report will also be available on our Web site at no charge at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report included Gerald P. Barnes, Assistant Director; Kathleen Gilhooly; Molly Laster; Heather MacLeod; Joshua Margraf; Stan Stenersen; and Gregory Wilmoth. To determine the amount of money raised by the semipostals, we analyzed semipostal sales data provided to us by the U.S. Postal Service (Service). For each semipostal, these data included the amount of quarterly stamp sales and the amount of proceeds transferred to the designated federal agencies. The data also included administrative costs deducted by the Service from the total sales amounts, which we have reported in appendix II.
To determine the reliability of the data we received, we obtained and reviewed specific information on the Service’s data collection methods, including data storage and system controls. We determined that the data were sufficiently reliable for the purpose of this report. To identify potential factors affecting the patterns of fund-raising sales for each of the semipostals, we asked stakeholders for their opinions regarding such factors and identified common trends. As part of this effort, we spoke with Service officials; the American Philatelic Society; professional fund-raising organizations; and national advocacy groups affiliated with breast cancer, emergency relief personnel affected by the terrorist attacks of September 11, and domestic violence. We also spoke with Dr. Ernie Bodai, who is credited with conceiving the idea for the Breast Cancer Research stamp, and Ms. Betsy Mullen, who along with Dr. Bodai lobbied Congress for the stamp. Additionally, we gathered information about Service and advocacy group efforts to promote each of the semipostals. Table 8 identifies the stakeholders whom we spoke with. To determine how the designated federal agencies have used semipostal proceeds and reported results, we interviewed key officials from each agency receiving funds. These agencies included the National Cancer Institute (NCI) within the National Institutes of Health (NIH), the Army Medical Research and Materiel Command within the Department of Defense (DOD), the Federal Emergency Management Agency within the Department of Homeland Security, and the Administration for Children and Families within the Department of Health and Human Services. We also obtained and reviewed available agency documentation about grant programs funded with semipostal proceeds, including grant program development, purpose and goals, award and program guidelines, the number and amounts of awards, reporting requirements, performance measures, and grant outcomes. We did not assess each agency’s semipostal grant program, because this was not within the scope of our work, nor did we evaluate grant performance measures that might be included in agency reporting. Finally, to describe the monetary and other resources expended by the Service in operating and administering the semipostal program, we obtained and analyzed the Service’s data on the costs of administering semipostals as well as the costs the Service has recovered. We also interviewed officials in the Service’s Offices of Stamp Services and Finance to determine what progress the Service has made in revising its regulations. We spoke with officials from the Service’s Legal Counsel to determine whether the Service has established baseline costs for the semipostal program, as we previously recommended. The Service has incurred over $16.5 million in costs to operate and administer the Breast Cancer Research, Heroes of 2001, and Stop Family Violence stamps. Of this amount, the Service has recovered about $1.8 million from semipostal proceeds, with the remainder recovered through the First-Class postage rate. The Service’s costs related to the Breast Cancer Research stamp far exceed those of the other two semipostals, reflecting the length of time that the stamp has been offered for sale, among other factors. In our previous work, we expressed concern over the Service’s cost-recovery regulations.
Since our 2003 report, the Service has taken several steps to revise its cost-recovery regulations and has established baseline costs to identify and recover the Service’s reasonable costs related to the semipostals. According to Service policy, cost items recoverable from the funds raised by semipostals include, but are not limited to, packaging costs in excess of those for comparable stamps, printing costs for flyers or special receipts, costs of changes to equipment, costs of developing and executing marketing and promotional plans in excess of those for comparable stamps, and other costs that would not normally have been incurred for comparable stamps. Specifically, the Service has identified 13 cost categories that it uses to track semipostal costs. These categories include the following: stamp production and printing; withdrawing stamps from sale; destroying unsold stamps; printing flyers and special receipts; developing and executing marketing and promotional plans; and other costs (legal, market research, and consulting). Costs reported by the Service totaled $16.5 million through March 31, 2005 (see table 9). Costs for the Breast Cancer Research stamp accounted for more than $11 million of this amount. The Service determined that about $1.8 million of the total costs related to the three stamps represented costs that were attributable specifically to the semipostals and would not normally have been incurred for comparable stamps, and therefore needed to be recovered. The recovered amounts ranged from over $1 million for the Breast Cancer Research stamp to just over $200,000 for the Stop Family Violence stamp. The Service reported that the majority of costs incurred by the semipostals were covered by the First-Class postage rate and were not recovered from the proceeds. Table 9 describes the semipostal costs incurred and recovered by the Service. The specific costs recovered from surcharge revenues varied by semipostal not only in amount but also, to a degree, in the type of expenditure (see tables 10 to 12, which show costs for each semipostal). For example, when the Breast Cancer Research and Heroes of 2001 stamps were issued, the Service had a budget to advertise stamps. Both semipostals incurred advertising costs of about $1 million, and because advertising costs would be incurred for comparable stamps, the Service did not recover those costs. When the Stop Family Violence stamp was issued, the Service reduced its overall budget and eliminated, among other things, all stamp advertising, including that for semipostals. Subsequently, the Service established a policy that all costs incurred for advertising semipostals would be deducted from the applicable semipostal’s surcharge revenue. Therefore, the advertising costs incurred ($77,000) for this semipostal were recovered from the surcharge revenue. While policies changed for some cost categories, they remained consistent for others, such as design and production and printing. In our September 2003 report on the Breast Cancer Research stamp, we recommended that the Service reexamine and, as necessary, revise its cost-recovery regulations to ensure that the Service establishes baseline costs for comparable stamps and uses these baselines to identify and recover costs from the Breast Cancer Research stamp’s surcharge revenue. The Service has taken several steps to revise its regulations, including the following: 1. The final rule in 39 C.F.R.
§551.8, in effect since February 5, 2004, clarifies Service cost offset policies and procedures for the semipostal program. Specific changes include expanding the types of “comparable stamps” that could be used in conducting cost comparisons to allow other types of stamps (such as definitive or special issue stamps) to serve as a baseline; allowing the use of different comparable stamps for specific cost categories; clarifying that costs that do not need to be tracked include not only costs that are too burdensome to track but also those that are too burdensome to estimate; and clarifying that several types of costs could be recovered when they materially exceed the costs of comparable stamps. 2. The Service also amended 39 C.F.R. §551.8(e), effective February 9, 2005, to delete the word “may” from the list of cost items recoverable from the surcharge revenue, making recovery of the listed costs mandatory rather than optional. Additionally, we recommended that the Service establish and publish baseline costs to provide assurance that the Service is recovering all reasonable costs of the Breast Cancer Research stamp from the surcharge revenue. In response, on June 25, 2004, the Service provided a copy of its baseline analysis to both Congress and GAO in a report entitled United States Postal Service: Response to the General Accounting Office Recommendations on the Breast Cancer Research Stamp. In this analysis, the Office of Stamp Services and Office of Accounting identified comparable stamps and created a profile of their typical cost characteristics, thereby establishing a baseline for Breast Cancer Research stamp cost recovery. Additionally, Service officials reported that they would use the baseline for the other semipostals. Congress has selected the subject matter for the three semipostals issued to date. In each case, the Service has then applied the same design process used for regular commemorative stamps. According to Service officials, most subjects that appear on commemorative stamps are the result of suggestions sent in by the public, which number about 50,000 annually. In the case of commemorative stamps, the Postmaster General determines what stamps will be produced with the assistance of the Citizens’ Stamp Advisory Committee (CSAC), which works on behalf of the Postmaster General to evaluate the merits of all stamp proposals and selects artwork that best represents the subject matter. Since the three existing semipostals were mandated by Congress, the Service and CSAC were not involved in selecting the subject matter. However, the rest of the stamp design process was the same, with CSAC determining what design would be used and the Postmaster General giving final approval. Figure 8 shows the three semipostals. The Breast Cancer Research stamp was designed by Ethel Kessler of Bethesda, MD, and features the phrases "Fund the Fight" and "Find a Cure." Whitney Sherman of Baltimore provided the illustration of Diana, mythical goddess of the hunt, who is reaching behind her head to pull an arrow from her quiver to fend off an enemy—in this case, breast cancer. This image reflects the same position that a woman assumes for breast self-examination and mammography. The various colors represent the diversity of Americans affected by breast cancer. The Heroes of 2001 stamp was designed by Derry Noyes of Washington, D.C., and features a detail of a photograph by Thomas E. Franklin.
The photograph shows three firefighters, each of whom participated in the September 11 rescue efforts, raising the U.S. flag in the ruins of the World Trade Center at Ground Zero in New York. The flag had been discovered in a boat near the area and was raised on a pole found in the rubble. The space between the foreground and background of the picture, which was about 100 yards, helps convey the enormity of the debris and the task at hand. According to the photographer, the raising of the flag symbolizes the strength of the firefighters and of the American people battling the unimaginable. All three firefighters and the photographer attended the stamp’s unveiling ceremony, which marked the 6-month anniversary of the September 11 terrorist attacks. When art director Carl T. Herrman selected Monique Blais, a six-year-old from Santa Barbara, CA, to model for a photograph that was to be the original design of the Stop Family Violence stamp, his intention was to photograph Blais erasing a domestic violence image from a chalkboard—symbolizing eradication of the issue. During a break in the photo session, however, and without prompting, Blais began drawing her own picture of what she thought best represented domestic violence. Photographed by Philip Channing, Blais’s drawing became the basis for the final Stop Family Violence stamp design, which was later selected by a jury at the 34th Asiago International Prize for Philatelic Art, in Asiago, Italy, as the most beautiful social awareness-themed stamp issued during 2003. The young artist attended the stamp’s unveiling ceremony at the White House in 2003. As of April 2005, NIH had awarded 106 breast cancer research grants totaling about $16.1 million using proceeds from the Breast Cancer Research stamp. Individual awards ranged from $47,250 to $616,010 and averaged about $151,652. Funds received from sales of the Breast Cancer Research stamp were initially used to fund breast cancer research under NCI’s “Insight Awards to Stamp Out Breast Cancer” initiative, according to NIH officials. In 2003, NCI’s Executive Committee decided to direct the funds to a newly approved Breast Cancer Research stamp initiative entitled “Exceptional Opportunities in Breast Cancer Research.” Grants awarded under each program are listed below. The Insight Awards were designed to fund high-risk exploration by scientists who are employed outside the federal government and who conduct breast cancer research at their institutions. NCI distributed 86 Insight Awards totaling about $9.5 million. Most of the awards were for 2-year periods. Individual awards ranged from $47,250 to $142,500 and averaged $111,242, discounting a one-time supplement of $4,300. Table 13 provides information about each Insight Award funded with Breast Cancer Research stamp proceeds, including the fiscal year of the award, sponsoring institution, principal investigator, research area, and the amount of the award. The Exceptional Opportunities were designed to advance breast cancer research by funding high-quality, peer-reviewed breast cancer grant applications that are outside the current funding ability of NCI. When NIH began awarding these grants, the number of annual awards decreased from about 29 per year to 10, while the average amount tripled. In all, NCI disbursed Breast Cancer Research stamp proceeds to 20 Exceptional Opportunities awards, each funded for a maximum of 4 years.
The awards totaled about $6.6 million and covered research areas that included prevention, diagnosis, biology, and treatment. Individual awards ranged from $81,000 to $616,010 and averaged $330,763. Table 14 provides information about each Exceptional Opportunities Award, including the fiscal year of the award, sponsoring institution, principal investigator, research area, and the amount of the award. As of April 2005, DOD had awarded 27 breast cancer research grants totaling about $11 million using proceeds from the Breast Cancer Research stamp. Individual awards ranged from $5,000 to $767,171 and averaged $400,405. DOD applies Breast Cancer Research stamp proceeds to its Breast Cancer Research Program in order to fund Idea Awards, which are grants that focus on innovative approaches to breast cancer research and cover research areas such as genetics, biology, imaging, epidemiology, immunology, and therapy. According to DOD officials, about $500,000 of the transferred funds had been used for overhead costs. Table 15 provides information about each Idea Award funded with Breast Cancer Research stamp proceeds, including the fiscal year of the award, sponsoring institution, principal investigator, research area, and the amount of the award. | Congress has directed the U.S. Postal Service to issue three fund-raising stamps, also called semipostals, since 1998. These stamps are sold at a higher price than First-Class stamps, with the difference going to federal agencies for specific causes. The proceeds from the three stamps address breast cancer research, assistance to families of emergency personnel killed or permanently disabled in the terrorist attacks of September 11, and domestic violence. The law authorizing the Breast Cancer Research stamp directed GAO to report on the fund-raising results. To provide additional information to the Congress, GAO expanded the study to include all three semipostals. GAO's study addressed (1) the amounts raised and the factors affecting sales, (2) how the designated agencies used the proceeds and reported the results, and (3) lessons learned for the Postal Service, agencies receiving the proceeds, and others. Over $56 million has been raised through semipostal sales as of June 2005, and sales were likely affected by several key factors. Individually, proceeds totaled $44 million for the Breast Cancer Research stamp, over $10.5 million for the Heroes of 2001 stamp, and nearly $2 million for the Stop Family Violence stamp. Sales patterns and levels differed greatly, with four key factors affecting sales patterns: (1) fund-raising cause, (2) support of advocacy groups, (3) stamp design, and (4) promotion by the Postal Service. The designated federal agencies currently award or plan to award grants with the proceeds; none of the agencies has reported specifically on results. Breast Cancer Research stamp proceeds have been used to award research grants by the National Institutes of Health and the Department of Defense. No grants have yet been awarded with the proceeds from the two other semipostals. The Federal Emergency Management Agency plans to distribute Heroes of 2001 stamp proceeds through grants to families of emergency personnel killed or permanently disabled from the September 11 attacks, while the Department of Health and Human Services plans to use Stop Family Violence stamp proceeds for grants to organizations for projects aimed at enhancing services to children exposed to domestic violence. 
Key lessons have emerged from the three semipostals: (1) the nature of the charitable cause can greatly affect sales patterns and other results. A disaster, for example, is more likely to have a brief but intense response, while an ongoing health issue will have a longer one; (2) early and continued involvement of advocacy groups helps sustain semipostal support; (3) stamp design, promotion, and clear understanding about how proceeds will be used can greatly affect consumers' response; (4) semipostals generate proceeds immediately, but the logistics of using the moneys raised take much longer; and (5) reporting can enhance accountability. Congress included a reporting requirement in the Semipostal Authorization Act of 2000, but these three semipostals are not subject to that requirement. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
For the past several years, concerns about the cost of operating and maintaining federal recreation sites within the federal land management agencies have led the Congress to provide a significant new source of funds. This additional source of funding—the Recreational Fee Demonstration Program—was authorized in 1996. The fee demonstration program authorized the Bureau of Land Management, Fish and Wildlife Service, National Park Service, and the Forest Service to experiment with new ways to administer existing fee revenues and to establish new recreation entrance and user fees. The current authorization for the program expires December 31, 2005. Previously, all sites collecting entrance and user fees deposited the revenue into a special U.S. Treasury account to be used for certain purposes, including resource protection and maintenance activities, and funds in this account only became available through congressional appropriations. The fee demonstration program currently allows agencies to maintain fee revenues in special U.S. Treasury accounts for use without further appropriation: 80 percent of the fees are maintained in an account for use at the site, and the remaining 20 percent are maintained in another account for use on an agency-wide basis. As a result, these revenues have yielded substantial benefits for local recreation sites by funding significant on-the-ground improvements. From the inception of the Recreational Fee Demonstration Program, the four participating agencies have collected over $1 billion in recreation fees from the public. The Department of the Interior’s and the Department of Agriculture’s most recent budget requests indicate that the agencies expect to collect $138 million and $46 million, respectively, from the fee demonstration program in fiscal year 2005. H.R. 3283, as proposed, would provide a permanent source of revenue for federal land management agencies to use, among other things, to help address the backlog in repair and maintenance of federal facilities and infrastructure. One of the principal uses of the revenues generated under the existing Recreational Fee Demonstration Program is for participating agencies to reduce their respective maintenance backlogs. The Department of the Interior owns, builds, purchases, and contracts services for such assets as visitor centers, roads, bridges, dams, and reservoirs, many of which are deteriorating and in need of repair or maintenance. We have identified Interior’s land management agencies’ inability to reduce their maintenance backlogs as a major management challenge. According to the Department of the Interior’s latest estimates, the deferred maintenance backlog for its participating agencies ranged from about $5.1 billion to $8.3 billion. Table 1 shows the Department’s estimate of deferred maintenance for its agencies participating in the Recreational Fee Demonstration Program. Of the current participating agencies within Interior, the National Park Service has the largest estimated maintenance backlog—ranging from $4 billion to nearly $7 billion. As we have previously reported, the Park Service’s problems with maintaining its facilities have steadily worsened, in part because the agency lacks accurate data on the facilities that need to be maintained or on their condition. As a result, the Park Service cannot effectively determine its maintenance needs, the amount of funding needed to address them, or what progress, if any, it has made in closing the maintenance gap.
Although the Park Service has used some of the revenues generated from the fee demonstration program to address its high-priority maintenance needs, without accurate and reliable data, it cannot demonstrate the effect of fee demonstration revenues in improving the maintenance of its facilities. The Park Service has acknowledged the problems associated with not having an accurate and reliable estimate of its maintenance needs and promised to develop an asset management process that, when operable, should provide a systematic method for documenting deferred maintenance needs and tracking progress in reducing the amount of deferred maintenance. Furthermore, the new process should enable the agency to develop (1) a reliable inventory of its assets, (2) a process for reporting on the condition of each asset, and (3) a system-wide methodology for estimating its deferred maintenance costs. In 2002, we identified some areas that the agency needed to address in order to improve the performance of the process, including the need to develop costs and schedules for completing the implementation of the process, to better coordinate tracking of the process among Park Service headquarters units to avoid duplication of effort within the agency, and to better define its approach for determining the condition of its assets and how much the assessments will cost. In our last testimony on this issue before this Subcommittee in September 2003, we stated that the complete implementation of the new process would not occur until fiscal year 2006, but that the agency had completed, or nearly completed, a number of substantial and important steps to improve the process. The two other Interior agencies participating in the program—the Fish and Wildlife Service and the Bureau of Land Management—also report deferred maintenance backlogs of about $1 billion and $330,000, respectively. We do not have any information at this time on the effectiveness of the program in reducing these backlogs. The Forest Service also has an estimated $8 billion maintenance backlog, most of which is needed to maintain forest roads and bridges. In September 2003, we reported that the Forest Service (like the Park Service) had no effective means for measuring how much of the fee demonstration revenues it had spent on deferred maintenance or the impact that the fee program had had on reducing its deferred maintenance needs. Although the Forest Service has recognized the significance of its deferred maintenance problem, it does not have a systematic method for compiling the information needed to provide a reliable estimate of its deferred maintenance needs. Furthermore, the agency has not developed a process to track deferred maintenance expenditures from fee demonstration revenues. As a result, even if the agency knew how much fee revenue it spent on deferred maintenance, it could not determine the extent to which these revenues had reduced its overall deferred maintenance needs. Forest Service officials provided several reasons why the agency had not developed a process to track deferred maintenance expenditures from the demonstration revenues. First, they said that the agency chose to use its fee demonstration revenue to improve and enhance on-site visitor services rather than to develop and implement a system for tracking deferred maintenance spending. Second, the agency was not required to measure the impact of fee revenues on deferred maintenance.
Finally, because the fee demonstration program was temporary, agency officials had concerns about developing a process for tracking deferred maintenance, not knowing if the program would subsequently be made permanent. H.R. 3283 would provide participating agencies with a permanent source of funds to supplement existing appropriations and to better address maintenance backlogs. Furthermore, by making the program permanent, H.R. 3283 could provide participating agencies like the Forest Service with an incentive to develop a system to track their deferred maintenance backlogs. The existing fee demonstration program requires federal land management agencies to maintain at least 80 percent of the fee revenues for use on-site. In a 1998 report, we suggested that, in order to provide greater opportunities to address high priority needs of the agencies, the Congress consider modifying the current requirement to grant agencies greater flexibility in using fee revenues. H.R. 3283 provides the agencies with flexibility to reduce the percentage of revenues spent on-site down to 60 percent. We also reported that the requirement that at least 80 percent of the revenues be maintained for use at the collection site may inadvertently create funding imbalances between sites and that some heavily visited sites may reach a point where they have more revenues than they need for their projects, while other sites would still fall short. In 1999, we testified that some demonstration sites were generating so much revenue as to raise questions about their long-term ability to spend these revenues on high-priority items. In contrast, we warned that sites outside the demonstration program, as well as demonstration sites that did not collect as much in fee revenues, may have high-priority needs that remained unmet. As a result, some of the agencies’ highest-priority needs might not be addressed. Our testimony indicated that, at many sites in the demonstration program, the increased fee revenues amounted to 20 percent or more of the sites’ annual operating budgets, allowing such sites to address past unmet needs in maintenance, resource protection, and visitor services. While these sites could address their needs within a few years, the 80-percent requirement could, over time, preclude the agencies from redistributing fee revenues to meet more pressing needs at other sites. Our November 2001 report confirmed that such imbalances had begun to occur. Officials from the land management agencies acknowledged that some heavily visited sites with large fee revenues may eventually collect more revenue than they need to address their priorities, while other lower-revenue generating sites may have limited or no fee revenues to meet their needs. To address this imbalance, we suggested that the Congress consider modifying the current requirement that 80 percent of fee revenue be maintained for use by the sites generating the revenues to allow for greater flexibility in using fee revenues. H.R. 3283 would still generally require agencies to maintain at least 80 percent of fee revenues for use on-site. However, if the Secretary of the Interior determined that the revenues collected at a site exceeded the reasonable needs of the unit for which expenditures may be made for that fiscal year, under H.R. 3283 the Secretary could then reduce the percentage of on-site expenditures to 60 percent and transfer the remainder to meet other priority needs across the agency. 
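The allocation rule at issue here reduces to simple arithmetic: under the demonstration program, at least 80 percent of a site's fee revenue stays on-site, and under H.R. 3283 the Secretary could lower that share to 60 percent for a site whose collections exceed its reasonable needs. The following is a minimal sketch of that logic in Python; the function name and the site figures are hypothetical illustrations, not drawn from the bill text.

```python
def allocate_fee_revenue(collected, reasonable_needs, excess_determined=False):
    """Split a site's fee revenue between on-site and agency-wide accounts.

    Under the demonstration program, at least 80 percent stays on-site.
    Under H.R. 3283 as described above, if the Secretary determines that
    collections exceed the site's reasonable needs for the fiscal year,
    the on-site share may be reduced to as low as 60 percent.
    """
    on_site_share = 0.60 if (excess_determined and collected > reasonable_needs) else 0.80
    on_site = collected * on_site_share
    agency_wide = collected - on_site
    return on_site, agency_wide

# Hypothetical sites: a heavily visited park and a low-revenue site.
for name, collected, needs, excess in [
    ("High-revenue site", 2_000_000, 1_200_000, True),
    ("Low-revenue site", 150_000, 400_000, False),
]:
    on_site, agency_wide = allocate_fee_revenue(collected, needs, excess)
    print(f"{name}: ${on_site:,.0f} on-site, ${agency_wide:,.0f} agency-wide")
```

Under these assumed figures, the high-revenue site would send 40 percent of its collections to agency-wide priorities, while the low-revenue site would keep the full 80 percent on-site, which is the rebalancing the bill is meant to allow.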
The need for flexibility in transferring revenue must also be balanced against the necessity of keeping sufficient funds on-site to maintain incentives at fee-collecting units and to maintain the support of the visitors. Such a balance is of particular concern to the Forest Service, which has found that visitors generally support the program so long as the fees are used on-site and they can see improvements to the site where they pay fees. Accordingly, under the existing fee demonstration program, the Forest Service has committed to retaining 90 to 100 percent of the fees on-site. As such, H.R. 3283 would not likely change the Forest Service’s use of collected fees. However, it would provide the Forest Service, as well as the other agencies, with the flexibility to balance the need to provide incentives at fee-collecting sites and support of visitors against transferring revenues to other sites. The legislative history of the fee demonstration program places an emphasis on participating agency collaboration to minimize or eliminate confusion for visitors where multiple fees could be charged to visit recreation sites in the same area. Our prior work has pointed to the need for more effective coordination and cooperation among the agencies to better serve visitors by making the payment of fees more convenient and equitable while, at the same time, reducing visitor confusion about similar or multiple fees being charged at nearby or adjacent federal recreation sites. For example, sites do not consistently accept agency and interagency passes, resulting in visitor confusion and, in some cases, overlapping or duplicative fees for the same or similar activities. H.R. 3283 would allow for improved service to visitors by coordinating federal agency fee-collection activities. First, the act would standardize the types of fees that the federal land management agencies use. Second, it would create a single national pass that would provide visitors access to recreation sites managed by different agencies. Third, it would allow for the coordination of fees on a regional level for access to multiple nearby sites. In November 2001, we reported that agencies had not pursued opportunities to coordinate their fees better among their own sites, with other agencies, or with other nearby, nonfederal recreational sites. As a result, visitors often had to pay fees that were sometimes overlapping, duplicative, or confusing. Limited fee coordination by the four agencies has permitted confusing fee situations to persist. At some sites, an entrance fee may be charged for one activity whereas a user fee may be charged for essentially the same activity at a nearby site. For example, visitors who entered either Olympic National Park or the Olympic National Forest in Washington state for day hiking were engaged in the same recreational activity—obtaining general access to federal lands—but were charged distinct entrance and user fees. For a 1-day hike in Olympic National Park, users paid a $10 per-vehicle entry fee (good for 1 week), whereas hikers using trailheads in Olympic National Forest were charged a daily user fee of $5 per vehicle for trailhead parking. Also, holders of the interagency Golden Eagle Passport—a $65 nationwide pass that provides access to all federal recreation sites that charge entrance fees—could use the pass to enter Olympic National Park, but had to pay the Forest Service’s trailhead parking fee because the pass covered only entrance fees and not user fees.
However, the two agencies now allow holders of the Golden Eagle Passport to use it for trailhead parking at Olympic National Forest. Similarly, confusing and inconsistent fee situations also occur at similar types of sites within the same agency. For example, visitors to some Park Service national historic sites, such as the San Juan National Historic Site in Puerto Rico, pay a user fee and have access to all amenities at the sites, such as historic buildings. However, other Park Service historic sites, such as the Roosevelt/Vanderbilt Complex in New York State, charge no user fees, but tours of the primary residences require the payment of entrance fees. Visitors in possession of an annual pass that covers entrance fees, such as the National Parks Pass, may be further confused because their annual entrance pass is sufficient for admission to a user fee site, such as the San Juan National Historic Site, but not sufficient to allow them to enter certain buildings on the Roosevelt/Vanderbilt Complex, which charge entrance fees. H.R. 3283 would streamline the recreational fee program by providing a standard, 3-tiered fee structure across federal land management agencies: a basic recreation fee, an expanded recreation fee, and a special recreation permit fee. H.R. 3283 establishes several areas where a basic recreation fee may be charged. For example, the basic recreation fee offers access to, among other areas, National Park System units, National Conservation Areas, and National Recreation Areas. Expanded recreation fees are charged either in addition to the basic recreation fee or on their own when the visitor uses additional facilities or services, such as a developed campground or an equipment rental. A special recreation permit fee is charged when the visitor participates in an activity such as a commercial tour, competitive event, or an outfitting or guiding activity. In November 2001, we reported another example of an interagency issue that needed to be addressed—the inconsistency and confusion surrounding the acceptance and use of the $65 Golden Eagle Passport. The annual pass provides visitors with unlimited access to federal recreation sites that charge an entrance fee. However, many sites do not charge entrance fees to gain access to a site and instead charge a user fee. For example, Yellowstone National Park, Acadia National Park, and the Eisenhower National Historic Site charge entrance fees. But sites like Wind Cave National Park charge user fees for general access. If user fees are charged in lieu of entrance fees, the Golden Eagle Passport is generally not accepted even though, to the visitor with a Golden Eagle Passport, there is no practical difference. Further exacerbating the public’s confusion over payment of user or entrance fees was the implementation of the Park Service’s single-agency National Parks Pass in April 2000. This $50 pass admits the holder, spouse, children, and parents to all National Park Service sites that charge an entrance fee for a full year. However, the Parks Pass does not admit the cardholder to the Park Service sites that charge a user fee, nor is it accepted for admittance to other sites in the Forest Service and in the Department of the Interior, including the Fish and Wildlife Service sites. H.R.
3283 would eliminate the current national passes and replace them with one federal lands pass—called the “America the Beautiful—the National Parks and Federal Recreation Lands Pass”—for use at any site of a federal land management agency that charges a basic recreation fee. The act also calls for the Secretaries of Agriculture and the Interior to jointly establish the National Parks and Federal Recreation Lands Pass and to jointly issue guidelines on the administration of the pass. In addition, it requires that the Secretaries develop guidelines for establishing or changing fees and that these guidelines, among other things, would require federal land management agencies to coordinate with each other to the extent practicable when establishing or changing fees. H.R. 3283 would also provide local site managers the opportunity to coordinate and develop regional passes to reduce visitor confusion over access to adjacent sites managed by different agencies. When authorizing the demonstration program, the Congress called upon the agencies to coordinate multiple or overlapping fees. We reported in 1999 that the agencies were not taking advantage of this flexibility. For example, the Park Service and the Fish and Wildlife Service manage sites that share a common border on the same island in Maryland and Virginia—Assateague Island National Seashore and Chincoteague National Wildlife Refuge. When the agencies selected the two sites for the demonstration program, they decided to charge separate entrance fees. However, as we reported in 2001, the managers at these sites developed a reciprocal fee arrangement whereby each site accepted the fee paid at the other site to better accommodate the visitors. Resolving situations in which inconsistent and overlapping fees are charged for similar recreation activities would offer visitors a rational and consistent fee program. We stated that further coordination among the agencies participating in the fee demonstration program could reduce the confusion for visitors. We reported that demonstration sites may be reluctant to coordinate on fees partly because the program’s incentives are geared towards increasing their revenues. Because joint fee arrangements may potentially reduce revenues to specific sites, there may be a disincentive among these sites to coordinate. Nonetheless, we believe that the increase in service to the public might be worth a small reduction in revenues. Accordingly, we recommended that the Secretaries of Agriculture and the Interior direct the heads of the participating agencies to improve their service to visitors by better coordinating their fee collection activities under the Recreational Fee Demonstration Program. In response, in 2002, the Departments of the Interior and Agriculture formed the Interagency Recreational Fee Leadership Council to facilitate coordination and consistency among the agencies on recreation fee policies. We also recommended that the agencies approach such an analysis systematically, first by identifying other federal recreation areas close to each other and then, for each situation, determining whether a coordinated approach, such as a reciprocal fee arrangement, would better serve the visiting public. The agencies implemented this recommendation to a limited extent as evidenced by the reciprocal fee arrangement between Assateague Island National Seashore and Chincoteague National Wildlife Refuge. H.R. 
3283 offers federal agencies the opportunity to develop regional passes to offer access to sites managed by different federal, state, and local agencies. As we have reported in the past, if all four agencies are to improve interagency communication, coordination, and consistency so that the program becomes user-friendly, an effective mechanism is needed to ensure that interagency coordination occurs and to resolve interagency issues or disputes when they arise. Essentially, the fee demonstration program raises revenue for the participating sites to use for maintaining and improving the quality of visitor services and protecting the resources at federal recreation sites. The program has been successful in raising a significant amount of revenue. However, the agencies could enhance the quality of visitor services more by providing better overall management of the program. Several of the provisions in H.R. 3283 address many of the quality-of-service issues we have identified through our prior work, and, if the provisions are properly implemented, these services should improve. While the fee demonstration program provides funds to increase the quality of the visitor experience and enhance the protection of resources by, among other things, addressing a backlog of repair and maintenance needs, the program’s short- and long-term success lies in the flexibility it provides agencies to spend revenues and in the removal of any undesirable inequities, so that the agencies’ highest-priority needs are met. However, any changes to the program’s requirements should be balanced in such a way that fee-collecting sites would continue to have an incentive to collect fees and visitors who pay them will continue to support the program. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or Members of the Subcommittee may have. For further information about this testimony, please contact me at (202) 512-3841. Doreen Feldman, Roy Judy, Jonathan McMurray, Patrick Sigl, Paul Staley, Amy Webbink, and Arvin Wu made key contributions to this statement. The following is a listing of related GAO products on recreation fees, deferred maintenance, and other related issues. Recreation Fees: Information on Forest Service Management of Revenue from the Fee Demonstration Program. GAO-03-1161T. Washington, D.C.: September 17, 2003. Recreation Fees: Information on Forest Service Management of Revenue from the Fee Demonstration Program. GAO-03-470. Washington, D.C.: April 25, 2003. Recreation Fees: Management Improvements Can Help the Demonstration Program Enhance Visitor Services. GAO-02-10. Washington, D.C.: November 26, 2001. Recreational Fee Demonstration Program Survey. GAO-02-88SP. Washington, D.C.: November 1, 2001. National Park Service: Recreational Fee Demonstration Program Spending Priorities. GAO/RCED-00-37R. Washington, D.C.: November 18, 1999. Recreation Fees: Demonstration Has Increased Revenues, but Impact on Park Service Backlog Is Uncertain. GAO/T-RCED-99-101. Washington, D.C.: March 3, 1999. Recreation Fees: Demonstration Program Successful in Raising Revenues but Could Be Improved. GAO/T-RCED-99-77. Washington, D.C.: February 4, 1999. Recreation Fees: Demonstration Fee Program Successful in Raising Revenues but Could Be Improved. GAO/RCED-99-7. Washington, D.C.: November 20, 1998. National Park Service: Efforts Underway to Address Its Maintenance Backlog. GAO-03-1177T.
Washington, D.C.: September 27, 2003. National Park Service: Status of Agency Efforts to Address Its Maintenance Backlog. GAO-03-992T. Washington, D.C.: July 8, 2003. National Park Service: Status of Efforts to Develop Better Deferred Maintenance Data. GAO-02-568R. Washington, D.C.: April 12, 2002. National Park Service: Efforts to Identify and Manage the Maintenance Backlog. GAO/RCED-98-143. Washington, D.C.: May 14, 1998. National Park Service: Maintenance Backlog Issues. GAO/T-RCED-98-61. Washington, D.C.: February 4, 1998. Deferred Maintenance Reporting: Challenges to Implementation. GAO/AIMD-98-42. Washington, D.C.: January 30, 1998. Major Management Challenges and Program Risks, Department of the Interior. GAO-03-104. Washington, D.C.: January 2003. Major Management Challenges and Program Risks, Department of the Interior. GAO-01-249. Washington, D.C.: January 2001. Park Service: Managing for Results Could Strengthen Accountability. GAO/RCED-97-125. Washington, D.C.: April 10, 1997. | In 1996, the Congress authorized an experimental initiative called the Recreational Fee Demonstration Program that provides funds to increase the quality of visitor experience and enhance resource protection. Under the program, the Bureau of Land Management, Fish and Wildlife Service, and National Park Service--all within the Department of the Interior--and the Forest Service--within the U.S. Department of Agriculture--are authorized to establish, charge, collect, and use fees at a number of sites to, among other things, address a backlog of repair and maintenance needs. Also, sites may retain and use the fees they collect. The Congress is now considering, through H.R. 3283, whether to make the program permanent. Central to the debate is how effectively the agencies are using the revenues that they have collected. This testimony focuses on the potential effect of H.R. 3283 on the issues GAO raised previously in its work on the Recreational Fee Demonstration Program. Specifically, it examines the extent to which H.R. 3283 would affect (1) federal agencies' deferred maintenance programs, (2) the management and distribution of the revenue collected, and (3) interagency coordination on fee collection and use. H.R. 3283 would provide agencies with a permanent source of funds to better address their maintenance backlog, and by making the program permanent, the act would provide agencies incentive to develop a system to track their deferred maintenance backlogs. According to the Department of the Interior's latest estimates, the deferred maintenance backlog for the Interior agencies participating in the fee demonstration program ranges from $5.1 billion to $8.3 billion, with the Park Service alone accounting for an estimated $4 to $7 billion. Likewise, the Forest Service, the other participating agency, estimates its total deferred maintenance backlog to be about $8 billion.
GAO's prior work on the Park Service's and Forest Service's backlogs has demonstrated that neither agency has accurate and reliable information on its deferred maintenance needs, nor can either determine how much of the fee demonstration revenues it spends on reducing those needs. Furthermore, some agency officials have hesitated to divert resources to develop a process for tracking deferred maintenance because the fee demonstration program is temporary. H.R. 3283 would allow agencies to reduce the percentage of fee revenue used on-site down to 60 percent, thus providing the agencies with greater flexibility in how they use the revenues. Currently, the demonstration program requires federal land management agencies to maintain at least 80 percent of the collected fee revenues for use on-site. This requirement has helped some demonstration sites generate revenue in excess of their high-priority needs, but the high-priority needs at other sites, which did not collect as much in fee revenues, remained unmet. GAO has suggested that the Congress consider modifying the current 80-percent on-site spending requirement to provide agencies greater flexibility in using fee revenues. H.R. 3283 would standardize the types of fees federal land management agencies may use, create a single national pass that provides visitors general access to a variety of recreation sites managed by different agencies, and allow for the regional coordination of fees to access multiple nearby sites. GAO's prior reports have demonstrated the need for more effective coordination and cooperation among the agencies to better serve visitors by making the payment of fees more convenient and equitable while reducing visitor confusion about similar or multiple fees being charged at nearby or adjacent federal recreation sites. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Congress enacted the Coastal Zone Management Act in 1972 to balance the often competing demands for economic growth and development with the need to protect coastal resources. To accomplish the goals of the act, Congress established a framework for a voluntary federal and state coastal management partnership, the CZMP. The CZMP represents a unique federal-state partnership for protecting, restoring, and responsibly developing the nation’s coastal communities and resources, according to program documents. The act identifies specific goals for state programs that fall into six broad focus areas ranging from protecting and restoring coastal habitat to assisting with coastal community development efforts and improving government coordination and decision making (see table 1). States must submit comprehensive descriptions of their coastal management programs—which must be approved by the states’ governors—to NOAA for review and approval. As specified in the act, states must meet the following requirements, among others, to receive NOAA’s approval for their state programs: designate coastal zone boundaries that will be subject to the state program; define what constitutes permissible land and water uses in coastal zones; propose an organizational structure for implementing the state program, including the responsibilities of and relationships among local, state, regional, and interstate agencies; and demonstrate sufficient legal authorities to carry out the objectives and policies of the state program, including the means by which a state will regulate land and water uses, control development, and resolve conflicts among competing activities in coastal zones to ensure their wise use. The act provides states the flexibility to design programs that best address states’ unique coastal challenges, laws, and regulations, and participating states have taken various approaches to developing and carrying out their programs. For instance, there are generally two organizational structures used by states to implement their programs: (1) networked programs, which rely on multiple state and local agencies to implement their programs, and (2) non-networked, or comprehensive, state programs, which administer all aspects of the program through a single centralized agency. The coastal management activities carried out also vary across states, with some states focusing on permitting, mitigation, and enforcement activities, while other states focus on providing technical and financial assistance to local governments and nonprofits for local coastal protection and management projects. If states make changes to their programs, such as changes in their coastal zone boundaries or organizational structures, the states must submit those changes to NOAA for review and approval. The act includes two primary incentives to encourage states to develop coastal management programs and participate in the CZMP. First, participating states are eligible to receive federal funding from NOAA to support the implementation and management of their programs, which the agency receives annually through congressional appropriations. In fiscal year 2013, NOAA awarded participating states a total of approximately $61.3 million, a 9 percent decline from fiscal year 2008 awards, when it awarded just over $67.5 million across participating states. NOAA awards CZMP funding to individual states across three fund types—administrative, enhancement, and coastal nonpoint program—according to requirements in the act (see table 2).
The majority of funding NOAA awards through the CZMP is administrative funding. Administrative funding, which requires state matching funds, supports general implementation of the state’s coastal management program. Under the act, NOAA may also award a maximum of $10 million annually in enhancement program funding to participating states. Enhancement funding is to be used by states to develop program changes, or enhancements, to their NOAA-approved programs in one or more of nine enhancement objectives specified in the act, as listed in table 2. In addition, Congress has generally provided direction on the total amount of funds to be awarded through the coastal nonpoint program to assist with states’ coastal nonpoint pollution control programs, which are programs to ensure states have necessary tools and enforceable authorities to prevent and control polluted runoff in coastal areas. According to NOAA officials, funding has not been provided for this program since fiscal year 2009, when nearly $3.4 million was awarded to states. States may also use other sources of funding for their coastal nonpoint pollution control programs, including administrative and enhancement funding. Second, federal agency activities in or affecting the uses or resources of a participating state’s defined coastal zone are required to be consistent to the maximum extent practicable with enforceable policies of the state’s program. Under this provision, known as federal consistency, states with approved programs must have the opportunity to review proposed federal actions for consistency with enforceable policies of their state programs. Types of federal actions that may be reviewed by states include federal agency activities, such as improvements made to a military base; licenses or permits to nonfederal applicants; financial assistance to state and local governments; and outer continental shelf activities, such as oil and gas development. If a state finds that a federal activity is not consistent with the state’s enforceable policies, the state can object to the activity and work with the federal agency to resolve any differences between the proposed activity and state policies. All participating state programs have developed federal consistency review processes. Thirty-four out of 35 eligible states have federally approved coastal management programs (see fig. 1). Most state programs have been in existence for more than 30 years, with the earliest program approved in 1976, and 29 states having received federal approval for their programs by 1986. The most recent state to begin participating in the program is Illinois, which received federal approval in January 2012. NOAA’s Office of Ocean and Coastal Resource Management (OCRM) is responsible for general administration and oversight of the CZMP. NOAA plans to merge the OCRM with its Coastal Services Center—an office that provides coastal-related mapping tools and data; training on various coastal management issues such as climate adaptation and coastal restoration design and evaluation; and technical and other assistance to local, state, and regional coastal organizations—into a single office by the end of 2014. 
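Stepping back to the funding structure described above, the sketch below encodes the three CZMP fund types and the statutory conditions the report mentions: administrative funding requires state matching funds, and enhancement funding is capped at $10 million annually across all states. This is an illustrative, assumption-laden model, not NOAA's actual award logic; the class, the field names, and the match settings for the enhancement and coastal nonpoint funds (which the text above does not address) are ours.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FundType:
    name: str
    requires_state_match: bool            # stated above only for administrative funds
    annual_cap_all_states: Optional[int]  # dollars; None where no cap is described

FUND_TYPES = {
    "administrative": FundType("administrative", True, None),
    # Match requirements below are assumed; the report is silent on them.
    "enhancement": FundType("enhancement", False, 10_000_000),
    "coastal nonpoint": FundType("coastal nonpoint", False, None),
}

def within_enhancement_cap(awards_by_state: dict) -> bool:
    """Check total enhancement awards against the act's $10 million annual cap."""
    cap = FUND_TYPES["enhancement"].annual_cap_all_states
    return sum(awards_by_state.values()) <= cap

# Hypothetical awards to two states: $2.0 million total, well under the cap.
print(within_enhancement_cap({"ME": 900_000, "FL": 1_100_000}))  # True
```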
Under the current and planned office structure, NOAA officials are responsible for approving state programs and any program changes; administering federal funding to the states; providing technical assistance to states on, among other topics, the development of 5-year assessment and strategy reports that identify states’ priority needs and projects to address one or more of the nine enhancement objectives required for enhancement funding; and managing the CZMP performance measurement system. NOAA assigns coastal management specialists to work with individual state programs. As part of its administration of the program, NOAA evaluates program performance using its CZMP performance measurement system. NOAA began developing a framework for this performance measurement system in 2001, started piloting it in 2004, and fully implemented the system by 2008. The system consists of 15 performance measures that generally correspond with the goals of the act and two additional measures to track state financial expenditures. The 17 total performance measures incorporate individual data elements, plus additional subcategories of information that state programs collect and report into the system annually (see app. II). In addition, NOAA evaluators, who are in a different NOAA division from the specialists, are responsible for conducting individual state program evaluations, which are required under the act. State program evaluations are designed to examine the extent to which states have (1) implemented their approved programs, (2) addressed coastal management needs identified in the act, and (3) adhered to the terms of CZMP funds awarded through cooperative agreements. NOAA’s state program evaluation reports identify state accomplishments and make recommendations for improving states’ programs. NOAA’s recommendations are classified as either necessary actions—actions a state must take by a specific date, such as the next regularly scheduled evaluation—or program suggestions—actions it believes a state should take to improve its program. NOAA may withdraw approval for a state’s program and financial assistance in cases where states do not address necessary actions. NOAA had not withdrawn approval for a state program as of the end of fiscal year 2013, and, according to NOAA officials, few necessary actions have been identified in past state evaluations. In 2008, we examined NOAA’s process for awarding financial assistance to states and how the agency evaluated the effectiveness of the CZMP. Of the seven recommendations we made in 2008, NOAA disagreed with one, that the agency develop performance measures to evaluate the effectiveness of state programs in improving processes; NOAA agreed with the other six recommendations and has taken some actions to address them, as described in table 3. During fiscal years 2008 through 2013, the 34 participating states allocated a total of nearly $400 million in CZMP funds for a variety of activities, generally related to the broad goals for state programs outlined in the Coastal Zone Management Act. Each year, NOAA analyzes its cooperative agreements with states for CZMP funding and categorizes the states’ CZMP funding allocations as they correspond with the six focus areas based on the broad goals in the act, along with a seventh category to capture state program administrative costs, such as general program operations, supplies, and rent.
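NOAA's categorization exercise described above is essentially a mapping of cooperative-agreement line items into the six focus areas plus the administrative category, followed by a share calculation. Below is a minimal sketch of that aggregation; the category names follow the report, but the line items and dollar figures are invented for illustration.

```python
from collections import defaultdict

CATEGORIES = [
    "government coordination", "coastal habitat", "coastal hazards",
    "coastal water quality", "coastal community development",
    "public access", "program administration",
]

# Hypothetical line items from state cooperative agreements: (category, dollars).
line_items = [
    ("government coordination", 540_000),
    ("coastal habitat", 480_000),
    ("coastal hazards", 260_000),
    ("program administration", 220_000),
    ("public access", 120_000),
]

totals = defaultdict(float)
for category, dollars in line_items:
    assert category in CATEGORIES, f"unrecognized category: {category}"
    totals[category] += dollars

# Report each category's share of total allocations, as in figure 2.
grand_total = sum(totals.values())
for category in CATEGORIES:
    if totals[category]:
        print(f"{category}: {totals[category] / grand_total:.0%}")
```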
According to NOAA’s analysis, during fiscal years 2008 through 2013, states’ allocations of CZMP funds varied across the seven categories, with about half concentrated in support of activities related to two focus areas, government coordination and coastal habitat (see fig. 2). NOAA officials told us that, while states have the flexibility to design and implement programs that best meet their unique needs, the agency does influence how states allocate CZMP funds through (1) NOAA’s review and approval of states’ 5-year assessment and strategy reports required for enhancement funding, in which participating states prioritize projects that support program improvements, and (2) NOAA’s periodic state program evaluations, in which NOAA outlines necessary actions or makes program suggestions that can influence state program activities. NOAA officials said that they also informally shape or influence state program activities through ongoing discussions with state program officials about funding proposals or specific projects, such as how projects might be adjusted to address NOAA priorities. Examples of activities for which participating states allocated CZMP funds during fiscal years 2008 through 2013 in each of the six focus areas include the following: Government coordination. States allocated CZMP funds for activities including state and regional planning efforts that involve coordination among multiple levels of government and stakeholders to address complex and controversial coastal issues, such as comprehensive planning of ocean and nearshore areas, energy facility siting, or special area management planning; federal consistency activities; technical assistance to local governments; and public outreach and education on coastal issues, including website development and publications about a state program’s activities. According to NOAA’s analysis of cooperative agreements with states for CZMP funding, states allocated the largest amount of CZMP funding during the 6-year period—about 27 percent of total funding—to government coordination activities. We found that a number of state programs use CZMP funds to support participation in regional organizations involving ocean planning activities that entail coordination across federal, state, and local governments. For example, state program officials in some Northeast and Mid-Atlantic states participate in regional organizations, such as the Northeast Regional Ocean Council and Mid-Atlantic Regional Council on the Ocean, that have ocean resource data collection and planning efforts under way. We also found that most states we reviewed provide some type of technical or financial assistance to local governments to support local-level coastal management activities and projects. The Texas state program used coastal zone funds to support a multiyear marsh restoration project on the Texas Gulf Coast near Corpus Christi. Over the past 60 years, about 340 acres of coastal marsh habitat were lost due to the construction of an adjacent highway and subsequent erosion. A local nonprofit organization began restoring the marsh in 2005. The project involved scooping sand, clay, and shells from the bay bottom and piling the material into terraces and mounds; planting native grasses on the terraces to stabilize the structures and provide habitat; and constructing an outer rock berm to protect the new marsh area from strong waves in the bay.
Project officials told us Texas’s state program provided about $1 million in coastal zone funding, about 20 percent of the project’s total cost, to the nonprofit organization responsible for the project. Other funding to carry out the project was provided by the EPA, U.S. Fish and Wildlife Service, state government sources, and grants from private foundations. According to project officials, the project was completed in spring 2014 and has resulted in 160 acres of restored marsh that provide habitat for fish, crabs, shrimp, nesting birds, sea grass, and other plants and animals. The project also resulted in the creation of new opportunities for public recreation, such as fishing and kayaking, and the marsh protects the adjacent highway from coastal hazards, such as storms, according to project officials. Coastal habitat. States allocated CZMP funds for coastal habitat protection and restoration activities, including the acquisition or placement of easements on coastal lands; restoration of coastal habitats; data collection and mapping of coastal habitats; development of plans for habitat acquisition, restoration, and other habitat management needs; implementation of permitting and enforcement programs that protect coastal habitat through planning and regulation of development; or support of land management programs such as those for coastal preserves and parks. States also allocated CZMP funds for public outreach and education activities that focused on coastal habitat protection and restoration. According to NOAA’s analysis, approximately 24 percent of CZMP funds awarded during fiscal years 2008 through 2013 were allocated to coastal habitat protection and restoration activities. According to NOAA’s CZMP performance measurement system data from 2008 through 2013, states reported that they used CZMP funds to protect nearly 23,300 acres of coastal habitat through acquisition or easement, restore nearly 37,400 acres of coastal habitat, and through regulatory programs protect more than 123,000 net acres of coastal habitat. Coastal hazards. States allocated CZMP funds for activities that help coastal communities minimize risks from coastal hazards, such as storms, tsunamis, and sea-level rise, and improve hazard awareness and understanding. Such activities include assessment and planning efforts, such as developing mitigation plans, risk and vulnerability assessments, and data collection and mapping to identify and manage development in areas vulnerable to coastal hazards; implementation of hazard mitigation projects; implementation and enforcement of hazard policies, regulations, and requirements; and education and training on coastal hazard topics. According to NOAA’s analysis of cooperative agreements with states for CZMP funding, about 13 percent of CZMP funds awarded in fiscal years 2008 through 2013 were allocated for coastal hazards projects. Coastal hazards was the only focus area in which the share of CZMP funds allocated steadily increased over the 6-year period, from roughly 7 percent in fiscal year 2008 to about 16 percent in fiscal year 2013. Most state program officials we spoke with identified their work to help communities reduce future damage from hazardous events and impacts from sea-level rise related to climate change as among their more significant projects.
NOAA also identified coastal hazards work as a priority area, and in 2011, through the agency’s funding guidance, began encouraging states to use CZMP funding for projects that improve the resiliency of coastal communities in adapting to the impacts of coastal hazards and climate change. In addition, many of the projects that were awarded funding under the competitive Projects of Special Merit Program in fiscal years 2012 and 2013 were identified by states as addressing, at least in part, coastal hazards, according to NOAA officials. For example, South Carolina’s project to study tidal inlet dynamics and erosion and Maine’s adaptation planning project for its coastal parks both addressed coastal hazard issues. NOAA’s CZMP performance measurement system data for 2008 through 2013 show that states reported working with more than 410 communities to reduce risks from coastal hazards and nearly 230 communities to improve public awareness of coastal hazards issues. Estuaries—such as Sarasota Bay, which spans about 56 miles along the southwest Florida coast—are important, productive ecosystems that provide habitat for a diversity of species. Nonpoint source pollution carried through runoff influences the health of Sarasota Bay, which has limited tidal flushing, no major tributary, and receives most of its freshwater from rainfall and associated runoff. Florida’s coastal management program provided nearly $150,000 in coastal zone funds to support a multiyear water quality monitoring and modeling study in Sarasota Bay led by the Florida Fish and Wildlife Research Institute. The study was designed to help determine major factors affecting the ecological health of the bay. Specifically, coastal zone funding was used for statistical modeling to differentiate the effects of polluted runoff into the bay during storm events from the effects of algal or other natural sources of nutrients in the bay. Florida state program officials told us that understanding ecological responses in estuaries can facilitate planning to minimize potential impacts and help maintain overall ecosystem health. Continued water quality monitoring and modeling is being completed in the bay with other funding sources, according to Florida officials. Coastal water quality. States allocated CZMP funds for water quality permitting and enforcement activities, such as permitting of storm water discharges; activities and projects related to water quality management, including vegetative plantings or other nonstructural shoreline erosion control projects; water quality monitoring; activities and projects for local governments to improve water quality management; technical assistance, data collection, mapping, planning, and policy development to address water quality issues; marine debris and other coastal cleanup or pollution prevention programs; projects and activities that provide technical assistance to marinas to reduce nonpoint source pollution; and public outreach and education on water quality issues. Activities include those that support states in implementing their coastal nonpoint source pollution control programs. According to NOAA’s CZMP performance measurement system data, from 2008 through 2013, states reported that they worked with more than 680 communities to develop nonpoint source pollution management policies and plans, or complete related projects, and removed 27 million pounds of marine debris through coastal cleanup activities.
Coastal community development. States allocated CZMP funds for activities including planning and construction to support the redevelopment of urban waterfronts, ports, and harbors; technical assistance to local governments related to waterfront redevelopment; community planning, land-use planning, green infrastructure planning, and other sustainable development efforts; and public outreach and education activities specific to coastal community development issues. According to CZMP performance measurement system data from 2008 through 2013, states reported that they worked with more than 580 coastal communities to promote development and growth in ways that protect coastal resources and with more than 250 communities to redevelop ports and waterfronts.

Public access. States allocated CZMP funds for activities including creating new public access sites through easements or rights-of-way; enhancing existing public access through trails, handicap features, or educational signage; developing plans, collecting data, and providing technical assistance to local governments on public access planning; and conducting public outreach and education activities on public access issues. According to NOAA's analysis, states allocated the least amount of CZMP funding (about 6 percent of total CZMP funding) to activities that improve public access to the coast. Unlike in other focus areas, a number of states did not allocate funds for public access. According to NOAA officials, some states may not need to use CZMP funding to support public access projects, for example, because they already have sufficient public access to coastal areas. In total, according to CZMP performance measurement system data from 2008 through 2013, states reported that with CZMP funds and through regulatory programs they helped create nearly 700 new public coastal access sites and helped enhance nearly 1,500 existing sites.

State program officials told us that CZMP funding is important because it can help leverage other financial resources and provides sustained, multiyear funding for projects. We found that CZMP-funded projects and activities often involved partnerships with various entities and used multiple sources of funding. According to state program officials, CZMP funds were often the catalyst for obtaining additional financial assistance or other resources. For example, we visited a $5.2 million, multiyear marsh restoration project along the Texas Gulf coast that received nearly 20 percent of overall project funding through the CZMP and additional financial support from eight other federal, state, and private sources. Representatives from the nonprofit organization responsible for managing the project told us that CZMP funds received during the initial stages helped attract other funding partners needed for such a large-scale restoration project. Similarly, Virginia's program used $6,000 of its CZMP funding to leverage staff from six partner organizations to plan and conduct a Marine Debris Summit that laid the groundwork for developing a marine debris plan and establishing priorities for future work, which state program officials expect will serve as a model for other Mid-Atlantic states. Most of the state programs we reviewed also provide competitive grants or offer other assistance to leverage local resources to address coastal issues.
For example, Florida's program competitively awards a portion of its administrative funds annually through grants to coastal counties and municipalities for projects that help communities address a wide range of coastal issues, and these grants require local entities to match the state grants. Similarly, Maine's program uses CZMP funds annually to provide competitive grants to coastal communities for planning activities that support harbor management and development or improve shoreline access, but actual implementation of the projects must be funded through other sources.

NOAA's two primary performance assessment tools, the CZMP performance measurement system and its state program evaluations, have limitations, even with changes NOAA has made since 2008, and NOAA uses the performance information it collects to a limited extent in managing the CZMP. We found that NOAA's CZMP performance measurement system does not align with some key attributes of successful performance measures. In addition, in its method for selecting stakeholders to survey during state program evaluations, NOAA may be susceptible to collecting incomplete and biased information because, in part, it uses a single criterion to select stakeholders to survey. Furthermore, NOAA makes limited use of the performance information it collects—for instance, NOAA does not use data from its performance measurement system or its evaluations of state programs to improve implementation of the CZMP at the national level—and, as a result, may not be realizing the full benefit of collecting such information.

NOAA's CZMP performance measurement system, which the agency developed in response to congressional direction to assess the national impact of the CZMP, has limitations, even with changes the agency made to the system since our 2008 report. Specifically, NOAA has made changes to several aspects of the data collection and review components of its system, including the following: establishing a requirement, in 2010, that state programs submit documentation of source information to support their data submissions, such as documentation of the public access sites being reported for public access performance measures; refining, in 2009, 2010, and 2011, the names and definitions of some performance measures with the intention of clarifying the activities that a given measure is intended to capture; and issuing internal guidance, in 2010, for NOAA staff to review state-submitted data and accompanying documentation to ensure that only eligible activities are reported by the states, among other things.

With these changes, the system aligns with some key attributes of successful performance measures. In our past work, we found that successful performance measures typically align with key attributes including reliability, clarity, balance, numerical targets, and limited overlap, among others (see app. III for a complete list of key attributes we identified). In our current review, we found that some of the changes NOAA made to its CZMP performance measurement system since 2008 are consistent with such key attributes. For example, NOAA's requirement that state programs submit documentation of source information and its internal guidance for how staff are to review this documentation correspond with the key attribute of ensuring the reliability of performance measures.
In addition, NOAA's steps to refine the names and definitions of certain performance measures reflect the key attribute of clarity, meaning that measures are clearly stated and have names and definitions consistent with the methodology used to calculate them. On the other hand, we found limitations in the CZMP performance measurement system that did not align with the key attributes. For instance, in 2011, NOAA eliminated its coastal water quality focus area, corresponding to one of the six focus areas based on goals of the CZMP outlined in the act. In eliminating this focus area, NOAA removed five related performance measures; states continue to report on one measure related to coastal water quality, but do so under another focus area on coastal community development. Balance, or having a set of measures that cover a program's various goals, is a key attribute of successful performance measures. We found that having measures that correspond to various program goals provides agencies with a complete picture of performance. NOAA officials indicated that they eliminated the coastal water quality focus area based on a 2011 performance measurement system workgroup's recommendation to streamline the measurement system. They further explained that they took this action because state programs were no longer receiving coastal nonpoint program funding, which often funded activities in support of coastal water quality, and because activities under this focus area were often tied to the coastal community development focus area. In speaking with some state program officials, however, we found that improving coastal water quality remains a priority for their programs even without coastal nonpoint program funding. Similarly, representatives from the Coastal States Organization's coastal water quality workgroup indicated that many state programs have made progress in developing and implementing coastal nonpoint pollution control programs, but that these results are not quantified by NOAA.

In addition, NOAA has not established numerical targets for the measures in its CZMP performance measurement system for the purpose of tracking progress or assessing performance of the CZMP. Our past work found that numerical targets are a key attribute of successful performance measures because they allow managers to compare planned performance with actual results. In 2008, we recommended that NOAA establish numerical targets for performance measures to help track progress toward meeting program goals and to help assess overall CZMP effectiveness. NOAA's 2011 performance measurement system workgroup also recommended that NOAA set targets to help it more effectively measure and communicate CZMP performance. NOAA agreed with these recommendations but has not established such targets. NOAA officials explained that state programs vary widely, making it difficult to set targets at the national level. Officials also said that they first need to review the performance measures before they assess the feasibility of developing numerical targets. NOAA officials added that NOAA has set numerical targets for four CZMP performance measures, which are included in Commerce's department-wide goals related to environmental stewardship.
NOAA officials told us that they considered historical performance measure data and state programs' planned strategies when establishing these targets, but they do not use them to assess CZMP performance. We continue to believe that, without setting numerical targets for the CZMP performance measurement system, NOAA will not have a benchmark to help it determine the extent to which the CZMP may be meeting expectations.

Finally, the CZMP performance measurement system includes performance measures that require state programs to collect data that are already available to NOAA from other sources. Limited overlap, another key attribute of successful performance measures, calls for measures to produce new information beyond what is provided by other data sources; redundant or unnecessary performance information costs resources and clouds the bottom line by making managers sort through excess information. We found that the CZMP performance measurement system includes at least two financial measures for which states collect and submit financial expenditure data similar to data states already provide NOAA through their cooperative agreements. NOAA officials told us that, in developing the CZMP performance measurement system, they anticipated that including such measures would be useful for tracking the amount of CZMP funding used in different focus areas each year. However, NOAA used the financial information from its CZMP performance measurement system only to prepare a one-time summary of performance measure data published in 2013. In contrast, it uses financial information drawn from cooperative agreements on an annual basis to analyze states' planned uses of CZMP funding. NOAA officials acknowledged that they may need to review the utility of requiring state programs to collect financial expenditure data for the performance measurement system. By requiring states to collect and submit financial data similar to data that they already provide in their cooperative agreements and then making limited use of these data, NOAA may be unnecessarily burdening state programs with data collection requirements.

Several state program officials we interviewed told us that collecting data for the numerous data elements under the 17 performance measures is a time- and resource-intensive activity, with a few stating that this is particularly true relative to the amount of CZMP funds they receive. Some indicated, for instance, that they spend 30 staff days or more per year collecting these data. State officials said that data for the financial measures, in particular, are among the most time-consuming to collect and report to NOAA. Other state officials told us that collecting data on the number of educational and training events and participants for each focus area is especially time-consuming, with one official noting that collecting data on the number of participants is particularly burdensome when events are hosted by parties other than the program itself. NOAA officials told us they recognized the need to continue to review and potentially streamline or revise the CZMP performance measurement system, and that they intend to do so once the merger of OCRM and the Coastal Services Center is complete, which they expect to occur by the end of 2014.
In the interim, NOAA officials said that, at the beginning of fiscal year 2014, they initiated an effort to assess all performance measures collected by the various programs within the two offices, including the CZMP, to determine which measures may be most effective in tracking and communicating progress toward goals identified in the merged office's strategic plan. NOAA officials said they are committed to developing a strong framework for evaluating the performance of all programs under the merged coastal management office. However, the agency has not documented the approach it plans to take for these efforts. Federal internal control standards state the need for federal agencies to establish plans that encompass actions the agency will take to help ensure goals and objectives can be met. Without a documented approach for how it plans to assess its CZMP performance measurement system—including the scope and criteria it will use, such as how it will ensure its measures align with key attributes of successful performance measures—NOAA cannot demonstrate that its intended effort will improve its CZMP performance measurement system.

In 2013, NOAA revised its process for conducting state program evaluations, which are required under the Coastal Zone Management Act to assess state programs' adherence to the act's requirements, but we identified a limitation in NOAA's method for sampling stakeholders under the revised process. According to NOAA documents, the purpose of the revisions was to conduct evaluations more efficiently, at a reduced cost, while continuing to meet evaluation requirements outlined in the act. In revising its state program evaluations, NOAA made changes in the timing and methods for collecting information from participating states (see table 4). A NOAA official estimated that the agency's revised evaluation process will save the agency approximately $236,000 annually. NOAA began evaluating state programs using its revised process at the beginning of fiscal year 2014 with evaluations of seven state programs.

We did not evaluate NOAA's implementation of its revised state program evaluations because NOAA had not completed its first cycle at the time of our review and, therefore, it was too early to assess the effectiveness of its revisions. However, we did assess NOAA's revised evaluation design against our and others' work on program evaluations to identify standards for strong evaluation design. We were unable to evaluate the qualitative components of the revised evaluation design—including the change in the scope of the evaluations from NOAA's review of all aspects of each state program to a review of a few areas determined by NOAA—because the results of using these methods cannot be fully assessed until the evaluations have been conducted. We did, however, evaluate the steps NOAA laid out in its guidance on its methods for collecting information and identified a limitation in its method for sampling stakeholders to survey. Under its revised evaluation process, NOAA relies in part on information obtained through stakeholder surveys, but we found that, through its method of sampling stakeholders to survey, the agency may be susceptible to collecting incomplete and biased information.
According to NOAA guidance on its revised evaluations, stakeholder surveys are intended to provide information about stakeholders' perspectives and opinions across a range of topics, from a state program's top three strengths and weaknesses to opportunities for improving a program's federal consistency and permitting processes. The guidance states that NOAA will use stakeholder survey responses to identify evaluation target areas, as well as to obtain information about the extent to which a state program is performing effectively in areas outside of the target areas. NOAA officials indicated that they plan to analyze survey results by collating respondents' answers to identify common themes. NOAA evaluators will identify a sample of stakeholders to survey from 12 categories of organizations that stakeholders represent, including federal agencies, state agencies, nonprofit organizations, academic institutions, and local businesses and industries. According to NOAA officials, they adopted the criterion of stakeholder categories to ensure that stakeholders whose views were not consistently represented in the former evaluations—such as those from local businesses and industries—are included in evaluations conducted under the revised process. NOAA evaluators will select stakeholders from these 12 categories using a list of potential survey respondents compiled by state program officials and the NOAA specialists working with the state.

According to the Office of Management and Budget's Standards and Guidelines for Statistical Surveys, a survey sampling method should yield the data required to meet the objectives of the survey. Our previous work has found that strong program evaluations rely on data that sufficiently reflect the activities and conditions a program is expected to address. Because NOAA's stakeholder sampling method is guided by one criterion—categories of stakeholder organizations—NOAA may not collect information that reflects the various activities and aspects of the state programs. Specifically, under the act, NOAA is required to evaluate the extent to which state programs have addressed coastal management needs reflecting the six focus areas based on the goals identified in the act. In the absence of additional criteria for selecting stakeholders to survey, NOAA may select a sample of stakeholders whose work with a state program does not span all of the act's goals, potentially leaving NOAA without information to inform its evaluation of a state's performance on one or more goals. Such an information gap could be significant because stakeholder surveys are intended to be a main source of information on how well a program is performing in areas beyond those identified as target areas.

Furthermore, when an agency uses a nonprobabilistic sampling method, such as that being employed by NOAA for its stakeholder surveys, the Office of Management and Budget's survey guidelines state that the agency should demonstrate that it used an impartial, objective method to include or exclude people or organizations from a sample. Our previous work on program evaluation also found that evaluation data should be sufficiently free of bias or other errors that could lead to inaccurate conclusions. Because state program officials responsible for identifying potential stakeholders to survey have a vested interest in their programs, NOAA's process is susceptible to collecting biased information. NOAA specialists who work with state programs also contribute to the selection process.
However, we found that some NOAA specialists are not regionally located or have worked with a state program for only a short time; therefore, their knowledge or experience to inform the selection process may be limited. NOAA's evaluation guidance recognizes the need to assess its revised process in the future and states that the agency plans to evaluate the effectiveness and efficiency of its revised state program evaluation process after conducting 8 to 10 evaluations.

We found that in managing the CZMP, NOAA makes limited use of the performance information it collects. Our past work has found that performance information can be used across a range of management functions to improve programs and results, including to (1) identify problems or weaknesses in programs and take corrective actions, (2) set program priorities and develop strategies, (3) recognize and reward organizations that meet or exceed expectations, and (4) identify and share effective approaches to program implementation. For example, our previous work found that the Department of Labor effectively used performance measure data to identify technical assistance needs of state programs and to then provide assistance to try to improve performance. The department also used performance measure data as a basis for providing financial incentives to state programs that receive federal grants. We found that agencies realize the full benefit of collecting performance information only when they use such information to make decisions designed to improve results. NOAA collects performance information through its CZMP performance measurement system, state program evaluations, and other sources, but we found that the agency generally does not use the information it collects to help manage the CZMP at a national level. Specifically, we found the following:

NOAA uses its CZMP performance measurement system data to report on national program accomplishments on a limited basis. In particular, in 2013, NOAA produced one report summarizing performance measurement system data from 2008 through 2011. However, NOAA has not published additional similar reports and has not used performance measurement system data for other purposes. For example, the agency has not used the performance measurement system data to identify potential problems or weaknesses in the CZMP, set program priorities or strategies, or recognize and reward high-performing state programs—which may limit the usefulness of collecting such data.

NOAA does not use its state program evaluations to assess the performance or improve the implementation of the CZMP at the national level. NOAA uses its state program evaluations to identify state-specific accomplishments and encourage or require the state under evaluation to make improvements or take corrective actions. But, according to NOAA officials, the agency does not regularly analyze findings from individual state evaluations to identify and share effective approaches across states or to identify common performance weaknesses that may warrant national focus or assistance. Our analysis of recent NOAA evaluations of the seven state programs we reviewed found that NOAA recommended the states undertake similar actions. In five of the seven state program evaluations, for example, NOAA recommended that programs undertake strategic planning, and for four of the seven programs, NOAA recommended that programs improve their coordination with local governments or other partners who help carry out coastal management activities.
Yet NOAA has not analyzed these evaluations to identify common findings. One NOAA specialist we spoke with suggested that NOAA could also use the results of its state program evaluations to recognize and reward high-performing state programs. For instance, the NOAA specialist suggested that NOAA could modify the eligibility requirements for its Projects of Special Merit funding such that only high-performing programs, with any necessary actions from past state program evaluations fully implemented, would be eligible to receive funding.

NOAA does not use performance-related information from other sources to support its management of the CZMP. NOAA uses state programs' semiannual progress reports—which contain, among other things, "success stories," or examples of a state program successfully addressing coastal management issues—to track states' progress in implementing their cooperative agreements. However, NOAA does not use information from these reports to identify and promote effective approaches to coastal management by regularly sharing states' success stories across states or with other stakeholders. The 2011 performance measurement system workgroup composed of NOAA and state program officials recommended that NOAA develop a website to share success stories on an annual basis. NOAA did not implement this recommendation because, according to NOAA officials, at that time it was incorporating success stories into a quarterly newsletter. According to a NOAA document, the agency produced the newsletter in response to requests from states for more information about how other state programs address coastal management issues. NOAA stopped issuing this newsletter in 2012, when its office merger began, and NOAA officials said they are now evaluating how the merged office might best share information about the CZMP across state programs and with other stakeholders.

NOAA's strategic plan for its merged coastal management office recognizes the importance of using and reporting performance information. According to this plan, NOAA is committed to maintaining a culture of monitoring and evaluation to improve the implementation of its programs. We found, however, that the strategic plan does not include a documented strategy for using the performance data NOAA collects through its CZMP performance measurement system, state program evaluations, or other sources of information, such as states' semiannual progress reports, to manage the CZMP. NOAA officials told us that because the office merger is under way, they have not formulated a strategy for how the merged office will use performance data to inform and manage the CZMP, but they recognized the need to do so once the merger is complete. Federal internal control standards state the need for federal agencies to document management approaches to ensure goals and objectives can be met. Without a documented strategy for using the full range of performance information it collects, NOAA may not be taking full advantage of the performance information that its specialists, evaluators, and state program officials spend time and resources collecting, and it cannot ensure that it is realizing the full benefit of collecting such information, such as identifying common problems in state programs and taking corrective actions, setting national program priorities and developing strategies, recognizing state programs that exceed expectations, or identifying and sharing effective approaches to program implementation.
Finally, NOAA has not taken steps to integrate data from its CZMP performance measurement system with information from its state program evaluations to develop a complete picture of the CZMP's performance, as we recommended in our 2008 report. In 2008, we found that NOAA was not integrating quantitative national performance measure data with qualitative information from state program evaluations to develop a more comprehensive assessment of the CZMP's performance. NOAA agreed with our recommendation to develop an approach for integrating the two types of information and, in response, tasked the 2011 performance measurement system workgroup with developing a method for better communicating performance measure data. The workgroup recommended a template for communicating program results that includes quantitative national performance measure data and qualitative success stories from states' semiannual progress reports. However, NOAA has not drawn on this quantitative and qualitative information for purposes other than producing a report in 2013 summarizing performance measurement system data. Specifically, NOAA has not integrated quantitative and qualitative information to better understand program performance, improve its assessment of difficult-to-measure activities, or validate its assessments of program progress. We have previously found that agencies that used multiple sources of data to assess performance had information that covered more aspects of program performance than those that relied on a single source. We also found that agencies can improve their performance assessments by using program evaluation information to validate performance measurement system data. We continue to believe that developing an approach to combine performance information from its CZMP performance measurement system and state program evaluations could help NOAA obtain a more complete picture of CZMP performance.

The CZMP plays an integral role in helping states protect, restore, and manage the development of the nation's coastal resources and habitats. In managing the CZMP, NOAA is challenged with the task of assessing the performance of a program composed of partnerships with 34 individual states, each with unique coastal habitats and differing laws, organizational structures, and funding priorities. NOAA is to be commended for its progress in improving its two primary performance assessment tools—its CZMP performance measurement system and state program evaluations—since we last reviewed the agency's performance assessment processes in 2008. We are encouraged by NOAA's recognition of the importance of using performance information to improve the implementation of the CZMP. However, NOAA does not use, and does not have a documented strategy for how it will use, the performance information it collects from its CZMP performance measurement system, state program evaluations, or other sources of performance-related information, as appropriate, to aid its management of the CZMP. Without a documented strategy for using the range of its performance information, NOAA cannot ensure that it is collecting the most meaningful information and realizing the full benefit of the significant amount of information it and the states collect, such as identifying common problems in state programs and taking corrective actions, setting national program priorities and developing strategies, recognizing state programs that exceed expectations, or identifying and sharing effective approaches to program implementation.
We also are encouraged by NOAA's intention to review and possibly revise the CZMP performance measurement system once its new coastal office is in place, but the agency has yet to document the approach it plans to take, including the scope and criteria it will use for this effort. In the absence of a documented approach indicating how it will review its performance measurement system, NOAA cannot ensure that its upcoming effort will take into consideration key attributes of successful performance measures, including balance and limited overlap, or result in a system that provides meaningful information that NOAA can use to determine how effectively the CZMP is performing relative to its goals. We are further encouraged by NOAA's commitment to evaluate the effectiveness and efficiency of its revised state program evaluation process and to modify it, as needed, as it moves forward with its implementation. In the interim, however, NOAA's method for selecting stakeholders to survey during state program evaluations—which relies on a single criterion and on state program officials who have a vested interest in the program—may result in the collection of incomplete or biased information, because it does not ensure that perspectives are gathered from stakeholders representing a variety of program goals or that they are collected in an objective manner, potentially undermining the sufficiency and credibility of the data the process produces. In the absence of additional criteria for selecting stakeholders to survey, NOAA may select a sample of stakeholders whose work with a state program does not span the act's six focus areas or who present less-than-objective assessments of a state program.

To ensure that NOAA collects and uses meaningful performance information to help manage the CZMP, including continuing to improve its CZMP performance measurement system and its state program evaluations, we are recommending that the Secretary of Commerce direct the Administrator of NOAA to take the following three actions:

Develop a documented strategy to use the range of performance information the agency collects, as appropriate, to aid its management of the CZMP, such as to identify potential problems or weaknesses in the CZMP, set program priorities or strategies, or recognize and reward high-performing state programs.

As part of its intended review of the CZMP performance measurement system, and in consideration of how it intends to use the performance information, document the approach it plans to take to analyze and revise, as appropriate, the performance measures, and in so doing ensure the analysis considers key attributes of successful performance measures, such as balance and limited overlap.

Revise the sampling methodology for selecting stakeholders to survey—included as part of its state program evaluation process—to ensure perspectives are gathered from stakeholders representing a variety of program goals and are collected in an objective manner.

We provided a draft of this report to the Department of Commerce for review and comment. In written comments provided by NOAA through Commerce (reproduced in appendix IV), NOAA generally agreed with our findings and concurred with our recommendations. NOAA also provided technical comments that we incorporated, as appropriate.
In its comment letter, NOAA stated that while it found GAO's evaluation of the CZMP performance measurement system accurate, the agency did not agree with GAO's assessment that eliminating a stand-alone category for coastal water quality could negatively affect the system's ability to reflect the goals of the CZMA in a balanced way. NOAA stated that removal of the coastal water quality focus area did not impair its ability to track progress in meeting the water quality goal of the CZMA, explaining that it retained one measure composed of two data elements related to coastal water quality, but housed under a different focus area. We agree that the two-part measure NOAA maintained related to coastal water quality may provide important information on performance in this area. However, we continue to believe that the information NOAA is collecting related to coastal water quality may not be balanced in comparison to the information it is collecting for the other five focus areas, which could in turn result in inconsistent performance information when looking across the six focus areas of the program.

NOAA concurred with the three recommendations in the report and described actions it plans to take to address them. With regard to the first recommendation, NOAA stated that it plans to develop a strategy for using the performance information it collects, including information from its performance measurement system, evaluations of state programs, performance reports, and other sources, and noted that it will build upon existing efforts to share lessons learned regarding successful approaches or shared challenges across the national program. In addressing our second recommendation, on documenting its approach for analyzing and revising, as appropriate, the performance measures, NOAA stated that it plans to conduct a review of CZMP performance measures in fiscal year 2015 as part of its ongoing analysis of performance measures for programs under its new coastal office. In response to our third recommendation, NOAA stated that it will revise its sampling methodology to ensure stakeholders representing a variety of program goals are selected.

We are sending copies of this report to the Secretary of Commerce, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Focusing on National Coastal Zone Management Program (CZMP) activities since our 2008 report, our objectives were to examine (1) how participating states allocated CZMP funds awarded in fiscal years 2008 through 2013 and (2) how the National Oceanic and Atmospheric Administration's (NOAA) primary performance assessment tools have changed and the extent to which NOAA uses performance information to help manage the CZMP. To examine how participating states allocated CZMP funds awarded in fiscal years 2008 through 2013, we reviewed the Coastal Zone Management Act and related regulations and guidance, including NOAA funding guidance and allocation memos.
We analyzed NOAA data on federal funds awarded by state and by funding type from fiscal years 2008 to 2013, and we compared these data against annual NOAA funding guidance and allocation memorandums to states. Based on our analysis and interviews with NOAA officials, we found the data to be sufficiently reliable. We reviewed NOAA's analysis of states' allocations of CZMP funding for fiscal years 2008 through 2013, which was based on NOAA's review of its cooperative agreements for federal funding with states. NOAA's analysis involved the categorization of states' funding allocations for projects into six focus areas based on the goals of the act and an additional state program management category as defined by NOAA to cover administrative costs, such as general program operations, supplies, and rent. NOAA officials noted that total funding allocation amounts are approximate and that many CZMP-funded activities could address more than one focus area. For example, Maine state program officials told us their activities to conserve and enhance properties that provide commercial fishing access address both the coastal community development and public access focus areas. To address this challenge, NOAA developed written guidance for the NOAA specialists who conduct the analysis that specifies the types of activities to include in each focus area and the state program management category, as well as direction on how to categorize funds in cases where a project or activity may fall into more than one category. For instance, NOAA defined funds in the government coordination focus area to include, among others, activities that involved coordination with other government agencies and stakeholders, technical assistance to local governments, or public outreach and education activities only if they did not correspond to other focus areas.

To determine the reliability of NOAA's analysis, we interviewed knowledgeable NOAA officials and reviewed NOAA's process for categorizing proposed activities and projects, including its written guidance on categorizing CZMP-funded activities and its steps to compare funding amounts to ensure that double-counting of funds did not take place. We did not independently verify the results of NOAA's analysis, but we verified major categories used in NOAA's analysis for consistency across years, checked the total allocated funds in NOAA's analysis against total federal funding award data, and reviewed NOAA's categorization of a small sample of projects. We concluded that the data were sufficiently reliable for our purposes of reporting states' allocated uses of CZMP funds.

We also reviewed data from NOAA's CZMP performance measurement system from 2008 through 2013 (the most recent years for which data were available) to further illustrate how CZMP funds were used. To assess the reliability of NOAA's CZMP performance measurement system data, we interviewed NOAA officials about the reliability of the data and reviewed corresponding documentation, including performance measures guidance to states and internal guidance to NOAA specialists about their required reviews of submitted data. We did not independently verify performance measure data submitted by state programs, but based on our review of the steps taken by NOAA to review state-submitted data, we found the data sufficiently reliable for the purposes of our report.
To examine how NOAA's primary performance assessment tools have changed since 2008, and the extent to which NOAA uses performance information to help manage the CZMP, we analyzed applicable laws and guidance, including the act and NOAA's guidance on its CZMP performance measurement system and state program evaluations. We reviewed documentation on changes NOAA has made to these two performance tools, including steps taken to address our 2008 report recommendations, and we interviewed NOAA officials about the changes they made and their use of performance information. We reviewed GAO's work on performance measurement to identify key attributes associated with successful performance measures and assessed NOAA's CZMP performance measurement system against these attributes by reviewing the agency's performance measures and guidance on the system and interviewing NOAA and state program officials. We also analyzed NOAA's CZMP performance measurement system data from 2011, 2012, and 2013. We reviewed our and others' work on program evaluations to identify standards for strong evaluation design and assessed NOAA's process for evaluating state coastal programs against these standards by examining NOAA's evaluation guidance and interviewing NOAA officials. We examined information NOAA maintains on CZMP performance, including fact sheets, states' cooperative agreements, semiannual progress reports, performance measurement system data submitted by states, and state program evaluation reports.

In conducting our work on both objectives, we interviewed representatives of the Coastal States Organization, a nonprofit organization that represents coastal states on legislative and policy issues, as well as state program officials from the seven states that received the most fiscal year 2012 CZMP funding in each of NOAA's seven regions (California, Florida, Hawaii, Maine, Michigan, Texas, and Virginia), about how states used CZMP funds and for their perspectives on NOAA's management and assessment of the overall national program. We also reviewed the seven states' cooperative agreements and semiannual progress reports for fiscal years 2011 and 2012 (the most recent years for which reports were available) to learn about projects undertaken by these seven states. We selected one CZMP-funded project in each of the seven states to further determine and illustrate how states used funds on a project-level basis and to learn how the results of a selected project are captured by NOAA's performance assessment tools. In selecting projects to review, we considered the amount of CZMP funds allocated to specific projects, funding type, project type (e.g., projects that provide financial and technical assistance to local governments, planning projects, construction-related projects, permitting activities), and focus area (e.g., coastal habitat, government coordination). Our review of the states' information cannot be generalized across all states or projects. We also interviewed coastal program officials from American Samoa and the Northern Mariana Islands to obtain territories' perspectives on NOAA's performance assessment tools and territories' use of this performance information. We conducted two site visits to observe and learn more about CZMP projects—one to a coastal habitat restoration project in Texas and one to an ocean planning project in Virginia. We selected these projects for site visits considering project type, focus area addressed, and geographic location.
During our site visits, we met with state program officials and also interviewed stakeholders involved in the selected projects, as well as stakeholders involved in other CZMP-funded projects. In Texas, we met with the nonprofit organization managing the coastal habitat restoration project and toured the restoration site; in Virginia, we visited a public access enhancement project that received CZMP funding.

We conducted this performance audit from June 2013 to July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The National Oceanic and Atmospheric Administration's (NOAA) CZMP performance measurement system is organized by broad focus areas that are related to five of the six primary focus areas based on the goals of the CZMP as outlined in the Coastal Zone Management Act. The system consists of 17 performance measures—15 of the 17 measures are organized under the five broad focus areas (NOAA removed the sixth focus area, coastal water quality, from its performance measurement system in 2011 in response to a performance measurement system workgroup's recommendation to streamline the system), and the remaining 2 measures are to track state financial expenditures. Each of the 17 measures is composed of several individual data elements. For example, the performance measure on federal consistency is composed of two data elements that track the number of projects reviewed and the number of projects modified under states' federal consistency review processes. In addition, some data elements are further broken down into specific categories, such as types of federal consistency projects modified. See table 5 for a list of the performance measures and supporting data elements and categories, as reported by participating state programs for 2011 through 2013.

Key attributes of successful performance measures, with each attribute's definition and the potentially adverse consequences of not meeting it:

Linkage | Measure is aligned with division and agency-wide goals and mission and clearly communicated throughout the organization. | Behaviors and incentives created by measures may not support achieving division or agency-wide goals or mission.
Clarity | Measure is clearly stated, and the name and definition are consistent with the methodology used to calculate it. | Data may confuse or mislead users.
Measurable target | Measure has a numerical target. | Managers may not be able to determine whether performance is meeting expectations.
Objectivity | Measure is reasonably free from significant bias or manipulation. | Performance assessments may be systematically over- or understated.
Reliability | Measure produces the same result under similar conditions. | Reported performance data may be inconsistent and add uncertainty.
Core program activities | Measures cover the activities that an entity is expected to perform to support the intent of the program. | Information available to managers and stakeholders in core program areas may be insufficient.
Limited overlap | Measure provides new information beyond that provided by other data sources. | Managers may have to sort through redundant, costly information that does not add value.
Balance | Taken together, measures ensure that an organization's various priorities are covered. | Measures may overemphasize some goals and skew incentives.
Governmentwide priorities | Each measure should cover a priority such as quality, timeliness, and cost of service. | A program's overall success is at risk if all priorities are not addressed.

In addition to the individual named above, Alyssa M. Hundrup (Assistant Director), Elizabeth Beardsley, Mark A. Braza, Elizabeth Curda, John Delicath, Tom James, Katherine Killebrew, Patricia Moye, Dan Royer, Kiki Theodoropoulos, and Swati Sheladia Thomas made key contributions to this report. | The U.S. coast is home to more than half the U.S. population and is integral to the nation's economy. Under the Coastal Zone Management Act, NOAA administers the CZMP, a federal-state partnership that encourages states to balance development with protection of coastal zones in exchange for federal financial assistance and other incentives. In 2008, GAO reviewed the CZMP and recommended improvements for CZMP performance assessment tools. A fiscal year 2013 appropriations committee report mandated GAO to review NOAA's implementation of the act. This report examines (1) how states allocated CZMP funds awarded in fiscal years 2008 through 2013 and (2) how NOAA's primary performance assessment tools have changed since GAO's 2008 report and the extent to which NOAA uses performance information in managing the CZMP. GAO reviewed laws, guidance, and performance-related reports; analyzed CZMP funding data for fiscal years 2008-2013; and interviewed NOAA officials and a nongeneralizable sample of officials from seven states selected for receiving the most fiscal year 2012 funding in each of NOAA's regions. During fiscal years 2008 through 2013, the 34 states participating in the National Oceanic and Atmospheric Administration's (NOAA) National Coastal Zone Management Program (CZMP) allocated nearly $400 million in CZMP funds for a variety of activities. States allocated this funding for activities spanning six broad focus areas based on goals outlined in the Coastal Zone Management Act. For example, states allocated about a quarter of their CZMP funding to the coastal habitat focus area, according to NOAA's analysis. Coastal habitat activities encompassed a variety of actions to protect, restore, or enhance coastal habitat areas, such as habitat mapping or restoration planning efforts of marsh habitats for fish and wildlife and enhanced recreational opportunities. NOAA's two primary performance assessment tools—its CZMP performance measurement system and state program evaluations—have limitations, even with changes NOAA made since 2008, and NOAA makes limited use of the performance information it collects. Regarding the performance measurement system, NOAA has made changes such as taking steps intended to improve the reliability of data it collects. However, its current measurement system does not align with some key attributes of successful performance measures, including the following: Balance: a balanced set of measures ensures that a program's various goals are covered. NOAA removed the coastal water quality focus area, one of six focus areas based on goals in the act, to streamline the performance measurement system. As a result, the system may not provide a complete picture of states' overall performance across all focus areas based on goals in the act. Limited overlap: measures should produce new information beyond what is provided by other data sources. NOAA's system includes measures that overlap with financial data provided in cooperative agreements.
By requiring states to submit financial data available through other sources, NOAA may be unnecessarily burdening states with data collection requirements. NOAA plans to review and potentially revise its measurement system, but it has not documented the approach it plans to take, including how the measures will align with key attributes of successful performance measures. Regarding state program evaluations, in 2013, NOAA revised its process to conduct evaluations more efficiently, at a reduced cost. However, GAO identified a limitation in NOAA's method for sampling stakeholders to survey under its revised process that may result in the selection of stakeholders that do not span all six focus areas based on goals of the act. Finally, NOAA makes limited use of the performance information it collects from these tools. For example, since it began collecting performance measurement data in 2008, NOAA used the data once to report on accomplishments. NOAA recognizes the importance of using performance information to improve program implementation, but it has not documented a strategy for how it will use its performance information to manage the program. As a result, NOAA may not be realizing the full benefit of collecting performance information. GAO recommends that NOAA document an approach to analyze and revise, as appropriate, its performance measures against key attributes, revise its process for selecting stakeholders to survey in its state program evaluations, and document a strategy for using the performance information it collects. NOAA concurred with the recommendations. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
DOD oversees a worldwide school system to meet the educational needs of military dependents and others, such as the children of DOD's civilian employees overseas. The Department of Defense Education Activity (DODEA) administers schools both within the United States and overseas. In school year 2006-07, DODEA had schools within 7 states, Puerto Rico, Guam, and 13 foreign countries. DOD has organized its 208 schools into three areas: the Americas (65), Europe (98), and the Pacific (45). Almost all of the domestic schools are located in the southern United States. The overseas schools are mostly concentrated in Germany and Japan, where the U.S. built military bases after World War II. Given the transient nature of military assignments, these schools must adapt to a high rate of students transferring into and out of their schools. According to DOD, about 30 percent of its students move from one school to another each year. These students may transfer between DOD schools or between a DOD school and a U.S. public school.

Although DOD is not subject to the No Child Left Behind Act of 2001 (NCLBA), it has its own assessment and accountability framework. Unlike public schools, DOD schools receive funding primarily from DOD appropriations rather than through state and local governments or Department of Education grants. U.S. public schools that receive grants through the NCLBA must comply with testing and reporting requirements designed to hold schools accountable for educating their students and making adequate yearly progress. DOD has adopted its own accountability framework that includes a 5-year strategic plan, an annual report that measures the overall school system's progress, and data requirements for school improvement plans. The strategic plan sets the strategic direction for the school system and outlines goals and performance measures to determine progress. In annual reports, DOD provides a broad overview of its students' overall progress, including the results of standardized tests. On its Web site, DOD publishes more detailed test score results for each school at each grade level. DOD also requires each school to develop its own improvement plan that identifies specific goals and methods to measure progress. School officials have the flexibility to decide what goals to pursue but must identify separate sources of data to measure their progress in order to provide a more complete assessment. For example, if a school chooses to focus on improving its reading scores, it must identify separate assessment tests or other ways of measuring the progress of its students.

DOD is subject to many of the major provisions of the Individuals with Disabilities Education Improvement Act of 2004 (IDEIA) and must include students with disabilities in its standardized testing. However, unlike states and districts subject to NCLBA, DOD is not required to report publicly on the academic achievement of these students. States and public school districts that receive funding through IDEIA must comply with various substantive, procedural, and reporting requirements for students with disabilities. For example, they must have a program in place for evaluating and identifying children with disabilities, developing an individualized education program (IEP) for such students, and periodically monitoring each student's academic progress under his or her IEP.
Under IDEIA, children with disabilities must be taught, to the extent possible, with non-disabled students in the least restrictive environment, such as the general education classroom, and must be included in standardized testing unless appropriate accommodations or alternate assessments are required by their IEPs. Although DOD schools do not receive funding through IDEIA, they generally are subject to the same requirements concerning the education of children with disabilities. However, unlike states and districts that are subject to NCLBA, DOD schools are not required to report publicly on the performance of children with disabilities on regular and alternate assessments.

Definitions of dyslexia vary from broad definitions that encompass almost all struggling readers to narrow definitions that apply only to severe cases of reading difficulty. However, DOD and others have adopted a definition developed by dyslexia researchers and accepted by the International Dyslexia Association, a non-profit organization dedicated to helping individuals with dyslexia. This definition describes dyslexics as typically having a deficit in the phonological component of language, the individual speech sounds that make up words, which causes difficulty with accurate or fluent word recognition, poor spelling ability, and problems in reading comprehension that can impede growth of vocabulary. Recent research has identified a gene that may be associated with dyslexia and has found that dyslexia often coincides with behavior disorders or speech and language disabilities and can range from mild to severe. Nevertheless, the percentage of people who have dyslexia is unknown, with estimates varying from 3 to 20 percent depending on the definition and identification method used. Research promotes early identification and instruction for dyslexics to help mitigate lifelong impacts.

DOD offers professional development to all staff to help them support students who struggle to read, including those who may have dyslexia, and has used designated funds to supplement existing training efforts across its schools. This professional development prepares teachers to assess student literacy skills and provides strategies to help instruct struggling readers. DOD used funds designated to support students with dyslexia for the development of two new online training courses containing modules on dyslexia, for additional seats in existing online courses, and for additional literacy assessment tools.

DOD offers professional development to all staff who teach struggling readers, including students who may have dyslexia, primarily through online courses. The department offers online training courses through a professional development series known as Scholastic RED. These courses are DOD's primary professional development on literacy for general education teachers. According to DOD, the department began offering the courses during the 2003-04 school year. DOD officials told us that since that time about half of the nearly 8,700 teachers in DOD schools have taken at least one Scholastic RED online course. Of the school principals who responded to our survey, almost all indicated that some of their staff members, including administrators and general and special education teachers, had participated in Scholastic RED training. Beyond Scholastic RED courses, DOD officials we interviewed told us that general education teachers also receive literacy development through instructional training in subject areas other than reading.
For example, professional development on teaching at the middle school level may include guidance on how to enhance students' reading skills through the study of a particular science. Most professional development for staff working with struggling readers focuses on the assessment of student literacy skills and presents strategies for instructing students who struggle to read, some of whom may have dyslexia. Scholastic RED online courses train teachers in five basic elements of reading instruction: phonemic awareness, comprehension, phonics, fluency, and vocabulary. Research suggests that both phonics and phonemic awareness pose significant challenges to people who have dyslexia. According to course implementation materials, the training is designed to move beyond online course content and allow participants the opportunity to apply new skills in site-based study groups as well as in the classroom. Some principals and teachers indicated their schools follow this model, with groups of teachers meeting to discuss best practices for applying Scholastic RED knowledge and resources in their classrooms. DOD districts and schools sometimes offer their own literacy training through a localized effort or initiative. Professional development unique to a DOD district or school may be offered by a district's special education coordinator. For example, the special education coordinator in a domestic district told us she offers literacy training to all staff, explaining that she tries to create a broader base of professionals who can more accurately identify and instruct students who are struggling readers. Regarding overseas schools, administrators in Korea told us they offer in-service workshops to help teachers improve student literacy, reading comprehension, and writing. DOD designed and provided additional training on literacy instruction for most special education teachers and other specialists under a special education initiative. The training provided these staff members with courses on how students develop literacy skills and how to teach reading across all grade levels. According to a 2004-05 DOD survey on the initiative, over half of special educators and other specialists said they had completed this training. Since the 2003-04 school year, special education teachers and other specialists have received training on topics such as evaluating young children's literacy skills and adjusting instruction based on student performance. The department also provided speech and language pathologists specialized training to help them assist struggling readers, including guidance on basic elements of literacy instruction and development, such as phonological awareness and vocabulary development. DOD offers another literacy professional development program for special education teachers and other specialists known as Language Essentials for Teachers of Reading and Spelling (LETRS). According to the department, LETRS is designed to give teachers a better understanding of how students learn to read and write, showing instructors how to use such knowledge to improve targeted instruction for every type of reader. According to our survey results, about 10 percent of schools had staff who had taken this course. The LETRS course is based on the concept that once teachers understand the manner in which students approach reading and spelling tasks, they can make more informed decisions on instructional approaches for all readers.
Much like the other literacy training DOD offers, LETRS modules contain reading instruction approaches for areas that may present challenges for those who have dyslexia: phonemic awareness, vocabulary, and reading comprehension. Overall, DOD staff told us the literacy training the department offered was useful for them, with some indicating they wanted additional training. In responding to our survey, more than 80 percent of the principals who said their staff used Scholastic RED courses rated them as very useful for specialized instruction. Principals we interviewed told us their teachers characterize Scholastic RED concepts as practical and easy to apply in the classroom. While teachers we interviewed told us Scholastic RED training is helpful, some special education teachers indicated the course material is basic and better suited to meet the developmental needs of general education teachers than special education teachers. For example, one special education teacher we spoke to said Scholastic RED courses do little to enhance the professional skills of special education teachers because many of these teachers have already received advanced training on reading interventions. Special education teachers did indicate, however, that training offered through the department's special education initiative has provided them with identification strategies and intervention tools to support struggling readers. Regarding the impact of the initiative's training, a DOD survey of special education teachers and other specialists found that over half of respondents said they had seen evidence of professional development designed to maximize the quality of special education services, and most had completed some professional development. The department did report, however, that respondents working with elementary school students frequently requested more training in areas such as phonemic awareness, while respondents working with high school students requested more professional development in a specific supplemental reading program used at DOD schools: READ 180. Moreover, teachers we interviewed in both foreign and domestic locations said they would like additional training on identifying and teaching students with specific types of reading challenges, including dyslexia. For example, one special education teacher we interviewed told us this specific training could help general education teachers to better understand the types of literacy challenges struggling readers face, which in turn could help teachers better understand why students experience difficulties with other aspects of coursework. DOD reported it had fully obligated the $3.2 million designated for professional development on dyslexia, with about $2.9 million for online courses and literacy assessment tools. Between fiscal years 2004 and 2006, the conference committee on defense appropriations designated a total of $3.2 million within the operation and maintenance appropriation for professional development on dyslexia. As of September 2007, DOD reported it had obligated these funds for professional development in literacy, including online training courses containing components on dyslexia. Reported obligations also included tools to help teachers identify and support students who struggle to read, some of whom may have dyslexia. DOD obligated the remaining designated funds for general operations and maintenance purposes. All related obligations, as reported by the department, are outlined in table 1.
The online training included two newly developed courses that may be too new to evaluate and the purchase of extra seats in existing Scholastic RED training courses. The first of the new training courses to be fully developed was Fundamentals of Reading K-2. According to DOD, this course was designed to present teachers with strategies for instructing struggling readers in the early K-2 grade levels and contains six modules on the components of reading, including a specific module on dyslexia. The K-2 course was first made available in January 2006 to teachers who participated in a pilot project. DOD then opened the course to all teachers in February 2007. According to our survey results, 29 percent of the schools serving grades K-2 had used the course by the end of the school year. Nearly half of those school principals who indicated their staff used the course, however, did not indicate the extent to which it had been helpful in supporting struggling readers. It is possible the course is still too new for DOD schools to evaluate, as some principals indicated on our survey that they had not heard of the course or were not aware it was available to their staff. The second of the new online training courses, Fundamentals of Reading Grades 3-5, is not fully developed for use at this time. According to DOD officials, the course will be available to all staff in the 2007-08 school year and will also contain six modules on the components of reading, including a module on dyslexia. Additionally, DOD reported purchasing another 1,100 seats in selected Scholastic RED online training courses. The department also added a page entitled "Help Your Students with Dyslexia" to its main online resource site that is available to all teachers. DOD also reported using designated funds to purchase electronic literacy assessment tools and other instruments that were widely used in DOD schools, one of which received mixed reviews on its usefulness. DOD reported obligating about one-third of the designated funds for the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) assessment tool. The DIBELS assessment allows a teacher to evaluate a student's literacy skills in a one-on-one setting through a series of one-minute exercises that can be administered via pen and paper or through the use of a hand-held electronic device. By using the exercises, teachers can measure and monitor students' skill levels in concepts such as phoneme segmentation fluency, a reading component that often gives dyslexics significant difficulty. DIBELS was used to help identify struggling readers in at least half of the schools serving grades K-2, according to our survey results, and DOD plans to begin use of the assessment in additional locations during the 2007-08 school year. However, school officials and teachers had mixed reactions regarding the ease and effectiveness of using DIBELS to help identify struggling readers. In responding to our survey, about 40 percent of principals whose schools used DIBELS to help identify struggling readers indicated it was very or extremely useful, about 30 percent indicated it was moderately useful, and about 20 percent indicated it was either slightly or not at all useful. Several principals we surveyed indicated that they liked the instant results provided by the DIBELS assessment. For example, one principal called the assessment a quick and easy way to assess reading skills, saying it provides teachers with immediate feedback to help inform decisions about instruction.
Others indicated the assessment is time-consuming for teachers. One kindergarten teacher we interviewed said that it is challenging to find the time to administer the test because it must be given one-on-one. Another principal expressed concern about the difficulty of using the electronic hand-held devices, saying the technology poses the greatest challenge to teachers in using the DIBELS assessment. According to DOD officials, the agency is currently evaluating its use of DIBELS, searching for other assessment tools, and will use the results to determine whether to continue using DIBELS or replace it with another tool. DOD purchased four other instruments to aid teachers in the evaluation of literacy skills; however, the tools are targeted to specific reading problems. According to DOD officials, they selected these tools because they measure specific skills associated with dyslexia. Table 2 shows reported use of each literacy assessment tool across DOD schools. DOD schools identify students who have difficulty reading and provide them with supplemental reading services. DOD uses standardized tests to determine which students are struggling readers, although these tests do not screen specifically for dyslexia. DOD then provides these students with a standard supplemental reading program. For those children with disabilities who meet eligibility requirements, DOD provides a special education program in accordance with the requirements of IDEIA and department guidance. Schools primarily determine students' reading ability and identify those who struggle through the use of standardized assessments. DOD uses several standardized assessments, including the TerraNova Achievement Test, and identifies those students who score below a certain threshold as having the most difficulty with reading and in need of additional reading instruction. DOD requires that schools administer these reading assessments starting in the third grade. However, some schools administer certain assessments as early as kindergarten. For example, some schools used DIBELS to identify struggling readers in grades K-2. In an effort to systematically assess students in kindergarten through second grade, DOD plans to identify assessment tools designed for these grades during school year 2007-08 and require their use throughout the school system. In addition to assessments, schools also use parent referrals and teacher observations to identify struggling readers. Several school officials with whom we spoke said that parent feedback about their children to school personnel and observations of students by teachers are both helpful in identifying students who need additional reading support. Like officials in many U.S. public school systems, DOD school officials generally do not use the term "dyslexia." However, DOD officials told us they provided an optional dyslexia checklist to classroom teachers to help determine whether students may need supplementary reading instruction and whether they should be referred for more intensive diagnostic screening. According to our survey results, 17 percent of schools used the checklist in school year 2006-07. DOD schools provide struggling readers, some of whom may have dyslexia, with a supplemental reading program that has some support from researchers and has received positive reviews from school officials, teachers, and parents we interviewed. The program, called READ 180, is a multimedia program for grades 3 through 12.
It is designed for 90-minute sessions during which students rotate among three activities: whole-group direct instruction, small-group reading comprehension, and individualized computer-based instruction. The program is designed to build reading skills such as phonemic awareness, phonics, vocabulary, fluency, and comprehension. In responding to our survey, over 80 percent of the school principals indicated the program was very helpful in teaching struggling readers. Several school administrators stated that it is effective with students due to the nonthreatening environment created by its multimodal instructional approach. Several teachers said the program also helped them to monitor student performance. Several parents told us that the program increased their children's enthusiasm for reading, improved their reading skills, and boosted their confidence in reading and overall self-esteem. Some parents stated that their children's grades in general curriculum courses improved as well, since the children were not having difficulty with course content but rather with reading. At the secondary level, however, school officials stated that some parents chose not to enroll their child in READ 180 because of the stigma they associate with what they view as a remedial program. According to the Florida Center for Reading Research, existing research supports the use of READ 180 as an intervention to teach 6th, 7th, and 8th grade students comprehension skills; however, the center recommends additional studies to assess the program's effectiveness. Certain districts and schools have implemented additional strategies for instructing struggling readers, such as using literacy experts, offering early intervention reading programs, and prioritizing reading in annual improvement plans. In the Pacific region and the Bavaria district, literacy experts work in collaboration with classroom teachers and reading specialists to design appropriate individualized instruction for struggling readers and monitor student performance. All of the elementary schools in the Pacific region offer reading support to struggling readers. Some schools offer early reading support in grades K-2. Certain districts offer early intervention to first and second graders in small groups of five and eight students, respectively. Some schools in Europe provide intensive instruction to students in first grade through Reading Recovery, a program in which struggling readers receive 30-minute tutoring sessions from specially trained teachers for 12 to 20 weeks. According to the Department of Education's What Works Clearinghouse, Reading Recovery may have positive effects in teaching students how to read. Several superintendents and principals we interviewed said that improving reading scores was one of the goals in their annual school improvement plans, in line with DOD's strategic plan milestone of having all students in grades three, six, and nine read at their grade level or higher by July 2011. For example, to improve reading scores, officials in the Heidelberg District developed a literacy program requiring each school to identify all third grade students who read below grade level and develop an action plan to improve their reading abilities. Those students whose performance does not improve through their enrollment in supplemental reading programs or who have profound reading difficulties may be eligible to receive special education services.
DOD provides this special education program in accordance with the requirements of department guidance and the IDEIA, although DOD is not subject to the reporting and funding provisions of the act. According to our survey results, almost all schools provided special education services in the 2006-07 school year. The level of special education services available to students with disabilities varies between districts and schools and may affect where some service members and families can be assigned and still receive services. DOD established the Exceptional Family Member Program to screen and identify family members who have special health or educational needs. It is designed to assist the military personnel system in assigning military service members and civilian personnel to duty stations that provide the types of health and education services necessary to meet their family members' needs. In general, parents with whom we spoke said that they were pleased with the services their children received in DOD schools at the duty locations where they were assigned. DOD conducts a comprehensive multidisciplinary assessment to evaluate whether a student is eligible to receive special education services under any of DOD's disability categories, and most parents we interviewed were complimentary of the program. A student who is identified as having a disability receives specific instruction designed to meet the student's academic needs. A team composed of school personnel and the student's parents meets annually to assess the student's progress. While the majority of parents we interviewed were complimentary of DOD's special education program, a few expressed concern that their children were not evaluated for special education eligibility early enough, despite repeated requests to school personnel to evaluate their children for a suspected disability. According to DOD officials, department guidance requires school officials to look into parent requests, but officials do not have to evaluate the child unless they suspect the child has a disability. However, they must provide parents with written or oral feedback specifying why they did not pursue the matter. Students with dyslexia may qualify for special education services under the specific learning disability category, but students must meet specific criteria. To qualify as having a specific learning disability, students must have an information-processing deficit that negatively affects their educational performance, resulting in an academic achievement test score at or near the 10th percentile (or the 35th percentile for students of above-average intellectual functioning). There must also be evidence through diagnostic testing to rule out the possibility that the student has an intellectual deficit. DOD schools provide children with disabilities instruction through two additional programs that have some research support. Fifteen percent of our survey respondents were principals of schools that used the Lindamood Phoneme Sequencing Program (LiPS), a program that helps students in grades prekindergarten through 12 with the oral motor characteristics of individual speech sounds. According to the What Works Clearinghouse, one research study it reviewed in 2007 suggested the LiPS program may have positive effects on reading ability.
Our survey results indicated that 37 percent of schools serving grades 7 through 12 used a program called Reading Excellence: Word Attack and Rate Development Strategies, which targets students who have mastered basic reading skills but who are not accurate or fluent readers of grade-level materials. According to a Florida Center for Reading Research report, there is research support for the program, but additional research is needed to assess its effectiveness. DOD assesses the academic achievement of all students using standardized tests. The department administers the TerraNova Achievement Test to students in grades 3 through 11. Test scores represent a comparison between the test taker and a norm group designed to represent a national sample of students. For example, if a student scored at the 68th percentile in reading, that student scored higher than 68 percent of the students in the norm group; the national average is the 50th percentile. DOD uses these scores to compare the academic achievement of its students to the national average. In addition, DOD schools participate in the National Assessment of Educational Progress (NAEP), known as "the nation's report card," which provides a national picture of student academic achievement and a measure of student achievement among school systems. According to an agency official, DOD administers NAEP to all of its fourth and eighth grade students every other year. The NAEP measures how well DOD students perform as a whole relative to specific academic standards. Overall, DOD students perform well in reading compared to the national average and to students in state public school systems, as measured by their performance on standardized tests. The latest available test results showed that DOD students scored above average and in some cases placed DOD in the top tier of all school systems tested. According to the 2007 TerraNova test results, DOD students scored on average between the 60th and 75th percentile at all grade levels tested. The 2007 NAEP reading test results ranked the DOD school system among the top for all school systems. Specifically, DOD tied with two states for first place among all states and jurisdictions on the eighth grade test and tied with one state for third place on the fourth grade test. All students, including those with disabilities, participate in DOD's systemwide assessments using either the standard DOD assessment or alternate assessments. In some cases, students who require accommodations to complete the standard assessment may need to take the test in a small group setting, receive extended time for taking the test, or have directions read aloud to them. Some students with severe disabilities may take an alternate assessment if required by the student's individualized education program. An alternate assessment determines academic achievement by compiling and assessing certain documentation, such as a student's work products, interviews, photographs, and videos. According to an official from DODEA's Office of System Accountability and Research, DOD provides an alternate assessment to fewer than 200 of its roughly 90,000 students each year. For use within the department and in some districts and schools, DOD disaggregates TerraNova test scores for students with disabilities. DOD officials reported that they disaggregate scores for the entire school system, each area, and each district in order to gauge the academic performance of students with disabilities.
DOD's policy states that DOD shall internally report on the performance of children with disabilities participating in its systemwide assessments. According to DOD officials, they use the data to determine progress toward goals and to guide program and subject area planning. According to our survey results, over 90 percent of DOD schools disaggregate their test scores by gender and race, and about 85 percent disaggregate for students with disabilities for internal purposes. Some school officials told us they use test data to track students' progress, assess the effectiveness of services they offer students, identify areas of improvement, and assess school performance. For example, one superintendent who shared her disaggregated data with us showed how third-grade students with disabilities made up over half of those who read below grade level in her district. DOD does not generally report disaggregated test scores for students with disabilities. DOD's annual report provides data at each grade level, and test scores posted on its Web site provide data for each school. DOD also reports some results by race and ethnicity for the NAEP test. However, DOD does not publicly disaggregate its TerraNova test data for students with disabilities or other subgroups. A primary goal of its strategic plan is for all students to meet or exceed challenging academic content standards, and DOD uses standardized test score data to determine progress towards this goal. Disaggregating these data provides a mechanism for determining whether groups of students, such as those with disabilities, are meeting academic proficiency goals. However, unlike U.S. public school systems that are subject to the No Child Left Behind Act, DOD is not required to report test scores of designated student groups. According to DOD officials, they do not report test results for groups of fewer than 20 students with disabilities because doing so may violate their privacy by making it easier to identify individual students. Where there are groups of 20 or more students with disabilities, DOD officials said they do not report the results publicly because doing so might invite comparisons between one school and another when all of them do well compared to U.S. public schools. DOD officials did not comment on any negative implications of such comparisons. On the whole, DOD students perform well in reading compared with public school students in the United States, and in some cases DOD ranks near the top of all school systems, as measured by students' performance on standardized tests. DOD has programs and resources in place to provide supplemental instruction to students who have low scores on standardized tests or who otherwise qualify for special education services, some of whom may have dyslexia. The department generally includes these students when administering national tests. Nevertheless, by not reporting specifically on the achievement of students with disabilities, including those who may have dyslexia, DOD may be overlooking an area that might require attention, thereby reducing its accountability. Without these publicly reported data, parents, policymakers, and others are not able to determine whether students with disabilities as a whole are meeting academic proficiency goals in the same way as all other students in the school system. For example, high performance on the part of most DOD students could mask low performance for students with disabilities.
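The internal reporting rule just described (disaggregate results, but withhold any subgroup with fewer than 20 students) reduces to a few lines of code. Below is a minimal sketch in Python; the column names and data are invented, and only the 20-student privacy threshold comes from the report.

# A minimal sketch of the disaggregate-then-suppress logic described above:
# report a subgroup's average score only when the group has at least 20
# students. Column names and data are invented; only the 20-student
# privacy threshold comes from the report.
import numpy as np
import pandas as pd

MIN_GROUP_SIZE = 20  # threshold DOD officials cited for protecting privacy

rng = np.random.default_rng(0)
students = pd.DataFrame({
    "has_disability": rng.random(500) < 0.1,          # roughly 10 percent
    "reading_percentile": rng.integers(1, 100, 500),  # toy TerraNova-style scores
})

summary = students.groupby("has_disability")["reading_percentile"].agg(["count", "mean"])
# Suppress any cell too small to report without risking identification.
summary.loc[summary["count"] < MIN_GROUP_SIZE, "mean"] = float("nan")
print(summary)

In this sketch, a suppressed cell simply prints as missing; an actual reporting system would also need rules for complementary suppression so that a withheld cell cannot be recovered from published totals.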
To improve DOD's accountability for the academic achievement of its students with disabilities, including certain students who may have dyslexia, we recommend that the Secretary of Defense instruct the Director of the Department of Defense Education Activity to publish separate data on the academic achievement of students with disabilities at the systemwide, area, district, and school levels when there are sufficient numbers of students with disabilities to avoid violating students' privacy. We provided a draft of this report to DOD for review and comment. DOD concurred with our recommendation. DOD's formal comments are reproduced in appendix II. DOD also provided technical comments on the draft report, which we have incorporated when appropriate. We will send copies of this report to the Secretary of Defense, the Director of the Department of Defense Education Activity, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. Please contact me at (202) 512-7215 if you or your staff have any questions about this report. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III. Our objectives were to determine: (1) what professional development DOD provides its staff to support students with dyslexia and how the fiscal year 2004-to-2006 funds designated for this purpose were used, (2) what identification and instructional services DOD provides to students who may have dyslexia, and (3) how DOD assesses the academic achievement of students with disabilities, including dyslexia. To meet these objectives, we interviewed and obtained documentation from DOD and others, conducted a Web-based survey of all 208 DOD school principals, and visited or interviewed by phone officials and parents in six school districts. We conducted our work between January 2007 and October 2007 in accordance with generally accepted government auditing standards. To obtain information on how schools support students with dyslexia, we interviewed officials from the Department of Defense Education Activity (DODEA) and the Department of Education, as well as representatives from the International Dyslexia Association and the National Association of State Directors of Special Education. We obtained several DODEA reports, including a 2007 report to Congress on DODEA's efforts to assist students with dyslexia, a 2006 evaluation of DODEA's English and language arts instruction, and a 2005 survey of DODEA special education personnel. We reviewed relevant federal laws, regulations, and DOD guidance, and also obtained information on DOD's obligation and disbursement of funds designated for professional development on dyslexia. We also reviewed the DODEA Web site for schools' student performance data to determine how DOD assesses the academic achievement of students with disabilities. We also obtained summary reports on the scientific evidence regarding the effectiveness of DODEA's supplemental reading programs from the Department of Education's What Works Clearinghouse and the Florida Center for Reading Research, two organizations that compile and evaluate research on reading.
To gather information concerning dyslexic students in DODEA schools, including how DODEA schools identify dyslexic students and the instructional services provided to such students, we designed a Web-based survey. We administered the survey to all 208 DODEA school principals between May 10, 2007, and July 6, 2007, and received completed surveys from 175 school principals—an 84 percent response rate. In order to obtain data for a high percentage of DOD schools, we followed up with principals through e-mail and telephone to remind them about the survey. We also examined selected characteristics to ensure that the schools responding to our survey broadly represent DODEA's school levels, geographic areas, and special education population. Based on our findings, we believe the survey data are sufficient for providing useful information concerning students with dyslexia. Nonresponse (in the case of our work, those DOD school principals who did not complete the survey) is one type of nonsampling error that could affect data quality. Other types of nonsampling error include variations in how respondents interpret questions, respondents' willingness to offer accurate responses, and data collection and processing errors. We took steps in developing the survey and in collecting, editing, and analyzing the survey data to minimize such nonsampling error. In developing the Web-based survey, we pretested draft versions of the instrument with principals at various American and European elementary, middle, and high schools to check the clarity of the questions and the flow and layout of the survey. On the basis of the pretests, we made slight to moderate revisions to the survey. Using a Web-based survey also helped reduce error in our data collection effort. By allowing school principals to enter their responses directly into an electronic instrument, this method automatically created a record for each principal in a data file and eliminated the errors and costs associated with a manual data entry process. In addition, the program used to analyze the survey data was independently verified to ensure the accuracy of this work. We visited school officials and parents of struggling readers in two of the three areas (the Americas and Europe) overseen by DODEA and contacted schools in the third area (the Pacific) by phone. For each location, we interviewed the district superintendent or assistant superintendent, school principals, teachers, and special education teachers. At each location we also interviewed parents of struggling readers. Each group had between two and seven parents, and in some cases we interviewed a parent individually. To see how DOD schools instruct struggling readers, we observed several reading programs during classroom instruction, including READ 180, Reading Recovery, and Reading Improvement, as well as the use of literacy tools such as the Dynamic Indicators of Basic Early Literacy Skills. We selected 6 of DOD's 12 school districts, 2 from each area, using the following criteria: (1) geographic dispersion, (2) representation of all military service branches, (3) variety of primary and secondary schools, and (4) range in the proportion of students receiving special education services. Harriet Ganson, Assistant Director, and Paul Schearf, Analyst-in-Charge, managed this assignment. Farah Angersola and Amanda Seese made significant contributions throughout the assignment, and Rebecca Wilson assisted in data collection and analysis. Kevin Jackson provided methodological assistance.
Susan Bernstein and Rachael Valliere helped develop the report's message. Sheila McCoy provided legal support. | Many of our nation's military and civilian personnel depend on Department of Defense (DOD) schools to meet their children's educational needs. These schools provide a range of educational services including programs for students with disabilities and those who struggle to read, some of whom may have a condition referred to as dyslexia. To determine how DOD supports students with dyslexia and how it used $3.2 million in funds designated to support them, GAO was asked to examine: (1) what professional development DOD provides its staff to support students with dyslexia and how the fiscal year 2004-to-2006 funds designated for this purpose were used, (2) what identification and instructional services DOD provides to students who may have dyslexia, and (3) how DOD assesses the academic achievement of students with disabilities, including dyslexia. To address these objectives, GAO conducted a survey of all school principals and interviewed agency officials, school personnel, and parents in six school districts. DOD provides a mix of online and classroom training to teachers who work with students who struggle to read, and DOD used 2004-to-2006 funds designated for professional development on dyslexia, in particular, to supplement these efforts. Most of the online and classroom professional development prepares teachers and specialists to assess student literacy and provides them with strategies to teach students who have particular difficulties. For the 2004-to-2006 funding for professional development on dyslexia, DOD supplemented its existing training with online courses that include specific modules on dyslexia and tools to assess students' literacy skills. DOD identifies students who struggle to read--some of whom may have dyslexia--through standardized tests and provides them with supplemental reading instruction. DOD uses standardized tests to screen its students and identify those who need additional reading instruction, but these schools do not generally label them as dyslexic. To teach students they identify as struggling readers, DOD schools primarily employ an intensive multimedia reading program that is highly regarded by the principals, teachers, and parents GAO interviewed. Those students whose performance does not improve through their enrollment in supplemental reading programs or who have profound reading difficulties may be eligible to receive special education services. DOD is subject to many of the requirements of the Individuals with Disabilities Education Improvement Act of 2004 on the education of students with disabilities. Students with dyslexia may qualify for these services, but they must meet program eligibility requirements. DOD uses the same standardized tests it uses for all students to assess the academic achievement of students with disabilities, including those who may have dyslexia, but does not report specifically on the outcomes for students with disabilities. A primary goal of DOD's strategic plan is for all students to meet or exceed challenging academic standards. To measure progress towards this goal, DOD assesses all students' academic achievement and school performance by comparing test scores to a national norm or to a national proficiency level. Overall, students perform well in reading compared to U.S. public school students. DOD disaggregates test scores for students with disabilities but does not report such information publicly.
In contrast, U.S. public school systems under the No Child Left Behind Act of 2001 must report such data. Without this information, it is difficult for parents, policymakers, and others to measure the academic achievement of students with disabilities relative to all other students in the DOD school system. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
As has been reported by many researchers, some Gulf War veterans developed illnesses that could not be diagnosed or defined and for which other causes could not be specifically identified. These illnesses have been attributed to many sources, including a large number of unusual environmental hazards found in the Gulf. The Congress enacted the Persian Gulf War Veterans' Benefits Act (P.L. 103-446, Nov. 2, 1994), which, among other things, allowed VA to pay disability compensation to veterans suffering from undiagnosed illnesses attributed to their service in the Persian Gulf. Compensable conditions include but are not limited to abnormal weight loss, cardiovascular symptoms, fatigue, gastrointestinal symptoms, headaches, joint and muscle pains, menstrual disorders, neurologic symptoms, neuropsychological symptoms, skin disorders, respiratory disorders, and sleep disturbances. Under the procedures that VA established to process undiagnosed illness claims, veterans submit completed claim forms to a VA regional office (VARO). Each VARO is responsible for fully developing the claims. VAROs obtain medical records from the military services; arrange for a VA medical examination; and obtain evidence from other sources, such as private health care providers or knowledgeable lay persons, if the veteran identifies such sources. Once the claim is developed, the claims file is transferred to one of the four area processing offices that VA has designated for processing undiagnosed illness claims. As mentioned earlier, over 700,000 men and women served in the Persian Gulf War. VA reported that as of February 1996, it had processed 7,845 undiagnosed illness claims and had identified an additional 6,655 claims that were being evaluated for undiagnosed illnesses. Of the processed claims, VA had denied compensation for undiagnosed illness to 7,424 veterans—a denial rate of 95 percent. In February 1995, VA issued a regulation (38 C.F.R. 3.317) that specifies the evidence required before compensation can be paid for an undiagnosed illness claim. Under the regulation, veterans must provide objective indications of a chronic disability. Objective indications include both signs—evidence perceptible to the examining physician—and other nonmedical indications that are capable of independent verification. In the final rule, VA explained that nonmedical indicators of a disabling illness include but are not limited to such circumstances or events as (1) time lost from work; (2) evidence that a veteran has sought medical treatment for his or her symptoms; and (3) evidence affirming changes in the veteran's appearance, physical abilities, or mental or emotional attitude. The evidence requirements contained in the regulation are consistent with the Joint Explanatory Statement that accompanied the Veterans' Benefits Improvements Act of 1994. According to the VA regulation, a veteran can only be compensated for disabilities caused by undiagnosed illnesses that (1) manifest themselves during service in the Gulf War or (2) arise within 2 years of a veteran's departure from the Persian Gulf. If the illness arose after the veteran left the Gulf, the veteran must be at least 10-percent disabled to be compensated. In addition, the veteran must demonstrate that the disabling condition is chronic—present for 6 months or longer. In some cases, lay statements can provide critical support for a veteran's undiagnosed illness claim.
As stated in the VA claims processing manual, lay statements may be especially important in cases where an undiagnosed illness is manifest solely by symptoms that the veteran reports and that would, therefore, not be subject to verification through medical examination. Examples of such symptoms include headaches and fatigue. According to VA, lay statements from individuals who establish that they are able from personal experience to make their observations or statements will be considered as evidence if they support the conclusion that a disability exists. While veterans are ultimately responsible for proving their claims, VA is required by statute to assist veterans in developing facts to prove their claims. The U.S. Court of Veterans Appeals has also held in its decisions that VA has a duty to assist veterans with proving their claims and is required to obtain relevant facts from sources identified by claimants. A VA letter dated February 15, 1995, instructed all VA regional offices that "if a veteran alleges that a disability began after military service, request objective evidence (lay or medical) to establish that fact." Many types of evidence can be used to support undiagnosed illness claims. The denied claims that we reviewed contained primarily service medical records and VA medical examinations. About 15 percent of the claims included medical records from private physicians seen by the veterans after leaving military service, and less than 3 percent contained nonmedical evidence related to an undiagnosed illness, such as lay statements and records showing lost time from work. The granted claims that we reviewed also contained primarily service medical records and VA examinations. In these cases, however, veterans were usually able to provide VA with a history, after leaving the Persian Gulf, of treatment for the granted undiagnosed condition. Some granted claims were supported with nonmedical evidence, such as a sworn statement from an individual with knowledge of the veteran's disability. Many of the veterans evaluated for undiagnosed illnesses are also examined for other diagnosable service-connected illnesses and injuries. While VA does not often grant compensation for undiagnosed conditions, these veterans often receive compensation for diagnosable injuries or illnesses. Of the cases that we reviewed where the claimed undiagnosed illness(es) had been denied, about 60 percent of the veterans had been granted compensation for at least one service-connected diagnosable condition, such as hypertension, hearing loss, or knee disorders. About one-half of these veterans were granted a disability payment; the remainder, with minor impairments, are eligible for free care for their conditions through the VA medical system. The lack of evidence to support undiagnosed illness claims may in part be the result of poor VA procedures to elicit such information, as the following examples indicate. In late 1995, VA's central office conducted a review of 203 completed undiagnosed illness claims. VA found that additional specialty examinations should have been ordered in 23 cases (about 11 percent). At the time of our work, VA stated that the required examinations would be scheduled and the veterans' cases would be reconsidered based on the additional evidence. In 5 of the 79 denied cases that we reviewed, VA had not requested records from physicians who had treated the veteran since leaving military service.
For one case, VA officials stated that an attempt was made to obtain the evidence but the doctor failed to respond. In three cases, officials stated that the medical records were not obtained due to error. According to area processing office officials, private medical records were not obtained in the other case because the veteran visited the doctor after the presumptive period. Although VA recognizes the importance of nonmedical objective evidence—for example, work records and lay statements from knowledgeable individuals—in supporting some undiagnosed illness claims, VA's standard compensation claim form does not request such evidence. The form does ask veterans to identify individuals who know about the veteran's medical treatment while in the service; in many cases, however, the claimed undiagnosed illness was not treated in the service. According to VA officials, the form was designed to obtain evidence about typical illnesses and injuries that usually occur while veterans are in the service, as opposed to Persian Gulf illnesses that can become manifest after veterans leave military service. While the VA form does not specifically request nonmedical information, about 15 percent of the veterans did provide VA with the names of individuals who were knowledgeable about their claimed illness. However, VA did not obtain statements from these individuals. Officials at the area processing offices cited several reasons why lay statements were not obtained or used. These reasons include veterans' failure to provide a complete address for the knowledgeable individual and evidence falling outside the presumptive period. In one case, an area processing office official stated that VA should have obtained the statements. While the head of the claims processing unit at one area processing office questioned the value of lay statements and whether VA was responsible for obtaining them, VA central office officials acknowledged that VA was responsible for obtaining lay statements, and a central office official told us that statements would be obtained for the cases that we identified and that the claims would be reconsidered after the statements were obtained. After the Congress passed legislation allowing compensation for undiagnosed illnesses, VA reexamined all completed Gulf War claim files to determine if compensation was warranted. In some of these cases that we reviewed, there was no indication that VA had informed the veteran after the legislation about the specific types of medical and nonmedical evidence that could be submitted to support the claim. According to VA officials, VA had decided to provide this information to the veterans on a case-by-case basis. VA's central office acknowledged that the existing procedures to develop undiagnosed illness claims are not adequate and that area processing offices could do a better job of requesting both medical and nonmedical evidence from veterans in support of undiagnosed illness claims. VA has taken a step to provide better information to veterans regarding evidence to support undiagnosed illness claims. VA has developed a letter that clearly states the types of medical and nonmedical evidence that can be used to support these claims. VA is now sending this letter to all veterans who file undiagnosed illness claims. In the denied cases that we reviewed, even when VA followed all appropriate procedures to develop claims, the veterans did not always provide the necessary evidence that would allow their claims to be granted.
Only 30 percent of the veterans in the denied cases that we reviewed provided evidence that they had sought medical treatment for the claimed undiagnosed condition after leaving the service—some veterans said that they could not afford medical treatment from private providers, while others indicated that they were too busy to see a physician. About 40 percent of the veterans in the denied cases that we reviewed were informed that their denied undiagnosed illness claims would be reconsidered if additional evidence was submitted, and VA thoroughly described the evidence that would be acceptable. However, only 4 percent of these cases included any additional information from the veteran. Twenty-three percent of the veterans in the denied cases that we reviewed did not attend all of the scheduled examinations. As a result, VA was unable to identify and thoroughly evaluate the claimed disabling conditions. VA does not always correctly categorize the reason undiagnosed illness claims were denied. VA requires each of its area processing offices to record the reason that undiagnosed illness claims were denied. Reported results are compiled and presented periodically to the Congress. According to VA, most claims are denied because the claimed disability did not become manifest on active duty in the Persian Gulf or during the 2-year presumptive period. Table 1 shows the latest data submitted by VA. Of the denied claims that we reviewed, most—68 percent—had been categorized by VA as being denied because the claimed illness did not become manifest on active duty or during the presumptive period. However, in most of these cases, VA had explained in its decision letter to the veteran that insufficient evidence was presented to demonstrate that the claimed conditions existed, were chronic, or were disabling to a compensable degree of 10 percent or more. By failing to appropriately categorize denied claims, VA may be creating the impression that many veterans with otherwise compensable disabilities do not receive benefits solely as a result of the presumptive period. Our review suggests that if the presumptive period were extended, VA would still be required to deny the claims unless the veterans provided additional evidence regarding the chronic nature or disabling impact of their illnesses. VA officials acknowledged that their current reports could be misinterpreted. They told us that VA will assess the extent of the problem and take the necessary corrective action. We obtained comments on a draft of this report from VA officials, including the Deputy Under Secretary for Benefits. The officials generally agreed with our findings and noted that the agency is taking additional steps to address the concerns that we raised. Specifically, VA officials reiterated their commitment to providing veterans with better information regarding acceptable evidence to support undiagnosed illness claims and to more accurately categorizing the reasons that claims are denied. The officials told us that VA's central office will also undertake additional claims reviews to ensure that field offices are following all appropriate procedures. VA's comments included some technical changes, primarily for clarification, which we incorporated in this report as appropriate. As arranged with your office, unless you announce its contents earlier, we plan no further distribution of this report until 7 days after the date of this letter.
At that time, we will send copies to the Chairman, Senate Committee on Veterans' Affairs; the Secretary of Veterans Affairs; and other interested parties. This work was performed under the direction of Irene Chu, Assistant Director, Health Care Delivery and Quality Issues. If you or your staff have any questions, please contact Ms. Chu or me on (202) 512-7101. Other major contributors to this report are listed in appendix II. To identify the evidence standards that VA established to process Persian Gulf War claims, we visited the VA central office in Washington, D.C., and two of the four area processing offices that VA designated as responsible for processing undiagnosed illness claims—Louisville, Kentucky, and Nashville, Tennessee (which together processed 72 percent of undiagnosed illness claims). We also conducted telephone discussions with officials at the other two area processing offices—Phoenix, Arizona, and Philadelphia, Pennsylvania. We also obtained pertinent documents and records from these offices. To obtain information about undiagnosed illness disability compensation claims, we statistically sampled 79 of the 4,990 claims that VA had denied as of September 21, 1995. We randomly selected the claims from VA's database of completed Persian Gulf War claims. Our sample size provides a 90-percent confidence level that the characteristics of our sample match the total population of denied claims within a specified error rate. The error rate was no greater than 11 percent (illustrated in the sketch below). We also reviewed the claims files of 26 randomly selected veterans from the 273 whose claims for undiagnosed illnesses had been granted as of September 21, 1995. We selected four granted claims each from the Nashville, Louisville, and Philadelphia offices and 14 from the Phoenix office. We selected additional claims from the Phoenix office because it had processed 32 percent of all granted claims although it processed only 11 percent of all Persian Gulf claims. This was not a statistical sample; therefore, the results cannot be projected to the universe of granted claims. Instead, we reviewed these claims to allow a comparison with the denied claims. In conducting our samples, we reviewed documents pertinent to our work, including the veterans' application forms; letters from VA to the veterans about additional evidence; medical examinations; and rating statements with the letters containing VA's decisions. Data about all Persian Gulf War illnesses and other information were abstracted from those documents and entered into a database for analysis. The purpose of our review of the denied and granted claim files was to identify the evidence contained therein and gain additional information on VA's reasons and bases for denying or granting the claims. We made no effort to assess the appropriateness of VA's decisions. We performed our review between August 1995 and March 1996 in accordance with generally accepted government auditing standards. Richard Wade, Evaluator-in-Charge; Jon Chasson, Senior Evaluator; Robert DeRoy, Assistant Director, Data Analysis and Evaluation Support; Cynthia Forbes, Senior Evaluator; Michael O'Dell, Senior Social Science Analyst; Susan Poling, Assistant General Counsel; Pamela A. Scott, Communications Analyst.
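The sampling precision cited above (a sample of 79 from 4,990 denied claims at a 90-percent confidence level, with an error rate no greater than 11 percent) is consistent with the standard finite-population margin-of-error formula. The Python sketch below assumes simple random sampling and a worst-case proportion of 0.5; it is an illustration only, not GAO's documented method.

# Back-of-the-envelope check of the sampling precision cited above: 79 of
# 4,990 denied claims at a 90-percent confidence level. Assumes simple
# random sampling and a worst-case proportion of 0.5; GAO's exact method
# is not described in this report.
import math

N, n = 4990, 79  # population of denied claims, sample size
z = 1.645        # z-value for a 90-percent confidence level
p = 0.5          # worst-case proportion

fpc = math.sqrt((N - n) / (N - 1))  # finite population correction
margin = z * math.sqrt(p * (1 - p) / n) * fpc
print(f"margin of error: +/- {margin:.1%}")  # about +/- 9%, inside the
                                             # reported 11-percent bound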
| Pursuant to a congressional request, GAO reviewed the procedures the Department of Veterans Affairs (VA) uses to process Persian Gulf War undiagnosed illness claims. GAO found that: (1) before VA will provide benefits, veterans must provide it with evidence of a chronic disability and verifiable evidence of time lost from work, prior medical treatment, or changes in appearance, physical abilities, or psychological condition; (2) both denied and approved claims consist primarily of service medical records and VA medical examinations, but approved claims usually include an independent medical history and sometimes include nonmedical evidence; (3) denied claims lacked sufficient evidence because of poor VA procedures and veterans' failure to collect relevant information; and (4) while VA reports that most denied claims were denied because the alleged disability did not become evident during active duty or the subsequent 2-year presumptive period, it stated in denial letters to veterans that their claims lacked sufficient evidence.
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Pioneer ACO Model’s overall goal is to improve the delivery of Medicare services by reducing expenditures while preserving or enhancing the quality of care for patients. Beginning in 2012, CMS contracted with ACOs for a 3-year period and subsequently offered a 2-year contract extension to ACOs that completed the first three years. The ACOs are expected to meet the goal of the model, in part, by coordinating the care they provide to Medicare beneficiaries and engaging beneficiaries in their own care. CMS designed the model for organizations with experience in providing coordinated care to beneficiaries at a lower cost to Medicare. Another goal of the Pioneer ACO Model is to help inform potential future changes to the agency’s permanent ACO program, the Medicare Shared Savings Program (MSSP), which began about 3 months after the Pioneer ACO Model. MSSP ACOs share less financial risk with CMS than Pioneer ACOs, as many are not responsible for paying CMS for any losses that they may generate during their contract period. CMS established eligibility requirements for participation in the Pioneer ACO Model through the request for applications and the contract between CMS and Pioneer ACOs. The requirements include the following: Organizational structure. ACOs must be structured to allow the organization to partner with groups of providers, including ACO professionals such as physicians or physician assistants, to accept joint responsibility for the cost and quality-of-care outcomes for a specified group of patients. For example, ACOs may be structured as ACO professionals in a group practice or as partnerships between ACO professionals and hospitals. Care improvement plan. ACOs must implement a care improvement plan, as they described in their applications. These plans include a range of care strategies such as providing remote patient monitoring to beneficiaries with chronic illnesses and engaging beneficiaries through shared decision making. Beneficiary protections. ACOs must ensure that their providers and suppliers make all Medicare-covered services available to beneficiaries and that they do not inhibit beneficiaries’ freedom of choice to obtain health services from providers or suppliers not participating in the model. The ACOs annually provide CMS with a list of the providers and suppliers that have elected to participate as Pioneer providers or suppliers. Quality performance standards. ACOs must completely and accurately report quality data annually to CMS for 33 measures that address four quality domains. The four domains are (1) patient experiences of care, (2) care coordination and patient safety, (3) preventive health care, and (4) disease management for at-risk populations, such as beneficiaries with diabetes. ACOs must also meet performance standards for quality. In the first year (2012), CMS defined the quality performance standard as completely and accurately reporting all of the quality measures, regardless of the ACO’s scores on the measures. Beginning in 2013, CMS required that ACOs score a minimum level for at least 70 percent of the quality measures within each of the four quality domains. CMS determined a minimum performance level for each quality measure, based on performance benchmarks. These performance standards for quality apply to all participating ACOs (a minimal sketch of the domain-level check appears below). CMS’s oversight and evaluation responsibilities are broadly defined in the contract between CMS and the Pioneer ACOs and in regulation.
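To make the post-2012 quality standard concrete, the following is a minimal sketch of the domain-level check described above. Only the 70-percent rule and the four domain names come from the text; the measure counts, scores, and the uniform minimum performance level are hypothetical.

```python
def meets_quality_standard(domain_scores, domain_minimums, required_share=0.70):
    """Return True if, in every domain, at least `required_share` of the
    measures score at or above CMS's minimum performance level."""
    for domain, scores in domain_scores.items():
        minimum = domain_minimums[domain]
        passing = sum(1 for score in scores if score >= minimum)
        if passing / len(scores) < required_share:
            return False
    return True

# Hypothetical example: an ACO failing the care coordination and patient
# safety domain, echoing the 2013 case described later in this report.
domain_scores = {
    "patient_experiences_of_care": [0.8, 0.9, 0.7],
    "care_coordination_patient_safety": [0.3, 0.4, 0.2, 0.5, 0.6],
    "preventive_health_care": [0.7, 0.8],
    "at_risk_disease_management": [0.6, 0.7, 0.9],
}
domain_minimums = {domain: 0.5 for domain in domain_scores}  # assumed minimum
print(meets_quality_standard(domain_scores, domain_minimums))  # False (2 of 5 pass)
```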
CMS is responsible for monitoring beneficiary service use and investigating any unusual service use patterns to assess, for example, whether ACOs may be compromising beneficiary care. CMS is also responsible for monitoring whether ACOs may be avoiding at-risk beneficiaries. CMS may use a range of methods to conduct this monitoring, including analyzing beneficiary and provider complaints, and may investigate patterns suggesting that an ACO has avoided at-risk beneficiaries. In addition, CMS’s oversight role includes monitoring ACOs’ compliance with the quality performance standards. CMMI’s Seamless Care Models Group is responsible for carrying out the agency’s oversight responsibilities for the model. CMS is responsible for conducting an evaluation of the model’s financial and quality performance results and making the evaluation findings available to the public in a timely manner. CMS uses a contractor to conduct the evaluation and has chosen to focus the evaluation on eight research areas, based on a conceptual model outlining the pathways in which various factors can affect ACO performance results. The eight research areas are (1) Medicare service use and expenditures, (2) unintended consequences of ACOs, (3) beneficiary access to care, (4) ACOs’ care coordination activities, (5) quality of care, (6) health care markets served by ACOs, (7) ACO characteristics, and (8) ACO attrition. See 42 U.S.C. § 1315a(b)(4). The law does not specify a timeline for making these findings available to the public. Under this provision, CMS is responsible for evaluating models to test innovative payment and service delivery, such as the Pioneer ACO Model. Taking into account the evaluation findings, the Secretary of Health and Human Services may expand the duration and scope of the Pioneer ACO Model. Medicare beneficiaries are assigned by CMS to Pioneer ACOs based on their prior use of primary care services. CMS refers to this as “alignment.” ACOs are responsible for the annual expenditures of their aligned beneficiaries. CMS determines through an analysis of Medicare claims data which beneficiaries have received the plurality of their primary care services from primary care providers affiliated with an ACO in the prior three years (a simplified sketch of this alignment logic appears at the end of this passage). The ACO’s financial performance is based on the annual expenditures of its aligned beneficiaries for services covered by Medicare Parts A and B, which include hospital stays, outpatient services, physician visits, and skilled nursing facility (SNF) stays. To assess financial performance, CMS includes the expenditures for services provided by the ACO as well as by non-ACO Medicare providers, since aligned beneficiaries may continue to obtain services from providers that are not affiliated with the ACO. Pioneer ACOs chose one of five payment arrangements with CMS that specified the type of risk sharing and the sharing rates, that is, the percentage of savings or losses that the ACO shared with CMS. The type of risk sharing is one- or two-sided. Under one-sided risk sharing, the ACO may receive a payment from CMS if it generates a minimum amount of savings but does not owe CMS a payment if it generates losses. In comparison, under two-sided risk sharing, an ACO owes CMS a payment if it generates a minimum amount of losses and is eligible to receive a payment from CMS if it produces savings. Four of the five arrangements required two-sided risk sharing in the first and second years; the other arrangement allowed for one-sided risk sharing, but only in the first year.
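As a rough illustration of the claims-based alignment just described, the sketch below assigns each beneficiary to the organization whose affiliated primary care providers delivered the plurality of that beneficiary's primary care services over a look-back window. The data structures and names are hypothetical, and CMS's actual claims analysis is more involved (for example, in how it weights services and breaks ties).

```python
from collections import Counter

def align_beneficiaries(claims, provider_affiliation):
    """claims: list of (beneficiary_id, provider_id) primary care services
    billed over the prior three years.
    provider_affiliation: provider_id -> organization name (an ACO or
    'unaffiliated').
    Returns beneficiary_id -> organization that delivered the plurality of
    that beneficiary's primary care services (ties broken arbitrarily here)."""
    tallies = {}
    for beneficiary, provider in claims:
        organization = provider_affiliation.get(provider, "unaffiliated")
        tallies.setdefault(beneficiary, Counter())[organization] += 1
    return {b: counts.most_common(1)[0][0] for b, counts in tallies.items()}

# Hypothetical claims: beneficiary B1 saw a Pioneer-affiliated provider most often.
claims = [("B1", "drA"), ("B1", "drA"), ("B1", "drB"), ("B2", "drB")]
affiliation = {"drA": "Pioneer ACO X"}
print(align_beneficiaries(claims, affiliation))
# {'B1': 'Pioneer ACO X', 'B2': 'unaffiliated'}
```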
Half of the ACOs (16 of 32) that participated in the first year selected the arrangement with one-sided risk sharing in the first year, and half (16 of 32) selected arrangements with two-sided risk sharing. The sharing rate specifies the maximum percentage of savings or losses that the ACO may share with CMS. The sharing rates increase from the first to the second year in each of the payment arrangements. (See table 1.) CMS determines whether each Pioneer ACO has generated savings, losses, or neither by comparing the actual expenditures for its aligned beneficiaries for each year against its spending benchmark. Each ACO’s spending benchmark is based on the baseline expenditures for the ACO’s aligned beneficiaries. Specifically, the spending benchmark incorporates each ACO’s actual expenditures for its aligned beneficiaries from 2009 to 2011 and the Medicare national growth rate. CMS subtracts the ACO’s expenditures for each year from the ACO’s spending benchmark, and if the ACO’s expenditures are lower than the benchmark by at least a minimum amount, the ACO has produced shared savings. In contrast, if the ACO’s expenditures exceed the benchmark by at least a minimum amount, the ACO has generated shared losses. CMS calculates a dollar amount for each ACO’s final annual payment if the ACO generates shared savings or losses. To perform this calculation, CMS multiplies the amount of shared savings or losses by the ACO’s final sharing rate. For shared savings, CMS calculates the final sharing rate by multiplying the ACO’s sharing rate by its total quality score. As a result, ACOs with higher total quality scores will have higher final sharing rates for savings and thus will receive a higher portion of any shared savings. To calculate the final sharing rate for losses, CMS first adds 40 percent to the ACO’s sharing rate and then subtracts the product of the sharing rate and the ACO’s total quality score. As a result, ACOs with higher total quality scores will have lower final sharing rates for losses and thus will owe CMS a lower portion of any shared losses. The total quality score is calculated with the 33 quality measures that the ACOs report to CMS each year. ACOs earn from 0 to 2 points for each measure, depending on their level of performance relative to the performance benchmarks CMS established. The total quality score is a percentage of the maximum number of points that an ACO can earn for the measures combined. The maximum total quality score is 100 percent. As an example of a final sharing rate for an ACO with savings, an ACO with a sharing rate of 50 percent and a quality score of 80 percent would have a final sharing rate of 40 percent (0.50 x 0.80 = 0.40). In this example, CMS would pay the ACO an amount equal to 40 percent of the shared savings it generated. (See fig. 1.) CMS applies a ceiling to this calculation equal to the sharing rate established in the ACO’s payment arrangement; a condensed sketch of this arithmetic appears below. Fewer than half of the ACOs that participated in the Pioneer ACO Model in the first two years earned shared savings in each year, although the ACOs overall produced net shared savings. The 23 ACOs that participated in the model both years had significantly higher quality scores in the second year than in the first year for 67 percent of the quality measures that they reported to CMS. Fewer than half of the ACOs that participated in the Pioneer ACO Model in 2012 and 2013—the first two years of the model—earned savings that were shared with CMS.
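The benchmark comparison, quality scoring, and final-sharing-rate arithmetic described above condense to a few lines. In the sketch below, the 0-to-2-point scoring, the 40-percent constant for losses, and the ceiling come from the text; the minimum savings/loss threshold and the dollar amounts (other than the worked 0.50 x 0.80 example) are hypothetical.

```python
def total_quality_score(points_per_measure, max_points=2):
    """Share of the maximum points earned across the 33 quality measures
    (each measure earns 0 to 2 points)."""
    return sum(points_per_measure) / (len(points_per_measure) * max_points)

def final_payment(benchmark, expenditures, sharing_rate, quality_score,
                  minimum_threshold=0.0):
    """Positive result: CMS pays the ACO (shared savings).
    Negative result: the ACO owes CMS (shared losses).
    Zero: the difference fell inside the minimum savings/loss threshold."""
    difference = benchmark - expenditures
    if abs(difference) < minimum_threshold:
        return 0.0
    if difference > 0:  # savings: final rate = sharing rate x quality score
        final_rate = sharing_rate * quality_score
    else:               # losses: final rate = (rate + 40%) - rate x quality score
        final_rate = (sharing_rate + 0.40) - sharing_rate * quality_score
    final_rate = min(final_rate, sharing_rate)  # ceiling at the arrangement's rate
    return difference * final_rate

# Worked example from the text: a 50 percent sharing rate and an 80 percent
# quality score yield a 40 percent final sharing rate; applied to a
# hypothetical $10 million of savings, CMS would pay the ACO $4 million.
print(final_payment(110e6, 100e6, sharing_rate=0.50, quality_score=0.80))  # 4000000.0
```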
Of the 32 ACOs that participated in 2012, 13 (about 41 percent) produced about $139 million in total shared savings. Of the 23 ACOs that participated in 2013, 11 (48 percent) produced about $121 million in total shared savings. The amount of shared savings that the 13 ACOs produced in 2012 ($139 million) and the amount the 11 ACOs produced in 2013 ($121 million) each represent about 4 percent of the total expenditures for the ACOs that produced shared savings in each year. The average amount of shared savings that the ACOs produced each year was about $11 million (per ACO with shared savings). CMS provided payments to these ACOs for about 56 percent of the total shared savings each year. For example, in 2012, CMS paid 13 ACOs $77 million of the $139 million that they produced in shared savings. The average payment amount that CMS made to ACOs that produced shared savings was about $6 million in each year. One of the 32 Pioneer ACOs (3 percent) that participated in the first year produced losses that were shared with CMS, and 6 of the 23 participating ACOs (26 percent) produced shared losses in the second year. The total amount of shared losses that the ACO produced in 2012 was $5.1 million, and in 2013 the 6 ACOs produced about $23 million in total shared losses. On average, ACOs with shared losses in 2013 produced $3.8 million each in shared losses, with a range from $2.2 million to $6.3 million. In 2013, ACOs with shared losses paid or are expected to pay CMS about $11 million, an amount equal to about 48 percent of the $23 million in shared losses that they produced. The remaining ACOs did not produce shared savings or shared losses in either year. Eighteen of the 32 ACOs (56 percent) did not produce shared savings or losses in 2012, and 6 of the 23 ACOs (26 percent) did not produce shared savings or losses in 2013. (See table 2.) The 23 ACOs that participated in the Pioneer ACO Model in both 2012 and 2013 had significantly higher quality scores in the second year than in the first year for two-thirds of the quality measures (22 of the 33, or 67 percent) that they reported to CMS. We observed significantly higher scores for measures in each of the four quality domains: (1) patient experiences of care; (2) care coordination and patient safety; (3) preventive health care; and (4) disease management for at-risk populations. ACOs demonstrated the most improvement in the disease management for at-risk populations domain. That is, we found that the ACOs had higher scores in 2013 than in 2012 for 83 percent of the measures (10 of the 12) in this domain. For example, ACOs increased the percentage of beneficiaries with a diagnosis of hypertension whose blood pressure was adequately controlled, from about 65 percent in 2012 to 74 percent in 2013. We observed no significant differences between ACOs’ scores in 2012 and 2013 for 10 of the 33 quality measures (30 percent), but we found a statistically significant decline in quality for one measure. Specifically, the rate of hospital admissions for beneficiaries with congestive heart failure was higher in 2013 than in 2012. Table 3 shows the average quality scores in 2012 and 2013 and the quality measures for which we observed significant differences in scores from 2012 to 2013. (See app. I for a summary of the distribution of quality scores in 2012 and 2013.) CMS oversees Pioneer ACOs by monitoring the service use of their aligned beneficiaries and the quality of care provided by the ACOs, and by investigating provider and beneficiary complaints about ACOs.
As provided for by law, CMS has reported its evaluation findings publicly for the first year of the Pioneer ACO Model in 2013, and these findings addressed two of the eight research areas that CMS established for the evaluation. CMS oversees Pioneer ACOs by monitoring the service use of their aligned beneficiaries, pursuant to the contract between CMS and ACOs and CMS regulation. CMS monitors beneficiaries’ use of services quarterly by reviewing the expenditure and utilization reports that a CMS contractor produces for each ACO, according to CMS officials. The reports include the baseline expenditures for each ACO and expenditures by the type of services that the ACO’s aligned beneficiaries have received, such as physician and SNF services. As of February 2015, CMS officials indicated that they had examined two reports about potentially discrepant trends in beneficiaries’ use of services. In one case, an ACO raised a concern with CMS that its negative financial performance in the first year did not reflect the actual service use of its aligned beneficiaries. CMS investigated the service use for the beneficiaries aligned to this ACO and observed a sharp increase in expenditures during one time period. CMS officials consulted with the agency’s Office of the Actuary to further investigate this trend and determined that a national claims processing error had occurred, but that the correction had not been implemented properly in the affected ACO’s geographic region. CMS officials and its contractors corrected the error, and determined that the error did not affect other ACOs in the region. In the second case, an ACO stated that the service use data included in an expenditure and utilization report for the first year of the model could be inaccurate. The ACO believed the data were inaccurate because the service use in the report was higher than the service use of aligned beneficiaries as tracked by the ACO. CMS officials investigated expenditures over time and by service type for the ACO’s beneficiaries, compared its expenditures to state and national populations, and determined that the ACO’s beneficiaries had a significant increase in SNF service use. The analysis the ACO had presented to CMS included inpatient service use but not SNF use, according to CMS officials. CMS also oversees Pioneer ACOs by monitoring their compliance with the model’s quality performance standards, consistent with the contract between CMS and the ACOs and CMS regulation. CMS officials review the annual quality reports that a CMS contractor produces for each ACO, according to agency officials. The quality reports include information about the ACO’s performance for each of the 33 quality measures and state whether the ACO achieved the minimum performance standard in each of the four quality domains. CMS determined that one ACO did not meet the quality performance standards in the second year of the model, because it did not meet the minimum standard in the care coordination and patient safety domain. The ACO achieved a score of 40 percent for this quality domain instead of the required minimum score of 70 percent. As a result, CMS required the ACO to submit a corrective action plan to CMS. The plan, provided to CMS in October 2014, outlines steps the ACO will take to ensure future compliance with the quality standards, according to CMS officials. CMS and the ACO discussed and reviewed the submitted corrective action plan in November 2014. 
CMS officials told us they also review the performance levels for the quality measures to assess whether ACOs may have compromised beneficiary care. That is, they compare the ACOs’ scores to the benchmarks for each of the individual quality measures to evaluate the ACOs’ performance. For example, each ACO scored over 80 out of 100 in 2013 for the measure reflecting access to specialists—such as surgeons and cardiologists. Further, each ACO’s quality score fell into the two highest performance levels, according to CMS’s benchmarks. CMS also investigates complaints about Pioneer ACOs that the agency receives from Medicare beneficiaries and providers as part of its monitoring efforts. As of February 2015, CMS officials indicated that the agency had completed or had begun investigating three complaints. CMS has completed its investigation of a provider complaint that it received from the Department of Health and Human Services’ Office of Inspector General in March 2014. In this case, according to CMS officials, a provider alleged that an ACO was inhibiting beneficiaries’ choice of home health providers. CMS officials spoke with the ACO in June 2014 and determined that the complaint was unsubstantiated. CMS made this determination after the ACO demonstrated that it had comprehensive procedures in place to avoid restricting beneficiaries’ choice of home health providers. CMS is currently investigating two other complaints, one from a beneficiary and the other from a provider. In the first case, CMS received a beneficiary complaint in August 2014 in which the beneficiary alleged that an ACO stinted on care and provided inadequate medical care. CMS officials stated that they are coordinating with representatives from a CMS regional Quality Improvement Organization and CMS’s Center for Program Integrity to investigate this complaint, including conducting a full medical chart review. In the second case, CMS is investigating a provider complaint from a SNF alleging that an ACO had placed undue pressure on the SNF to participate in the ACO. CMS officials met with the trade association that submitted the complaint on behalf of the SNF in September 2014, and a CMS contractor has initiated discussions with other SNFs that are affiliated with the ACO under investigation. Through these discussions, CMS officials indicated that they plan to determine whether the ACO misrepresented any information about the Pioneer ACO Model. CMS officials told us that they occasionally receive general queries related to Pioneer ACOs from their regional offices and have asked staff in the regional offices to investigate the queries. Based on its monitoring efforts, CMS has no substantiated evidence suggesting that beneficiary care has been compromised, as of February 2015. For example, CMS has not determined that ACOs have stinted on the care that they provide to beneficiaries or have avoided providing care to at-risk beneficiaries. As provided for by law, CMS has reported its evaluation findings publicly for the first year of the Pioneer ACO Model. The reported findings addressed two of the eight research areas that CMS established for the evaluation—Medicare service use and expenditures and ACO characteristics. CMS issued a public report in November 2013 that included findings related to these two research areas.
For example, CMS reported that none of the ACO characteristics it tested, such as organization type, was significantly related to an ACO’s ability to reduce expenditures in the first year of the model, and that most of the ACOs that reduced expenditures had higher Medicare expenditures than their comparison groups prior to the start of the Pioneer ACO Model. CMS planned to issue the report in the summer of 2013, and intended to include results for more of the research areas, according to agency officials. However, the release of the report was delayed until November 2013 because of delays in securing the CMS contractor’s access to Medicare claims data. The delay also limited the scope of the findings that CMS could report, according to CMS officials, and these data access issues have since been resolved. In 2015, CMS also plans to report additional findings for the first year of the model, such as findings related to quality of care, as well as second-year findings for five research areas: (1) unintended consequences for beneficiaries, (2) access to care, (3) beneficiary quality of care, (4) health care markets, and (5) ACO characteristics. CMS officials added that although they have not made such findings public, they have shared preliminary second-year findings internally for five of the eight research areas and that their analysis is ongoing for the other three research areas. (See table 4.) The Department of Health and Human Services (HHS) reviewed a draft of this report and provided written comments, which are reprinted in appendix II. In its comments, HHS emphasized the Pioneer ACO Model’s goal to reduce Medicare costs while providing beneficiaries better care through greater care coordination. HHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. This appendix presents information on the distribution of scores for the 33 quality measures that 23 Pioneer ACOs reported to CMS in 2012 and 2013. (See table 5.) We used the Wilcoxon signed-rank test, a nonparametric test, to analyze the differences in ACOs’ quality scores from 2012 to 2013. The signed-rank test determines whether the differences between the median scores for the 2 years are statistically significant. In addition to the contact named above, Martin T. Gahart, Assistant Director; Yesook Merrill, Assistant Director; George Bogart; Pamela Dooley; Toni Harrison; and Roseanne Price made key contributions to this report. | ACOs were established in Medicare to provide incentives to physicians and other health care providers to better coordinate care for beneficiaries across care settings such as doctors' offices, hospitals, and skilled nursing facilities. The Pioneer ACO Model was established after the Patient Protection and Affordable Care Act of 2010 created CMMI within CMS to test new models of service delivery in Medicare. Thirty-two ACOs joined the model in 2012, the first year.
Under the model, CMS rewards ACOs that lower their growth in health care spending while meeting performance standards for quality of care. GAO was asked to review the results of the Pioneer ACO Model and CMS's oversight of the ACOs. In this report GAO (1) describes the financial and quality results for the first two years of the model and (2) examines how CMS oversees and evaluates the model. To do this work, GAO analyzed data from CMS on the financial and quality results for each ACO for 2012 and 2013 (the first two years of the model). GAO analyzed ACOs' expenditures, spending benchmarks, the amount of shared savings and losses, and payment amounts for shared savings or losses. GAO also reviewed relevant laws, regulations, and documents describing CMS's oversight and evaluation role and interviewed CMS officials about the agency's oversight and evaluation activities. Health care providers and suppliers voluntarily form accountable care organizations (ACOs) to provide coordinated care to patients with the goal of reducing spending while improving quality. Within the Centers for Medicare & Medicaid Services (CMS), the Center for Medicare & Medicaid Innovation (CMMI) began testing the Pioneer ACO Model in 2012. Under this model, ACOs can earn additional Medicare payments if they generate savings, which are shared with CMS, but must pay CMS a penalty if their spending is higher than expected. ACOs must report quality data to CMS annually and meet quality performance standards. GAO found that fewer than half of the ACOs earned shared savings in 2012 and in 2013, although overall the Pioneer ACO Model produced net shared savings in each year. Specifically, 41 percent of the ACOs produced $139 million in total shared savings in 2012, and 48 percent produced $121 million in total shared savings in 2013. In 2012 and 2013 CMS paid ACOs $77 million and $68 million, respectively, for their shared savings. The Pioneer ACO Model produced net shared savings of $134 million in 2012 and $99 million in 2013. GAO also found that ACOs that participated in both years had significantly higher quality scores in 2013 than in 2012 for 67 percent of the quality measures. CMS oversees the use of Medicare services by beneficiaries receiving their care from ACOs and the quality of care that ACOs provide, consistent with the contract between CMS and ACOs and CMS regulation, and has reported publicly on findings from its evaluation of the model. CMS reviews reports on each ACO's service use, expenditures, and quality performance and investigates complaints about ACOs. As of February 2015, CMS officials said the agency had investigated two potentially discrepant trends in service use. CMS determined that one ACO did not meet the quality performance standards in 2013, and, as a result, CMS is requiring it to implement an action plan to ensure future compliance. Based on its monitoring efforts, CMS has no substantiated evidence suggesting that beneficiary care has been compromised, as of February 2015. CMS reported publicly on its evaluation findings, as provided for by law, in 2013. CMS included in this initial report findings related to Medicare service use and expenditures and ACO characteristics—two of the eight research areas that it established for the evaluation. CMS officials told GAO that the agency has shared preliminary findings within CMS for five of the six remaining areas and that it plans to report publicly on additional findings in 2015.
In commenting on a draft of this report, the Department of Health and Human Services (HHS) emphasized the overall goal of the Pioneer ACO Model. HHS also provided technical comments that GAO incorporated as appropriate. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Wastewater systems in the United States provide essential services to residential, commercial, and industrial users by collecting and treating wastewater and discharging it into receiving waters. In light of the events of September 11, 2001, Congress and the executive branch have paid increased attention to improving the security of the nation’s water infrastructure—including wastewater systems—to protect against future terrorist threats. While more federal resources have been directed toward drinking water security than wastewater security, some maintain that wastewater systems, like drinking water systems, also possess vulnerabilities that could be exploited. The unique characteristics and components these systems possess provide for the efficient collection, treatment, and disposal of wastewater—functions that are vital to the health of the general public and the environment. However, many of these same characteristics and components have been identified as potential means for carrying out a terrorist attack. A terrorist could seek to impair a wastewater system’s treatment process, to use a wastewater system to carry out an attack elsewhere, or both. Documented accidents and intentional acts highlight the destruction that can arise from an attack on a wastewater system. For example, in June 1977 in Akron, Ohio, an intentional release of naphtha (a cleaning solvent) and alcohol into a sewer by vandals at a rubber manufacturing plant caused explosions 3.5 miles away from the plant, damaging about 5,400 feet of sewer line and resulting in more than $10 million in damage. A majority of the nation’s wastewater is treated by publicly owned treatment works (POTW) that serve a variety of customers, including private homes, businesses, hospitals, and industry. These POTWs discharge treated water into surface waters and are regulated under the Clean Water Act. Nationwide, there are over 16,000 publicly owned wastewater treatment plants, approximately 800,000 miles of sewers, and 100,000 major pumping stations. This infrastructure serves more than 200 million people, or about 70 percent of the nation’s total population. The remainder is served by privately owned utilities or by on-site systems, such as septic tanks. This report addresses both public and private wastewater systems. Though outnumbered by the small systems, the relative handful of large wastewater systems serve the great majority of people. As depicted in figure 2, only 3 percent of the nation’s total wastewater systems (approximately 500 systems) provide service to 62 percent of the population served by POTWs. Each of these systems treats more than 10 million gallons per day (MGD) of wastewater. Wastewater systems vary by size and other factors but, as illustrated in figure 3, all include a collection system and treatment facility. The underground network of sewers includes both sanitary and storm water collection lines that may range from 4 inches to greater than 20 feet in diameter. Storm water lines tend to be large in diameter in order to accommodate a variety of precipitation events. Some of the nation’s older cities have combined sanitary and storm water lines. Sewers are connected to all buildings and streets within typical communities through indoor plumbing and curb drains. Most systems were designed for easy and frequent access to facilitate maintenance activities. Access is usually gained through manholes, which are typically located approximately every 300 feet.
Many collection systems rely on gravity to maintain the flow of sewage through the pipes toward the treatment plant. However, the geographic expanse of a collection system, both in size and topography, may impede the flow. For this reason, collection systems may depend on pumping stations to lift the flow to gain elevation for continued gravity flow until the wastewater reaches the wastewater treatment plant. Once the wastewater (influent) enters the treatment plant through the collection system, the treatment process removes contaminants such as organic material, dirt, fats, oils and greases, nitrogen, phosphorus, and bacteria. The influent typically undergoes several stages of treatment before it is released. Primary treatment includes the removal of larger objects, such as rags, cans, or driftwood, through a screening device or a grit removal system, and solids are removed through sedimentation. Secondary treatment includes a biological process that consumes pollutants, as well as final sedimentation. Some facilities also use tertiary treatment to further remove nutrients and other matter. Following secondary or tertiary treatment, the wastewater is disinfected to destroy harmful bacteria and viruses. Disinfection is often accomplished with chlorine, which is stored on-site at the wastewater treatment plant. The collection and treatment process is typically monitored and controlled by a Supervisory Control and Data Acquisition (SCADA) system, which allows utilities to control such things as the amount of chlorine needed for disinfection. In December 2003, the President issued Homeland Security Presidential Directive-7 (HSPD-7), which established a national policy for federal departments and agencies to identify and set priorities for the nation’s critical infrastructures and to protect them from terrorist attacks. HSPD-7 established the Environmental Protection Agency (EPA) as the lead federal agency to oversee the security of the water sector, both drinking water and wastewater. Presidential Decision Directive 63 had done so earlier in May 1998, with a focus primarily on drinking water. Based on the 1998 directive, EPA and its industry partner, the Association of Metropolitan Water Agencies (AMWA), established a communication system, the Water Information Sharing and Analysis Center (Water ISAC). The Water ISAC was designed to provide water utilities throughout the nation with real-time alerts of possible terrorist activity and access to a library of information and contaminant databases. In fiscal year 2004, Congress appropriated $2 million for the Water ISAC, which today serves more than 1,000 users from water and wastewater systems. In November 2004, the Water ISAC launched a free security advisory system known as the Water Security Channel to distribute federal advisories on security threats via e-mail to the water sector. EPA recently established a Water Security Working Group to advise the National Drinking Water Advisory Council (NDWAC) on ways to address several specific security needs of the sector. The working group is made up of 16 members selected on the basis of experience, geographic location, and their unique drinking water, wastewater, or security perspectives. It represents a diverse collection of drinking water and wastewater utilities of all sizes, state and local public health agencies, and environmental and rate-setting organizations.
The group's charge includes making recommendations to the full council by the spring of 2005 that identify features of an active and effective security program and ways to measure the adoption of these practices. The working group is also charged with identifying incentives for the voluntary adoption of an active and effective security program in the water and wastewater sector. The Department of Homeland Security (DHS) is also seeking to enhance communication between critical infrastructure sectors, such as the water sector, and the government. The Homeland Security Information Network (HSIN) is being developed to provide the water sector with a suite of information and communication tools to share critical information within the sector, across other sectors, and with DHS. According to DHS, these information and collaboration tools will facilitate the protection, stability, and reliability of the nation’s critical water infrastructure and provide threat-related information to law enforcement and emergency managers on a daily basis. A Water Sector Coordinating Council established by the department with representative members of the water sector community is charged with identifying information and other needs of the sector, including the appropriate use of and the relationships among the Water ISAC, the Water Security Channel, and HSIN. According to a DHS official, the department is also assembling a Government Coordinating Council made up of federal, state, and local officials to assess impacts across critical infrastructure sectors, including the water sector. While federal law does not address wastewater security as comprehensively as it addresses drinking water security, wastewater utilities have taken steps, both in concert with EPA and on their own, to protect their critical components. Since 2002, EPA has provided more than $10 million to help address the security needs of the wastewater sector. A large portion of this funding has been awarded to nonprofit technical support and trade organizations, including the Association of Metropolitan Sewerage Agencies (AMSA) and the Water Environment Federation, to develop tools and training on conducting vulnerability assessments and on planning for and practicing responses to emergencies and incidents. Also, according to EPA, because of the relationship between the drinking water and wastewater sectors, much of the work and funding that has been allocated for drinking water security also directly benefits the wastewater sector. The Water Environment Research Foundation, for instance, has been conducting research on cyber security, real-time monitoring, the effects of contaminants on treatment systems, and other topics that could benefit both sectors. In addition, EPA has supported the development of a variety of resource documents for utilities, such as guidance on addressing threats and security product guides for evaluating available technologies, and has offered additional technical support to small systems. To assist in the completion of vulnerability assessments, AMSA, with the EPA funding cited above, developed technical assistance documents and software, including the Vulnerability Self Assessment Tool (VSAT), that are available free of charge to water and wastewater systems. The VSAT methodology and software offer utilities a structured approach for assessing their vulnerabilities and establishing a risk-based approach to taking desired actions.
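VSAT's internal mechanics are not described here, so the sketch below illustrates only the generic risk-based prioritization that such tools support: score each critical asset by estimated likelihood and consequence of an attack, then rank countermeasure work accordingly. The asset names and weights are hypothetical.

```python
def prioritize(assets):
    """assets: list of (name, likelihood 1-5, consequence 1-5).
    Rank assets by a simple risk score = likelihood x consequence."""
    scored = [(likelihood * consequence, name)
              for name, likelihood, consequence in assets]
    return [name for score, name in sorted(scored, reverse=True)]

# Hypothetical wastewater assets echoing the vulnerabilities discussed below.
assets = [
    ("collection sewers", 4, 5),
    ("gaseous chlorine storage", 3, 5),
    ("SCADA network", 4, 3),
    ("pumping station", 3, 3),
]
print(prioritize(assets))
# ['collection sewers', 'gaseous chlorine storage', 'SCADA network', 'pumping station']
```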
Even though the wastewater industry has not been required by law to adopt the security measures required of drinking water utilities, many in the industry maintain that enhanced security must be pursued. They note, however, that the implementation of security measures imposes additional financial costs on a sector that is already experiencing difficulty in meeting the financial challenges of an aging infrastructure. Accordingly, the industry has sought federal assistance through the congressional appropriations process. In 2003, Congress responded by considering legislation that would have authorized $200 million for use in making grants to wastewater utilities to conduct vulnerability assessments and implement security improvements, $15 million for technical assistance for small systems, and $5 million over 5 years for refinement of vulnerability assessment methodologies. As requested by the Chairman and Ranking Minority Member of the Senate Committee on Environment and Public Works, this report identifies experts’ views on the following questions: What are the key security-related vulnerabilities affecting the nation’s wastewater systems? What specific activities should the federal government support to improve wastewater security? What are the criteria that should be used to determine how federal funds are allocated among recipients to improve wastewater security, and how should the funds be distributed? It was outside the scope of this review to ascertain the desirability of using federal funds to support wastewater security or to compare the merits of federal support of the wastewater industry with that of other industries, such as the electric power or transportation industries. Rather, we sought to obtain expert advice on how best to use federal funds to improve wastewater security, should Congress agree that they should be appropriated for this purpose. To obtain information on these three questions, we conducted a three-phase Web-based survey of 50 experts on wastewater security. We identified these experts from a list of more than 100 widely recognized experts in one or more key aspects of wastewater security. In compiling this initial list, we also sought to achieve balance in terms of area of expertise (i.e., state and local emergency response, preparedness, engineering, epidemiology, public policy, security, wastewater treatment, risk assessment, water infrastructure, bioterrorism, and public health). In addition, we sought experts from (1) key federal organizations (e.g., DHS, EPA, and National Science Foundation); (2) key state and local agencies, including health departments and environmental protection departments; (3) key industry and nonprofit organizations, such as AMSA, Environmental Defense, the Water Environment Federation, and the Water Environment Research Foundation; and (4) water utilities serving populations of varying sizes. Of the approximately 70 experts we contacted, 50 agreed to participate and complete all three phases of our survey. A list of the 50 participants in this study is included in appendix I. To obtain information from the expert panel, we employed a modified version of the Delphi method. The Delphi method is a systematic process for obtaining individuals’ views and seeking consensus among them on a question or problem of interest. Since first developed by the RAND Corporation in the 1950s, the Delphi method has generally been implemented using face-to-face group discussions. For this study, however, we adapted the method for use on the Internet.
We used this approach, in part, to eliminate the potential bias associated with group discussions. These biasing effects include the dominance of individuals and group pressure for conformity. Moreover, by creating a virtual panel, we were able to include many more experts than possible with a live panel, allowing us to obtain a broad range of opinions. For each phase in our three-phase Delphi process, we posted a questionnaire on GAO’s survey Web site. Panel members were notified of the availability of the questionnaire with an e-mail message. The e-mail message contained a unique user name and password that allowed each respondent to log on and fill out a questionnaire but did not allow respondents access to the questionnaires of others. In the survey’s first phase, we asked a series of open-ended questions. We pretested these questions with officials from the wastewater utility industry, nonprofit research groups, and a federal agency. Responses were content-analyzed to provide the basis for the questions asked in the subsequent phases. Phase 2 questions were closed-ended and asked experts to rate the relative priority or effectiveness of the security activities, allocation criteria, and funding mechanisms identified in Phase 1. Experts were also invited to provide narrative comments. During the third phase, we provided experts with aggregate group results from Phase 2, along with their own individual answers to the Phase 2 questionnaire. Experts were asked to compare the group results with their own individual answers and to use this information as a basis for reconsidering their answers and revising their individual responses, if they so desired. In addition to the information obtained from our expert panel, we obtained documentation from representatives of professional organizations, such as the National Academy of Sciences, the Water Environment Research Foundation, and AMSA. We also interviewed EPA officials about the agency’s wastewater security programs. During our interviews, we asked officials to provide information on program operations, policies, guidance, and funding levels. We also received training on VSAT from the Water Environment Federation, which was supported by AMSA, and attended specialized conferences on water security held by the American Water Works Association and other organizations. We conducted our work from January 2004 through December 2004 in accordance with generally accepted government auditing standards. Experts responding to our survey identified five key physical assets of wastewater systems as among the most vulnerable to terrorist-related attacks: (1) the collection systems’ network of sewers, which includes underground sanitary, stormwater, and combined sewer lines; (2) treatment chemicals, primarily chlorine, which are used to disinfect wastewater; (3) key components of the wastewater treatment plant, such as its headworks, where the raw sewage first enters the treatment plant; (4) control systems, used to control plant operations; and (5) pumping stations along the collection system, which lift or pump wastewater to allow gravity flow to help move sewage to the treatment plant (see fig. 4). Of these assets, experts ranked the collection systems’ network of sewers and treatment chemicals as the most vulnerable. Experts also identified overarching vulnerabilities that could compromise the overall integrity of the systems’ security.
These vulnerabilities include (1) a general lack of security awareness within the wastewater sector; (2) interdependencies among components of the wastewater system, opening the possibility that a failure of any individual component could bring down the entire system; and (3) interdependencies between the wastewater system and other critical infrastructure, such as electric power supplies, whose failure could disable the wastewater system. In general, our panel of experts’ observations were consistent with those of major organizations that have conducted research on wastewater system vulnerabilities. Among these organizations are the Water Environment Federation and the Association of Metropolitan Sewerage Agencies. The five assets experts considered most vulnerable included the collection systems’ network of sewer lines, treatment chemicals, key components of the wastewater treatment plant, control systems, and pumping stations. Forty-two of the 50 experts we surveyed identified the collection systems’ network of sanitary, storm, and combined sewer lines as among the top five terrorist-related vulnerabilities of wastewater systems. Experts explained that adversaries could use the network of sewers to (1) covertly gain access to intended targets within the service area or (2) convey hazardous or flammable substances that may cause explosions at points along the system or cause harm to the wastewater treatment system or process. Access controls at important installations, such as perimeter fencing, can be defeated by a terrorist who gains access to the facility unseen through the underground collectors. Once access is gained, any activity could then occur—target reconnaissance or surveillance, planting of conventional explosives or weapons of mass destruction, hostage taking, theft of critical documents and items. Many experts also suggested that adversaries could use the collection system as an underground transport system—without ever physically entering the system—for explosive or toxic agents. These substances could be inserted into the system through storm drains, manholes, or household drains. Several experts explained that with prior knowledge of a system’s gravity flow, an adversary could calculate the precise timing and location of an explosion or calculate the amount of a substance that might be necessary to disable or destroy the biological processes of a wastewater treatment plant. However, even without precise knowledge about a system, significant damage can occur as a result of underground sewer explosions. These explosions may also damage natural gas or electric lines often co-located with sewers. One expert cited the effects of an unintentional explosion that occurred in 1981 in Louisville, Kentucky, where thousands of gallons of a highly flammable solvent, hexane, spilled into the sewer lines from a local processing plant. The fumes created an explosive mixture that was eventually ignited by a spark from a passing car. The result was a series of explosions that collapsed a 12-foot-diameter pipe and damaged more than 2 miles of streets. While no one was seriously injured, sewer line repairs took 20 months, followed by several more months to repair the streets. A more serious incident occurred in Guadalajara, Mexico, in April 1992, when a gasoline leak into a sewer caused explosions that killed 215 people, injured 1,500 others, damaged 1,600 buildings, and destroyed 1.25 miles of sewer. The explosion created craters as deep as 24 feet and as large as 150 feet in diameter.
Another alarming incident was an intentional release of a cleaning solvent (naphtha) and alcohol into a sewer that caused explosions 3.5 miles away from the source and damaged about 5,400 feet of sewer line. This June 1977 incident, caused by vandals at a rubber manufacturing plant in Akron, Ohio, resulted in more than $10 million in damage. Adversaries may also use the system to convey substances that disable the treatment process. For example, as one expert explained, an adversary could introduce a highly toxic chemical into the sewer that could damage the biological processes involved in treatment. Several experts warned that disabling the treatment process could cause the release of improperly treated sewage, placing the receiving water in jeopardy and potentially harming human health and the environment. In February 2002, such an incident occurred in Hagerstown, Maryland, when chemicals from an unknown source entered the wastewater treatment plant and destroyed the facility’s biological treatment process. This incident resulted in the discharge of millions of gallons of partially treated sewage into a major tributary of the Potomac River, less than 100 miles from a water supply intake for the Washington, D.C., metropolitan area. Thirty-two of the 50 experts we surveyed identified process chemicals used in wastewater treatment as among the top five terrorist-related wastewater system vulnerabilities. Wastewater treatment facilities use a variety of chemicals, including chlorine, sulfur dioxide, and ammonia, during the treatment process. Most experts singled out chlorine gas as a major chemical of concern because it is an extremely volatile and hazardous chemical that requires specific precautions for its safe transport, storage, and use. Chlorine is a disinfectant that is commonly used in the treatment process before treated water (effluent) is discharged into local waterways. However, if chlorine, which is stored and transported as a liquefied gas under pressure, is accidentally released into the atmosphere, it quickly turns into a potentially lethal gas. Because gaseous chlorine is heavier than air, the cloud it forms tends to spread along the ground. Consequently, accidental or intentional releases of chlorine could be extremely harmful to those in the immediate area. Exposure to chlorine can burn the eyes and skin, inflame the lungs, and be deadly if inhaled. One expert pointed out that accidental releases of chlorine gas have occurred numerous times and that a deliberate release would be relatively feasible. The expert further explained that many wastewater plants have been converting from chlorine gas to alternative disinfection methods for various reasons, including the risk of a release. Recognizing that chlorine gas releases pose threats to the public and the environment, EPA requires, among other things, that any facility storing at least 2,500 pounds of chlorine gas submit a risk management plan; as of December 2004, EPA estimated that about 1,200 plants fit this category. The plan includes an estimate of the potential consequences to surrounding communities of a hypothetical “worst-case” accidental chemical release from the plant. These estimates include the residential population located within the range of a toxic gas cloud produced by a “worst-case” chemical release, called the vulnerable zone.
Several experts stated that a terrorist could use chlorine gas as a weapon, either at a wastewater plant that is in close proximity to a specific target population, or through theft and use at another location. In fact, on September 11, 2001, railroad tanker cars filled with toxic chemicals, including chlorine, sat at a treatment plant across the river from the Pentagon as it was being attacked. At that time, the population within the plant’s vulnerable zone was 1.7 million people. Within weeks after September 11, this facility converted to an alternative disinfection method. Other facilities have also eliminated the use of chlorine gas, choosing instead chlorine-based technologies (e.g., sodium hypochlorite, calcium hypochlorite, mixed oxidant generation) or nonchlorine-based technologies (e.g., ozone and ultraviolet light). However, as one expert noted, several dozen wastewater treatment plants in heavily populated areas continue to use large amounts of chlorine gas. In addition to concerns over on-site chlorine storage, experts were also concerned about the safe transport of chemicals to treatment facilities. Chlorine is delivered to facilities via railways and highways and in various container sizes ranging from 1-ton cylinders to 90-ton railroad cars (see figs. 5 and 6). Although rail tank cars are designed to avoid leakage in the event of a derailment and can theoretically withstand a bullet from a typical handgun or rifle, one expert concluded that the “use of explosives to cause a rupture is well within the skill set of a terrorist.” Such an attack along a congested transportation corridor could have severe public health and safety impacts. One expert said that before converting from chlorine to alternative disinfection methods, a major wastewater treatment plant in Washington, D.C., received its chlorine supply via rail shipments that traversed the center of the city, close to the U.S. Capitol Building and across two military installations before reaching its final destination. Derailments of chlorine tankers could have major impacts in small communities as well, as occurred in Alberton, Montana, in April 1996. One of the five tankers that derailed ruptured and reportedly released more than 60 tons of chlorine. Subsequently, a toxic plume of chlorine gas drifted across the Clark Fork River, a major interstate, and surrounding residences. An estimated 1,000 people were evacuated, 350 people were hospitalized, and one person died. In addition to the vulnerability of chemicals stored at a wastewater treatment plant, experts also listed the key process components of the treatment plant as vulnerable. Specifically, more than half of the experts (29 of 50) identified one or more of these components as among the top five vulnerabilities. One expert explained that, historically, security was not a consideration in site selection or design of these facilities. While many utilities planned for natural disasters or vandalism, it was only after September 11 that many utilities considered how best to protect against potential terrorist attacks. While experts expressed concern over the security of the entire treatment plant, several identified the headworks as a component that is particularly vulnerable to attack, as well as critical to the treatment process. This unit is part of a plant’s primary treatment process, where wastewater carried through the collection system first enters the treatment plant.
It is here that large objects, such as cans, wood, and plastics, are removed from the wastewater stream. These structures may be open to the atmosphere and, according to one expert, are easy to attack. Experts explained that sabotage of the headworks could affect the proper working order of subsequent treatment processes and could cause the immediate interruption of the collection system, potentially restricting or completely blocking wastewater flow. As one expert noted, restricted flow could cause backups through the collection system, and the stagnant wastewater would become a public health hazard within hours, either through physical contact or through cross-contamination of drinking water supplies. Control systems were also listed as a key vulnerability by 18 of the 50 experts. Many wastewater systems are increasingly relying on the use of these control systems, including Supervisory Control and Data Acquisition (SCADA) networks, to serve functions ranging from storing and processing data to monitoring the system’s condition and controlling its operation. The primary role of SCADA systems is to monitor and control dispersed assets from a central location. According to one expert, “The backbone for process control is the SCADA system.” The expert explained that several factors contribute to the vulnerability of these controls, including typically nonsecured process control rooms at treatment plants, remote access to SCADA, and passwords shared among multiple users. Experts generally explained that an attack on these systems could interfere with critical operations. For example, one expert explained that an adversary could use SCADA systems to introduce either dangerously high or inadequate levels of chemicals; reduce biological treatment levels; or cause remote points along the collection system to fail. Although some facilities could operate their systems manually should the automated system fail or be compromised, others do not have the personnel or equipment to do so. For example, as one expert noted, large valves in modern plants are now typically operated electronically and seldom have manual operation components (see fig. 7). While SCADA networks offer operators increased flexibility and efficiency by controlling processes remotely, they were not designed with security in mind. The security of these systems is, therefore, often weak. According to our experts, while many facilities take advantage of their system’s flexibility, they often do not provide the necessary training on cyber security or implement security measures such as rotating passwords or securing network connections. Experts also explained that penetration of SCADA systems, particularly those that may be nonencrypted and accessed via the Internet, offers an especially easy point of access and control of a wastewater system. One expert provided an example of a breach in cyber security in 2000 when such a system in Australia was attacked, causing the release of thousands of gallons of raw sewage. While the actions were not an act of terrorism, they illustrate how a computer or cyber-related attack could be used to disrupt wastewater treatment. Sixteen of the 50 experts identified pumping stations, which are components that help convey sewage to the wastewater treatment plant, as among the top vulnerabilities.
Sixteen of the 50 experts identified pumping stations, which are components that help convey sewage to the wastewater treatment plant, as among the top vulnerabilities. One expert explained that destroying or disabling a pumping station could cause the collection system to overflow raw sewage into the streets and into surface waters and to back up sewage into homes and businesses. The expert added that adverse effects on public health and the environment are likely if the target pump station pumps several million gallons per day of wastewater. Another expert explained that, within one service area, a single pumping station has the capacity to pump 25 million gallons of wastewater per day. Experts explained that the remoteness and geographic distribution of pumping stations, and their lack of continuous surveillance, make them particularly vulnerable (see fig. 8). However, as one expert noted, should these stations be disabled or destroyed, alternatives such as “pump-around schemes,” where sewage flow is diverted and rerouted, can often be implemented within a few days or weeks.

In addition to the physical assets identified as among the greatest vulnerabilities of wastewater systems, some experts also identified vulnerabilities that may affect the overall security of the nation’s wastewater systems. First, they pointed out that wastewater utilities generally do not have a security culture because they are often more focused on operational efficiency and may, therefore, be reluctant to add security procedures and access control elements to their operations. For example, one expert noted the ease with which many types of individuals (employees, contractors, and visitors) and vehicles typically enter wastewater treatment plant facilities. As this expert pointed out, some facilities do not check to ensure that individuals entering the property have legitimate reasons for being there. This expert also raised a concern about the lack of inspection of incoming truckloads at some wastewater treatment plants. An adversary could exploit this lack of security by delivering contaminants or explosives to destroy the treatment process or the entire facility. In addition to concerns about entrance security, two experts noted that there is little background screening of utility employees. One expert noted, “People with criminal records, falsified educational credentials, and other serious liabilities might be hired by utilities that fail to thoroughly check their backgrounds. The result can be intentional acts of terrorism on a utility.”

Second, experts pointed to interdependencies among all major wastewater assets within the treatment system. The system as a whole relies on the proper working order of all its components to treat a community’s wastewater. One expert explained that, because treatment plants are less able to recover from an attack, they may have a higher level of security than other assets, such as the collection system. However, because collection and treatment are part of one integrated system, securing one asset does not ensure that the system as a whole is more protected. For example, gates and fences around the main treatment plant may stop an adversary from coming onto the physical property, but they will not prevent a harmful agent from entering the facility through the collection system—an event that could destroy the facility’s entire secondary treatment process.

Third, experts identified interdependencies between wastewater systems and other critical infrastructures. As several experts explained, disruptions in electric power, cyber systems, and transportation of treatment chemicals can result in a failure of wastewater treatment systems.
One expert cautioned that the interruption of the power grid could render the wastewater plant useless, noting, “Several hours without power would cause the biological treatment process to halt and wastewater would back up on the collection system.” Such an event occurred in 2003, when a major power failure caused treatment plants in Cleveland, Ohio, to release at least 60 million gallons of raw, untreated wastewater into receiving waters. Without electric power, operators had no option but to bypass treatment and directly discharge the untreated sewage into Lake Erie or the Cuyahoga River and other tributaries. Conversely, there are instances in which other infrastructure and activities may depend on treated wastewater to properly function. For example, in some parts of the country, effluent is reclaimed and used as cooling water for power generation, to recharge groundwater, or to water outdoor landscapes. One expert noted that wastewater treated at a plant in the arid Western United States is reclaimed and used to provide the only cooling source for a nuclear power plant that provides power for much of that region. According to the same expert, the immobilization of this treatment plant could, within a certain number of days, disable the nuclear plant, causing a major, multistate power outage.

Experts most frequently identified 11 specific activities to improve wastewater security as deserving high priority for federal support (see fig. 9). Three activities are particularly noteworthy because they were given a rating of highest priority by a substantial number of the experts. These activities include the following:

Replacing gaseous chemicals used in wastewater treatment with less hazardous alternatives. Experts viewed these actions as essential to reduce the vulnerability inherent in systems that rely upon the transport, storage, and use of potentially hazardous materials such as gaseous chlorine in their treatment processes. Several experts noted that replacement could be cost-prohibitive for many wastewater utilities and that it, therefore, warranted federal support.

Improving local, state, and regional collaboration efforts. Experts identified the development of strong working relationships among utilities and public safety agencies as critical to protecting wastewater infrastructure and system customers from potential threats. Some experts also noted that enhanced partnerships among these groups would result in improved response capabilities should a wastewater system be attacked.

Completing vulnerability assessments for individual wastewater systems. Experts cited these as necessary for utilities to understand their security weaknesses, to identify appropriate countermeasures, and to implement risk reduction strategies in a logical, coordinated manner.
The remaining eight activities that experts frequently rated as warranting high or highest priority for federal funding include (1) providing training to utility employees related to conducting vulnerability assessments and improving the security culture among employees; (2) improving national communication efforts between utilities and key entities responsible for homeland security; (3) installing early warning systems in collection systems to monitor for or detect sabotage; (4) hardening physical assets of treatment plants and collection systems; (5) strengthening operations and personnel procedures; (6) increasing research and development efforts toward improving threat detection, assessment, and response capabilities; (7) developing voluntary wastewater security standards and guidance documents; and (8) strengthening cyber security and SCADA systems.

Over half of the experts surveyed (29 of 50) rated the replacement of gaseous chemicals at wastewater treatment facilities with less hazardous alternatives as warranting highest priority for federal funding. Another 14 experts rated this activity as high priority. Experts reported that wastewater systems carrying out treatment processes using gaseous forms of chemicals, particularly chlorine, make themselves targets for terrorist attack. However, as one expert noted, changing disinfection technologies effectively devalues these facilities as targets for “weaponization” of their existing infrastructure. Several experts noted that some communities and utilities currently using gaseous chemical treatment processes have expressed interest in converting to an alternative treatment technology, but the financial costs associated with conversion remain prohibitive. However, one expert stated that replacing gaseous chemical treatment technology can actually result in certain offsetting cost savings. For example, the Blue Plains Wastewater Treatment Plant in Washington, D.C., employed around-the-clock police units prior to replacing its chlorine gas treatment process. Following conversion to a less hazardous treatment technology, Blue Plains found that it could reduce this security posture. In addition, the utility was able to reduce the need for certain emergency planning efforts and regulatory paperwork. Experts suggested alternative treatment technologies such as sodium hypochlorite (a solution of dissolved chlorine gas in sodium hydroxide) and ultraviolet disinfection. These alternative processes have been implemented at several facilities throughout the United States, including Washington, D.C.; Atlanta, Georgia; Philadelphia, Pennsylvania; Cincinnati, Ohio; Jacksonville, Florida; and Harahan, Louisiana. For an individual plant, the change to sodium hypochlorite may require approximately $12.5 million for new equipment and may increase annual chemical costs from $600,000 for gaseous chlorine to over $2 million for sodium hypochlorite.

Another expert suggested that reducing the size of containers used to transport and store gaseous chemicals could also prove an effective deterrent to terrorism. This approach is being implemented by a treatment plant in the Western United States, where gaseous chlorine is now stored in 1-ton canisters—a significant reduction from the 90-ton railroad tank cars the utility previously employed (see fig. 10).
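A rough, undiscounted illustration of what the conversion figures above imply for an individual plant can be sketched as follows; the 20-year horizon and constant prices are assumptions made purely for illustration, and the offsetting savings noted at Blue Plains (reduced security staffing, emergency planning, and paperwork) are excluded:

```latex
% Undiscounted illustration using the figures cited above; the 20-year
% horizon and constant prices are assumptions, and offsetting savings
% (e.g., reduced security staffing) are excluded.
\[
\Delta C_{\text{annual}} \gtrsim \$2.0\text{M} - \$0.6\text{M} = \$1.4\text{M per year}
\]
\[
C_{20\text{yr}} \gtrsim \$12.5\text{M} + 20 \times \$1.4\text{M} \approx \$40.5\text{M}
\]
```

Arithmetic of this kind helps explain why several experts viewed conversion as cost-prohibitive for many utilities without federal support.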
Twenty-three of 50 experts rated efforts to improve local, state, and regional collaboration as warranting highest priority for federal funding. Fifteen more experts rated this activity as high priority. Several experts noted the importance of establishing strong working relationships among utilities, local and state law enforcement agencies, fire departments, and other first response agencies in advance of a potential emergency situation. Many added that enhanced partnerships among these entities can yield significant benefits to wastewater utilities, including an increased ability to monitor critical infrastructure and facilities, improved understanding of agency roles and responsibilities, and faster response time to deal with potential security breaches. According to one expert, significant personnel and other resources devoted to emergency response are theoretically available to the wastewater sector. These resources include law enforcement agencies, fire departments, public health care facilities, environmental authorities, and other nonprofit and commercial entities. However, the expert noted that wastewater facilities remain largely disconnected from these entities, and wastewater facilities’ efforts for emergency response planning are, therefore, often undertaken independently. Consequently, emergency response teams do not gain a full understanding or appreciation of the unique challenges inherent in maintaining a utility’s wastewater treatment capability. This lack of collaboration perpetuates the community’s notion that sewers lead to a magical place where wastes simply “go away” without consequence, one expert suggested. The expert added that this misperception is demonstrated by a failure of some in the medical response community to adequately plan for proper disposal of waste resulting from decontamination efforts after a chemical, biological, or radiological event. Directly discharging such material to the wastewater influent stream could significantly damage or destroy the wastewater treatment process. Collaboration among local, state, and regional agencies should include periodic field and “tabletop” exercises to establish and reevaluate the roles, capabilities, and responsibilities of agencies that would respond to a terrorist event, according to one expert. Another identified the nonprofit California Utilities Emergency Association, an entity to which most utilities in that state belong, as an effective provider of communications, training, mutual aid coordination, and simulation exercises. The expert also cited the San Francisco Bay Area Security Information Collaborative as a successful example of regional collaboration in which participating water utilities coordinate communications, responses, and emergency planning. The Environmental Protection Agency (EPA) has provided funding for training on emergency response for wastewater utilities through agreements with the Wastewater Operator State Environmental Training Program, the Water Environment Federation, and other organizations. Through the Department of Homeland Security’s Office of Domestic Preparedness, EPA has funded emergency response tabletop exercise training for the nation’s larger wastewater utilities.

Twenty of 50 experts rated the completion of vulnerability assessments as warranting highest priority for federal funding. Fourteen other experts rated this activity as high priority. Vulnerability assessments help water utilities evaluate their susceptibility to potential threats and identify corrective actions to reduce or mitigate the risk of serious consequences from vandalism, insider sabotage, or terrorist attack.
One expert explained that this process enables a utility to evaluate its terrorist-related vulnerabilities and begin to implement security enhancement plans that directly address those identified vulnerabilities. Another added that the assessments also present useful findings that should be incorporated into a utility’s emergency response plan and that they enable an active process for updating and exercising those plans.

The Bioterrorism Act of 2002 required vulnerability assessments for drinking water utilities serving more than 3,300 people but did not include a comparable requirement for wastewater utilities. To foster the completion of vulnerability assessments among wastewater utilities, EPA has funded the development of vulnerability assessment methodologies and provided training to wastewater utilities. EPA has encouraged wastewater utilities to use methodologies such as those provided by the National Environmental Training Center for Small Communities on security and emergency planning, as well as the Vulnerability Self Assessment Tool (VSAT), developed and released by the Association of Metropolitan Sewerage Agencies. The VSAT methodology and accompanying software provide an interactive framework for utilities of all sizes to analyze security vulnerabilities to both manmade threats and natural disasters, evaluate potential countermeasures for these threats, and enhance response capability in the event of an emergency situation. This methodology has been continually updated and improved; VSAT Version 3.1 is currently available to utilities. Through EPA support, the Water Environment Federation has provided extensive training on the VSAT tool free of charge to wastewater utility operators and others involved in environmental protection, public safety, and security.
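This report does not describe VSAT’s internal methodology, so the following is only a generic sketch of the kind of asset-by-threat screening that such assessment tools support; the scales, weights, and asset names are hypothetical assumptions, not VSAT’s actual scheme.

```python
# Generic risk-screening sketch of the kind a vulnerability assessment tool
# supports. This is NOT the VSAT methodology; all scales and names are
# hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    consequence: int    # 1 (minor) .. 5 (plant-disabling), hypothetical scale
    vulnerability: int  # 1 (hardened) .. 5 (open access), hypothetical scale

def risk_score(asset: Asset, threat_likelihood: int) -> int:
    """Simple multiplicative screen: consequence x vulnerability x likelihood."""
    return asset.consequence * asset.vulnerability * threat_likelihood

assets = [
    Asset("headworks", consequence=5, vulnerability=4),
    Asset("chlorine storage", consequence=5, vulnerability=3),
    Asset("remote pump station", consequence=3, vulnerability=5),
]

# Rank assets under an assumed uniform threat likelihood of 3 (moderate).
for a in sorted(assets, key=lambda a: risk_score(a, 3), reverse=True):
    print(f"{a.name}: {risk_score(a, 3)}")
```

In screens of this kind it is the ranking of assets, rather than any absolute score, that feeds countermeasure selection and the emergency response plan.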
Thirteen of the 50 experts rated the expansion of training opportunities for utility personnel as warranting highest priority for federal funding, and an additional 27 experts suggested this activity warranted a high priority. According to experts, creating a security-minded culture among wastewater utilities is critical to building awareness of security vulnerabilities and implementing appropriate countermeasures. In particular, experts noted that wastewater system operators and administrators need to become better educated about the importance of focusing on security and emergency preparedness issues. Several experts suggested that managers should have a full understanding of potential types of terrorist attacks and the systems or mechanisms that could preclude or mitigate these events. They added that other parties, including boards of directors of wastewater systems, mayors, and city councils, need to be made aware of potential threats to wastewater systems and the impact a terrorist event could have upon a facility. One expert stated that successful development of security awareness among those associated with wastewater systems could mean the difference between simply installing security systems and actually becoming secure. Experts also stated that additional technical training for operators is necessary to ensure the security of wastewater systems. One noted that this type of training could avert a catastrophe by enabling a wastewater operator to recognize a pending disaster as early as possible. Another expert stated that increased technical training, particularly for smaller wastewater utilities, is necessary to ensure that funds for physical security enhancements are used to their maximum potential, thus achieving maximum benefit for the wastewater utility. One expert also suggested that devoting funding toward increased technical training will provide wastewater utility employees with the skills necessary for developing comprehensive vulnerability assessments and implementing emergency response plans before a terrorist attack.

Since 2002, EPA has provided more than $10 million to help address the security needs of the wastewater sector. A large portion of this funding has been awarded to nonprofit technical support and trade organizations to develop tools and training on conducting vulnerability assessments to reduce utility vulnerabilities and on planning for and practicing response to emergencies and incidents.

While only 8 of 50 experts rated efforts to improve communications between utilities and federal entities responsible for homeland security as warranting highest priority for federal funding, well over half of the experts surveyed (31 of 50) rated this activity as high priority. One expert stated that it is essential to develop an effective communications strategy that involves the broad range of stakeholders responsible for ensuring wastewater security. Another emphasized that wastewater utilities need timely and useful information from federal authorities about increased threat levels and protective actions that should be implemented. To improve national communications, EPA provided a grant to the Association of Metropolitan Water Agencies (AMWA) to develop the Water Information Sharing and Analysis Center (Water ISAC). The Water ISAC is a secure, Internet-based subscription service that provides time-sensitive information and expert analysis on threats to both wastewater and drinking water systems. It serves as a key link in the flow of water security information among utilities and federal homeland security, intelligence, law enforcement, public health, and environmental agencies. However, according to some experts, the Water ISAC does not sufficiently ensure adequate communication between federal agencies and utilities. One expert stated that despite a high reliance upon the Water ISAC by drinking water utilities, this communication vehicle has proven inadequate for meeting the needs of the broad range of stakeholders involved in protecting drinking water security. This expert added that the Water ISAC needs to be better developed if it is to be an essential part of a communications strategy for the wastewater sector. Another expert noted that several water utilities have avoided the Water ISAC because of the subscription fees associated with the service. In the fall of 2004, the Water ISAC announced a new communication tool known as the Water Security Channel, a password-protected site that electronically distributes federal advisories regarding threat information to the water sector. The Water Security Channel is free of charge to any wastewater or drinking water utility that wishes to participate. For its part, the Department of Homeland Security is implementing its Homeland Security Information Network (HSIN) initiative, which will provide a real-time, collaborative flow of threat information to state and local communities, as well as to individual sectors.
According to the department, this network will be the only tool available that provides collaborative communications among first responders, emergency services, government (local, state, and federal), and other sectors on a real-time basis. In addition, the department has established a Water Sector Coordinating Council to identify information and other needs of the sector, including the appropriate use of, and the relationships among, the Water ISAC, the Water Security Channel, and HSIN.

Seven of 50 experts rated the installation of early warning systems in collection systems to monitor for or detect sabotage as warranting highest priority for federal funding, and an additional 31 experts rated this activity as a high priority. A device these experts frequently mentioned to achieve some degree of monitoring and detection for explosive substances is the lower explosive limit (LEL) meter, which can be inserted into manholes and connected to central computers. One expert claimed LEL meters have significantly improved response time in mitigating the potential for structural damage resulting from explosions within the wastewater collection system. One expert also noted that disabling the biological processes occurring at a wastewater treatment plant would require a large amount of toxic compounds to be inserted into the collection system, but several experts stated that this possibility remains of concern because of the open access that collection systems afford. Many experts suggested that additional research is needed to develop early warning technologies that can sense the presence and concentration of these types of toxic compounds in the collection system and relay that information electronically to treatment operators.
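The centralized alerting that a network of manhole-mounted LEL meters implies can be sketched simply. In the following illustration, the 10-percent-of-LEL alert threshold is a common industry convention used here purely as an assumption, and the manhole identifiers and readings are invented.

```python
# Illustrative sketch: manhole sensors report readings as a percentage of the
# lower explosive limit (LEL), and a central monitor flags exceedances.
# The 10% threshold, manhole IDs, and readings are assumptions for illustration.

ALERT_THRESHOLD_PCT_LEL = 10.0

def check_readings(readings: dict[str, float]) -> list[str]:
    """Return the manholes whose %LEL reading meets or exceeds the threshold."""
    return [mh for mh, pct in readings.items() if pct >= ALERT_THRESHOLD_PCT_LEL]

latest = {"MH-104": 2.5, "MH-217": 14.8, "MH-350": 0.0}  # hypothetical data
for manhole in check_readings(latest):
    print(f"ALERT: {manhole} at {latest[manhole]}% LEL - dispatch response crew")
```

Comparable telemetry for toxic (rather than explosive) compounds is precisely the research gap the experts identified above.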
Eight of 50 experts rated physical hardening of treatment plants and collection systems as warranting highest priority for federal funding, and an additional 29 experts rated this activity as high priority. Experts stated that physically securing the perimeter of treatment plants and pumping stations with fences, locks, security cameras, alarm systems, motion detection systems, and other physical barriers can protect critical treatment components from direct attack or sabotage (see figs. 11 and 12). One expert noted that the more difficulty terrorists encounter in trying to reach critical targets in a wastewater system, the less frequently attacks will be attempted, and the smaller the impact will be if and when such attempts succeed. Furthermore, improvements to perimeter defenses surrounding wastewater treatment systems not only deter terrorist intruders but also restrict access by vandals, contributing to improved reliability of electronic surveillance systems. As one expert pointed out, physical hardening of assets can largely be accomplished with hardware that requires only minimal maintenance and replacement cost once installed. Other experts suggested that actions are needed to provide redundant capabilities to wastewater treatment systems. According to experts, additional power, pumping, and collection bypass systems would provide more reliable treatment capacity that would benefit the public not only in the event of terrorism but also during nonterrorist events (e.g., natural disasters, weather-related events, or interrelated infrastructure failures). Such actions could help ensure that wastewater systems maintain full treatment capabilities during a variety of unforeseen catastrophic events. Although one expert claimed that protecting the several hundred miles of sewers in a large urban system is virtually impossible, other experts suggested that design improvements and physical alterations could limit access to collection systems. Some experts suggested securing manhole covers with maintenance-friendly lockdown mechanisms. In addition, one expert suggested improving engineering designs for wastewater systems in ways that reduce the vulnerability risks posed by infrastructure cross-connections with other water systems.

Seven of 50 experts rated the strengthening of operations and personnel procedures at wastewater systems as warranting highest priority for federal funding, and an additional 24 experts rated this activity as a high priority. For example, one expert suggested that a highly efficient background check system should be available to water utilities to obtain accurate information on new and existing employees, contractors, and others who are working at vital facilities, such as wastewater treatment plants. This expert noted that access to such systems is afforded to airport administrators and certain law enforcement entities but is largely inaccessible to water utilities. Another expert stated that wastewater utilities need procedures to ensure the security of collection system maps and drawings, while also allowing reasonable access to them by contractors and developers. The expert suggested that maps could be electronically stored and password protected, with a regularly changed password. Another expert suggested that all employees and visitors have identification badges with photographs and electronic strips or sensors that regulate the points of access allowed by the badge.

Thirteen of 50 experts rated expanded research and development efforts to improve detection, assessment, and response capabilities for wastewater systems as warranting highest priority for federal funding, and an additional 17 experts suggested this activity warranted a high priority. One expert stated that new technologies are needed in the wastewater sector to better protect physical assets by providing reliable surveillance and detection capabilities with a minimal need for on-site, around-the-clock security personnel. According to another expert, technologies currently in development for drinking water utilities could potentially be adapted for use by wastewater utilities. These technologies would need to detect hazardous chemical, biological, or radioactive contaminants while operating in the harsh environment of common, everyday contaminants found in sewage. Also, improved computer mapping systems tracking the course and speed of sewage flow could greatly enhance emergency response activities, including evacuations, dilution of harmful substances that have been introduced to the sewage flow, and venting of volatile materials. EPA’s Office of Research and Development has recently funded research that is intended to address many of these needs. According to an official with EPA’s Water Security Division, while these efforts have been primarily directed toward drinking water security research, some of EPA’s research findings can be applied to wastewater security. EPA has also developed a water security research and technical support action plan that outlines various research and technical support needs that the water industry and other stakeholders have identified.
The plan also proposes specific projects to address these needs, and EPA has begun work on some of these projects in collaboration with the Water Environment Research Foundation and the American Water Works Association Research Foundation. These nonprofit research organizations have received funding to address a variety of wastewater security research projects, such as assessing new security technologies to detect and monitor contaminants and prevent security breaches. According to EPA, other issues being addressed include public health protection, vulnerability and protection of water and wastewater infrastructure, and communication in the event of deliberate attacks or natural disasters.

Four of 50 experts rated the development of voluntary wastewater security standards and guidance documents as warranting highest priority for federal funding, and half of the experts surveyed (25 of 50) gave this activity a high priority rating. Experts identified options including development and issuance of voluntary standards for security of wastewater facilities (including design standards), a peer review process to evaluate the quality of wastewater utilities’ vulnerability assessments and emergency response plans, and creation of a secure Web site that disseminates lessons learned by utilities throughout the various phases and processes related to protecting wastewater security. One expert suggested that developing government standards for the security of all new facilities would help increase the overall ability of wastewater systems to withstand threats. The expert stated that such standards should lay out minimum protection requirements and provide a framework of threats utilities should consider when completing vulnerability assessments. Another expert suggested that, because water utilities seek guidance from the federal government on whether their individual treatment plants are secure, one option, in lieu of site visits by EPA, might be a peer review process of vulnerability assessments and emergency response plans across wastewater utilities. Development of a secure Web site for wastewater utilities that includes lessons learned from assessments, planning, training, and incident responses could also provide valuable guidance for wastewater utilities, one expert noted. EPA recently commissioned a study by the National Drinking Water Advisory Council’s Water Security Working Group to address some of these needs. The group’s charge is to identify (1) the features of an active and effective security program for drinking water and wastewater utilities; (2) incentives that would encourage water utilities to implement features of the security program; and (3) ways to measure the extent of utility implementation of the security program. In addition, in September 2003, EPA provided funding to the American Society of Civil Engineers to develop voluntary security standards for drinking water, wastewater, and stormwater utilities, which were released in December 2004 as interim standards. A training module is planned for spring 2005.

Five of 50 experts rated efforts to improve cyber security and SCADA systems as warranting highest priority for federal funding, and an additional 22 experts gave this activity a high priority rating. According to one expert, measures should be taken to minimize access to these systems by improving the security capabilities of hardware systems and software applications, as well as by implementing appropriate information technology security policies at wastewater utilities.
One other expert suggested that the federal government invest in programs designed to create, accelerate, and deploy minimally acceptable cyber security standards for all automated systems where a compromising event could place a surrounding population at risk. This expert noted that the need for cyber security standards is not limited to wastewater systems but stated that the particular needs and characteristics of these utilities should be considered as the standards are developed.

Numerous wastewater utilities have begun to address security concerns by completing vulnerability assessments or by undertaking security upgrades. To date, most security initiatives have been financed by reallocating funds from other important utility activities or by embedding security into ongoing operations. According to industry representatives, utilities may ultimately have no choice but to pass these costs along to their customers through rate increases. Given the cost of these security actions, however, many in the utility industry believe federal assistance through the congressional appropriations process is warranted. Experts do not all agree that the wastewater industry as a whole should receive funding priority, noting that other sectors such as electricity or transportation may warrant higher priority. Indeed, while the vast majority of our experts did support federal security funding for wastewater utilities, some voiced dissenting opinions on the matter. Nonetheless, should Congress and the administration agree to a request for funds, they will need to address key issues concerning who should receive the funds and how they should be distributed. With this in mind, we asked our panel of experts to focus on (1) the types of utilities that should receive funding priority and (2) the most effective mechanisms for directing these funds to potential recipients. Overall, we found a high degree of consensus on the following:

Thirty-nine of the 50 experts indicated that utilities serving critical infrastructure (including government, commercial, industrial, and public health centers) should be given highest priority for federal funding.

Half of the experts gave utilities using large quantities of gaseous chemicals a rating of highest priority, while just under half of the experts gave the same rating to utilities serving large populations.

Direct federal grants are the most favored funding mechanism, with many experts indicating the circumstances in which such grants should or should not include matching funds from the recipient. Many favored direct grants without a matching requirement for a wide variety of planning and coordination activities, such as completing vulnerability assessments, conducting training, and developing standards and guidance. Cost-shared grants were favored for activities that benefit individual utilities, such as strengthening operation and personnel procedures, installing early warning systems in collection systems, and hardening physical assets.

The experts identified several characteristics of utilities that should be used to set funding priorities.
The most frequently identified were utilities (1) serving critical infrastructure, including government, commercial, industrial, and public health centers; (2) using large quantities of gaseous chemicals; (3) serving areas with large populations; (4) where a security breach would adversely impact environmental resources (e.g., receiving waters); (5) having completed vulnerability assessments; (6) serving areas with medium or small populations; and (7) serving buildings, monuments, parks, tourist attractions, or other entities that have symbolic value (see fig. 13).

More than three-quarters of the experts (39 of 50) gave utilities serving critical infrastructure a highest priority rating. An additional 10 experts gave these utilities a rating of high priority. These utilities provide service to institutions that serve as hubs for government activity; commercial and industrial centers, such as a city’s financial district, power plants, or major airports; and public health institutions, such as major medical centers and hospitals. As one expert commented, “while every wastewater system is a potential target, it seems prudent to assume that the larger the system or the criticality of facilities served, the greater the potential impact and hence the more likely the target.” Most experts shared this view, including one who said the highest priority should go to “the impact the loss of the treatment facility would have on other vital services,” such as providing cooling water for a nuclear or steam generating power plant. Some experts said that systems with heavy commercial and industrial usage are critical to the country’s economic stability, and any major or sustained disruption could have severe economic as well as public health consequences. For example, one expert pointed out that the loss of a wastewater treatment plant serving critical industrial customers, such as the computer chip manufacturing sector, could cost the economy millions of dollars per day.

More than half of the experts (26 of 50) gave a rating of highest priority for funding of utilities using large quantities of gaseous chemicals. An additional 18 experts rated these utilities as warranting a high priority for federal funds. Some experts pointed out that many wastewater treatment plants use large quantities of elemental chlorine and other toxic materials which, if released to the atmosphere on-site or during transport to the site, would necessitate widespread evacuations and possibly cause injuries and fatalities. Several experts pointed out that EPA’s Risk Management Planning program requires industrial facilities that use threshold amounts of certain extremely hazardous substances to self-identify their worst-case chemical release scenarios. An expert cautioned, however, that funds should not be provided to utilities for converting to less hazardous chemicals (e.g., sodium hypochlorite) when other utilities have already adopted, or are currently evaluating, disinfection options that pose little or no security, worker, or public health risk.

Almost half of the experts (24 of 50) gave a rating of highest priority to utilities serving areas with large populations. Seventeen additional experts rated these utilities as warranting a high priority for federal funds. Many experts shared the view that providing financial and technical assistance to the largest treatment plants would protect the greatest number of people. One expert pointed to EPA’s 2000 Clean Water Needs Survey, which indicated that about 70 percent of the nation’s sewered population is served by the 3,500 largest wastewater facilities (out of a total of 16,000 facilities), each of which maintains a flow greater than 1 million gallons per day.
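In percentage terms, the survey figures cited by this expert work out as follows:

```latex
\[
\frac{3{,}500}{16{,}000} \approx 22\% \ \text{of facilities serve} \ \approx 70\% \ \text{of the nation's sewered population.}
\]
```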
Thus, this expert concluded, funding the largest plants would provide benefits to the greatest number of people. Finally, a number of experts suggested that because terrorists are likely to seek to maximize the number of people killed or injured by their attacks, they may try to strike systems serving many customers in large metropolitan areas.

While only four experts gave a rating of highest priority to utilities where a security breach would adversely impact environmental resources, 28 of the experts rated these utilities as warranting a high priority. Several experts pointed out the potential for a negative impact on the environment and public health if raw sewage overflows into receiving bodies of water. One expert commented that many wastewater treatment plants discharge highly treated effluent to rivers upstream of the intakes of water treatment plants serving downstream cities. Damage to these wastewater treatment plants could cause the discharge of raw sewage that would be only partially diluted before it reached the intakes of the downstream drinking water treatment plants. Experts also cited significant potential effects on the environment. Some mentioned that the discharge of untreated sewage could impact beaches, critical habitats, or fisheries, causing economic damage in addition to negative environmental and public health effects.

Three of the experts gave a highest priority rating to utilities that have completed vulnerability assessments (VAs). An additional 18 experts gave these utilities a high priority rating. Some experts said that only utilities that have completed VAs should be given federal funding. Other experts pointed out that there should be federal funding for those utilities that have not yet completed VAs so that they can complete this key task. As one expert commented, a key benefit of conducting a vulnerability assessment of a wastewater system is that it allows the areas of greatest need to be identified. Properly conducted, a vulnerability assessment brings in all the necessary divisions within a plant, including operations, information technology, and management, as well as external forces such as fire departments and local police. Should a plant demonstrate that it has conducted such an assessment, that plant would be much more likely to use federal funding efficiently, this expert added.

Eight of the 50 experts rated utilities serving areas with medium or small populations as a high priority for federal funding. An additional 27 experts rated these utilities as a medium priority. One expert pointed out that such facilities are least able to afford security enhancements or to acquire security expertise and, therefore, may be in need of federal support. The relatively small number of experts giving a high or highest priority rating to utilities serving areas with medium or small populations may not fully reflect the concern among some experts for the safety of these utilities. For example, some who gave a higher priority rating to utilities serving areas with large populations suggested that the need for federal support should be an important associated criterion, regardless of system size.
Accordingly, these experts said that some funding could be justified for both large and small populations based on need. One expert favored a bifurcated approach, with one effort seeking to ensure minimal levels of security for all utilities and a second, more intensive effort focusing on systems serving larger populations.

Only one expert gave a highest priority rating to utilities serving buildings, monuments, parks, tourist attractions, or other entities that have symbolic value. An additional 10 experts rated these utilities as warranting a high priority. One expert commented that terrorists have already shown that they want to cause serious economic damage by disrupting tourism. Another noted that terrorists have also targeted cities that have stadiums, convention centers, and other attractions where large numbers of people gather.

When we asked the experts to identify how best to distribute federal funds that may be made available to utilities to address wastewater security, they overwhelmingly indicated that direct federal grants to utilities would be the most effective mechanism. The experts also indicated that grants in which some type of match is required of recipients would be effective. Relatively fewer experts indicated that the use of trust funds or the Clean Water State Revolving Fund, particularly for upgrades to be implemented in the short term, would be effective. Other mechanisms that were rated as less effective included loans or loan guarantees and tax incentives for private utilities. Figure 14 shows how experts rated six different mechanisms for funding wastewater security.

Thirty-four of the 50 experts indicated that direct federal grants to utilities would be very effective in allocating federal funds. An additional 12 said these mechanisms would be somewhat effective in doing so. Experts expressed a variety of views regarding how best to implement these grants. For example, some cautioned that a grant program for wastewater security should be solely dedicated to the protection of wastewater infrastructure, rather than being consolidated with other programs, such as grants for enhancing homeland security. One said that, contrary to the way grant programs usually operate, utilities should be allowed to apply for grants during project implementation or even after the project is completed; this could reward those who were proactively addressing their security needs. Among other suggestions, one expert said that EPA and the Department of Homeland Security (DHS) should collaborate on allocating these grant funds. This expert stated that “EPA has technical knowledge about facility operations that is especially important and DHS has grant funds for homeland security that could be quickly made available until Congress approves a special allocation.” Some experts also commented that direct grants are preferable because they are more likely to result quickly in safety improvements and other desired changes. Experts also offered opinions on situations in which it would be appropriate to offer a grant without requiring a matching contribution from the recipient. Many, for example, favored direct grants with no match for activities that benefit multiple utilities or that should be addressed in the near term.
Such actions would include conducting research and development to improve detection, developing voluntary wastewater security standards and guidance, completing vulnerability assessments, and providing training to utility security personnel on how best to conduct vulnerability assessments and improve the security culture.

Thirty of the 50 experts indicated that grants with a matching requirement (cost-shared grants) would be very effective as a mechanism for providing funds to wastewater utilities. An additional 16 rated such grants as somewhat effective. Experts generally favored cost-shared grants for activities that benefit individual utilities. For example, 38 of the 50 experts indicated that cost-shared grants were best for strengthening operation and personnel procedures, such as securing sewer maps and conducting background checks on new employees. Almost three-quarters of the experts (36 of 50) indicated that cost-shared grants were also best for installing early warning systems in collection systems to monitor for or detect sabotage. Similarly, 32 of the 50 experts indicated that cost-shared grants would be best for improving cyber security and for activities required to harden physical assets, such as building fences, installing locks, and securing manhole covers.

The Clean Water State Revolving Fund (CWSRF) is an EPA-administered program that provides grants to the states to allow them to assist publicly owned wastewater utilities. States, in turn, use the funds to provide loans to participating wastewater utilities to assist them in making infrastructure improvements needed to protect public health and ensure compliance with the Clean Water Act. Five experts indicated that the CWSRF would be a very effective funding mechanism to improve wastewater security. An additional 35 indicated that it would be somewhat effective. According to an EPA fact sheet, states may use the CWSRF to assist utilities in completing a variety of security-related actions, such as vulnerability assessments, contingency plans, and emergency response plans. In addition, the fact sheet identifies other infrastructure improvements that may be eligible for CWSRF funds, such as the conversion from gaseous chemicals to alternative treatment processes, installation of fencing or security cameras, securing of large sanitary sewers, and installation of tamper-proof manholes. Some experts said that the advantage of the CWSRF is its ability to leverage appropriated federal funds, thereby enabling it to assist more facilities than direct federal grants. A number of experts, however, expressed caution about relying heavily on the CWSRF to support security enhancements. Several questioned whether the CWSRF was appropriate in an environment where quick, emergency-related decisions were needed, noting that the administrative process in applying for and receiving the funds can be lengthy. Another noted that the CWSRF “was not originally established to deal with security-related projects,” and that the program therefore “either needs to be fixed to deal with security issues or a separate program needs to be created specifically for security projects.” Another expert noted that unless additional security-related monies were added to existing CWSRF levels, it would divert much-needed funding away from the kind of critical infrastructure investments that have been the CWSRF’s primary purpose.
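The “leverage” that experts attributed to the CWSRF can be illustrated with a stylized revolving-fund model. The parameters below (a 20-year level-repayment loan term, no interest, no defaults, and full re-lending of each year’s repayments) are simplifying assumptions made for illustration, not features of the actual program.

```python
# Stylized revolving-fund illustration: a one-time capitalization is lent out,
# and repayments are re-lent in later years. Assumptions (not from the report):
# 20-year level repayments, no interest, no defaults, full annual re-lending.

def cumulative_lending(capitalization: float, term_years: int, horizon: int) -> float:
    available = capitalization  # funds on hand to lend this year
    streams = []                # (annual repayment, repayment years remaining)
    total_lent = 0.0
    for _ in range(horizon):
        total_lent += available                       # lend everything on hand
        streams.append((available / term_years, term_years))
        available = sum(pmt for pmt, yrs in streams)  # repayments collected
        streams = [(pmt, yrs - 1) for pmt, yrs in streams if yrs > 1]
    return total_lent

# Under these assumptions, a $100 million capitalization supports roughly
# $250 million of cumulative loans over 20 years - about 2.5 times the
# one-time federal outlay.
print(round(cumulative_lending(100.0, term_years=20, horizon=20), 1))
```

This recycling effect is why a revolving fund can reach more facilities per appropriated dollar than one-time grants, at the cost of the slower, loan-based disbursement the experts cautioned about.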
Loans are a disbursement of funds by the government to a nonfederal borrower under a contract that requires the repayment of such funds with or without interest. Loan guarantees represent a nonfederal loan to which a federal guarantee is attached. Only one expert indicated that loans and loan guarantees would be very effective mechanisms for providing federal support for wastewater security. An additional 34, however, indicated they would be somewhat effective. Generally, these experts cited the primary advantage of loans or loan guarantees as offering communities the option to amortize security-related costs over an extended period of time, while minimizing the overall cost to the federal treasury. Another expert commented that a low-interest loan could provide some incentive and needed capital to implement security programs. A number of experts, however, expressed reservations. One cautioned that the establishment of any federal loan program to support wastewater security needs should not come at the expense of federal support for the CWSRF, given the critical infrastructure needs that already depend on it for support. Another questioned the value of loans to utilities already strapped for funds, noting that “while loans have less impact on the federal government, many wastewater utilities and local governments generally carry a heavy debt load for capital improvements, and they cannot add significant additional debt that could affect their bond ratings.”

Federal trust funds are accounting mechanisms used to link receipts (from particular taxes or other sources) that by law have been dedicated for a specific purpose or program, such as for infrastructure improvement. For example, such a mechanism is in place for the transportation sector through the Highway Trust Fund. Eight experts indicated that trust funds would be a very effective mechanism for distributing funds for the wastewater security sector. An additional 7 said they would be somewhat effective. However, almost half of the experts (24 of 50) indicated that they either had no opinion on this subject or that trust funds were “neither effective nor ineffective.” Experts raised a number of issues as to how the trust fund concept would be implemented. A key consideration was whether the fund would be dedicated solely to wastewater security needs, or be part of a broader fund that serves other wastewater infrastructure needs. One expert suggested that, if wastewater security needs have to compete with the broader range of the wastewater industry’s infrastructure needs, they may not receive sufficient priority to be funded adequately. Another expert suggested that a trust fund should be supported annually by the federal government and local wastewater utilities, and administered in a manner similar to the former Wastewater Construction Grants program that funded wastewater construction. This expert indicated that the fund should be used exclusively for enhancing wastewater security.

Federal tax-based incentives may include new tax credits for spending on security improvements and the existing exemptions from federal income tax of interest income from state and local government bonds. One expert indicated that tax incentives are very effective, and an additional 14 said they are somewhat effective.
Notably, 20 experts indicated that tax-based incentives would be very ineffective—a result due in part to the fact that most wastewater utilities are publicly owned and operated and would, therefore, not benefit from tax-based incentives, such as tax credits used to reduce federal income tax. Nonetheless, some experts said that for the smaller proportion of privately owned systems, tax-based incentives could be beneficial and particularly efficient. One expert noted, for example, that “in those cases where the wastewater treatment facility is privately owned, nothing succeeds as well as tax incentives.” Recognizing the diversity of wastewater systems, this expert stated further that the owners know their utility better than anyone and are best able to achieve results in a more cost-effective way if they are given incentives.

To date, the federal government’s role in promoting wastewater security has been limited primarily to supporting various training activities on completing vulnerability assessments and emergency response plans and several research projects addressing how contaminants affect treatment systems and other areas. However, legislation supporting an expanded federal role, including a substantially greater financial commitment, has been proposed in the past and may be considered again in the future. Should such funds be appropriated, key judgments about which recipients should get funding priority, and how those funds should be spent, will have to be made in the face of great uncertainty about the likely target of an attack (i.e., a large but well-protected facility versus a smaller but less-protected facility); the nature of an attack (cyber, chemical, biological, radiological); and its timing. The experts on our panel have taken these uncertainties into account in deriving their own judgments about these issues. These views, while not unanimous, suggest some degree of consensus on a number of key issues. We recognize that such sensitive decisions ultimately must take into account a variety of political, equity, and other considerations. We believe they should also consider the judgments of the nation's most experienced individuals on these matters, such as those included on this panel. It is in this context that we offer these results as an input into the decision-making process that Congress and the administration will likely go through as they seek to determine how best to use limited financial resources to reduce the vulnerability of the nation's wastewater utilities. | Since the events of September 11, 2001, the security of the nation's drinking water and wastewater infrastructure has received increased attention from Congress and the executive branch. Wastewater facilities in the United States provide essential services to residential, commercial, and industrial users by collecting and treating wastewater and discharging it into receiving waters. These facilities, however, may possess certain characteristics that terrorists could exploit either to impair the wastewater treatment process or to damage surrounding communities and infrastructure. GAO was asked to obtain experts' views on (1) the key security-related vulnerabilities affecting the nation's wastewater systems, (2) the activities the federal government should support to improve wastewater security, and (3) the criteria that should be used to determine how any federal funds are allocated to improve security, and the best methods to distribute these funds.
GAO conducted a systematic, Web-based survey of 50 nationally recognized experts to seek consensus on these key wastewater security issues. EPA expressed general agreement with the report, citing its value as the agency works with its partners to better secure the nation's critical wastewater infrastructure.

Experts identified the collection system's network of sewer lines as the most vulnerable asset of a wastewater utility. Experts stated that the sewers could be used either as a means to covertly gain access to surrounding buildings or as a conduit to inject hazardous substances that could impair a wastewater treatment plant's capabilities. Among the other vulnerabilities most frequently cited were the storage and transportation of chemicals used in the wastewater treatment process and the automated systems that control many vital operations. In addition, experts described a number of vulnerabilities not specific to particular assets but which may also affect the security of wastewater facilities. These vulnerabilities include a general lack of security awareness among wastewater facility staff and administrators, interdependencies among various wastewater facility components leading to the possibility that the disruption of a single component could take down the entire system, and interdependencies between wastewater facilities and other critical infrastructures.

Experts identified several key activities as most deserving of federal funds to improve wastewater facilities' security. Among those most frequently cited was the replacement of gaseous chemicals used in the disinfection process with less hazardous alternatives. This activity was rated as warranting highest priority for federal funding by 29 of 50 experts. Other security-enhancing activities most often rated as warranting highest priority included improving local, state, and regional collaboration (23 of 50 experts) and supporting facilities' efforts to comprehensively assess their vulnerabilities (20 of 50 experts).

When asked how federal wastewater security funds should be allocated among potential recipients, the vast majority of experts suggested that wastewater utilities serving critical infrastructure (e.g., public health institutions, government, commercial and industrial centers) should be given highest priority (39 of 50). Other recipients warranting highest priority included utilities using large quantities of gaseous chemicals (26 of 50) and utilities serving areas with large populations (24 of 50). Experts identified direct federal grants as the most effective method to distribute the funds, noting particular circumstances in which a matching contribution should be sought from recipients. Specifically, a matching requirement was often recommended to fund activities that benefit individual utilities. Grants with no matching requirements were often recommended for activities that should be implemented more quickly and would benefit multiple utilities. The other funding mechanisms experts mentioned most frequently included the federal Clean Water State Revolving Fund, loans or loan guarantees, trust funds, and tax incentives. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Congress funds NNSA’s modernization efforts through various programs and activities within the Weapons Activities appropriations account that generally address the following four areas: The stockpile area includes weapons refurbishments through LEPs and other major weapons alterations and modifications; surveillance efforts to evaluate the condition, safety, and reliability of stockpiled weapons; maintenance efforts to perform certain minor weapons alterations or to replace components that have limited lifetimes; and core activities to support these efforts, such as maintaining base capabilities to produce uranium and plutonium components. NNSA allocates funds to activities that directly support the stockpile area through Directed Stockpile Work within the Weapons Activities appropriation account. The infrastructure area includes government-owned, leased, and permitted physical infrastructure and facilities supporting weapons activities. NNSA’s 2016 nuclear security budget materials include information on two major types of infrastructure activities: (1) Infrastructure and Safety and (2) Readiness in Technical Base and Facilities, which includes two major construction projects. First, the Uranium Processing Facility is a construction project to replace enriched uranium capabilities currently located in the aging Building 9212 at the Y-12 National Security Complex. This project is part of a larger strategy to maintain NNSA’s enriched uranium capability by relocating enriched uranium operations performed in Building 9212 into other existing buildings by 2025 and by constructing a series of smaller buildings. Second, the Chemistry and Metallurgy Research Replacement construction project at Los Alamos National Laboratory, which is part of NNSA’s broader plutonium infrastructure strategy, is composed of subprojects to move analytical chemistry and materials characterization capabilities into two existing facilities. NNSA’s broader plutonium infrastructure strategy also includes the construction of at least two additional modular structures that the Fiscal Year 2016 Stockpile Stewardship and Management Plan reports will achieve operating capacity by 2027. The Uranium Processing Facility and the Chemistry and Metallurgy Research Replacement construction projects are both part of NNSA’s major modernization efforts. The research, development, testing, and evaluation area is composed of programs that are technically challenging, multiyear, multifunctional efforts to develop and maintain critical science and engineering capabilities. These capabilities enable the annual assessment of the safety and reliability of the stockpile, improve understanding of the physics and materials science associated with nuclear weapons, and support the development of code-based models that replace underground testing. The other weapons activities area includes budget estimates associated with nuclear weapon security and transportation, as well as legacy contractor pensions, among other things. The four areas are interconnected. For example, experiments funded under the research, development, testing, and evaluation program area can contribute to the design and production of refurbished weapons, which is funded under the stockpile program area. 
The infrastructure program area offers critical support to both the stockpile and the research, development, testing, and evaluation program areas by providing a suitable environment for their various activities, such as producing weapons components and performing research and experimentation activities. The U.S. nuclear weapons stockpile is composed of seven different weapons types, including air-delivered bombs, ballistic missile warheads, and cruise missile warheads (see table 1). NNSA’s 2016 budget estimates for modernization total $297.6 billion over 25 years, which is a slight increase from the 2015 estimates of $293.4 billion; however, for certain program areas or individual programs, budget estimates changed more significantly. The overall increase was moderated by a shift of two counterterrorism programs to another area of NNSA’s budget. Program areas increased by as much as 13.2 percent or decreased by as much as 18.1 percent. Within the stockpile program area, which experienced the biggest increase, budget estimates for some LEPs and an alteration increased significantly because of changes in production schedules and scope, among other things. According to the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA’s estimates for the next 25 years total $297.6 billion for modernization activities—an increase of approximately $4.2 billion, or 1.4 percent (in nominal, or current dollar, values), from the $293.4 billion NNSA reported in the 2015 plan. These budget estimates, which are for activities in the Weapons Activities area, are provided in the four areas discussed above: stockpile; infrastructure; research, development, testing, and evaluation; and other weapons activities. The overall increase was moderated by the shift of two counterterrorism programs from the Weapons Activities budget into NNSA’s separate Defense Nuclear Nonproliferation budget. The two counterterrorism programs that were moved out of the Weapons Activities budget together totaled approximately $8 billion. According to NNSA’s 2016 budget justification, this realignment is intended to provide greater clarity regarding the total funding and level of activity in the counterterrorism area. The realignment of these programs, along with other smaller decreases in the other weapons activities category, together accounted for an 18.1 percent decrease in the other weapons activities category during the 25-year period covered by the plan. Without the realignment of the two counterterrorism programs, the increase in NNSA’s overall Weapons Activities budget in the 2016 plan would have been considerably larger, totaling approximately $12.3 billion, or 4.2 percent, over the 2015 Weapons Activities budget. Table 2 details the changes in NNSA’s 25-year budget estimates from 2015 to 2016 for the four main areas in which modernization efforts are funded under Weapons Activities. In addition, budget estimates changed significantly for certain program areas and individual programs. Notably, the 2016 budget materials estimate that during the next 25 years, $117.2 billion will be needed for the stockpile area, which is an increase of $13.7 billion, or 13.2 percent, over the prior year’s budget materials. Part of this increase resulted from the addition of approximately $3 billion to support the Domestic Uranium Enrichment program, as well as increases in estimates for weapons refurbishment activities, particularly LEPs, as discussed later in this report. 
The 2016 budget materials indicate a decrease of approximately $1.8 billion for infrastructure activities during the next 25 years, compared with the 2015 estimates, in part because of reductions in recapitalization and site operation budget estimates. The 2016 budget materials increased proposed spending on research, development, testing, and evaluation activities by approximately $900 million during the same period. This increase resulted in part from an increase in estimates for the Inertial Confinement Fusion Ignition and High Yield program. Budget estimates in the Fiscal Year 2015 Stockpile Stewardship and Management Plan cover 2015 to 2039, while those in the 2016 plan cover 2016 to 2040. We compared the two sets of estimates by summing up the current dollar values for each, which is how NNSA reports the estimates. The total from the 2016 plan is different from the 2015 plan’s total in that the former includes the year 2040 and excludes the year 2015. Because of the effect of inflation, this comparison could make the difference between the 2016 projection and the 2015 projection appear higher than it would be in the case of a comparison of the two series in real dollar values or in a comparison that looks strictly at the years that overlap from each plan. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, estimates for some major modernization projects increased significantly from those in 2015. Specifically, regarding the weapons refurbishment efforts—which are captured within the stockpile category in the budget— the 2016 budget materials indicate that during the next 25 years, $49.8 billion will be needed to support LEPs and other weapons alteration activities, which is an increase of $8.2 billion, or 19.6 percent, compared with the prior year’s estimate of $41.7 billion. This increase resulted partly from the change in the scope and schedule for some programs, as discussed below. The W88 Alteration 370 effort expanded to include a conventional high explosive replacement while retaining the original schedule for a first production unit in 2020. To support this replacement, NNSA shifted planned spending for other programs—including $15.1 million originally planned for the W76-1 LEP—toward this effort. The Fiscal Year 2016 Stockpile Stewardship and Management Plan reported that the agency also shifted planned spending intended for surveillance of B61 and B83 bombs into the conventional high explosive replacement effort. The Fiscal Year 2016 Stockpile Stewardship and Management Plan estimated the total cost for the W88 Alteration 370 at $2 billion over the 25-year period covered by the plan, while the 2015 plan estimated the total cost at $1.2 billion, for an increase of approximately $0.8 billion. The cruise missile warhead LEP (renamed the W80-4 LEP) now has a first production unit planned for 2025—2 years earlier than the first production unit in the 2015 plan. This shift in schedule is intended to align with revised Air Force plans for the carrier missile. The Fiscal Year 2016 Stockpile Stewardship and Management Plan estimated the total cost for the LEP at $8.2 billion over 25 years, while the 2015 plan estimated the total cost at $6.8 billion, for an increase of approximately $1.5 billion. The Fiscal Year 2016 Stockpile Stewardship and Management Plan included a budget estimate for the B61-13 LEP that did not appear in the 2015 plan. 
This LEP, which NNSA officials stated is intended to replace the B61-12 LEP, is currently planned to begin in 2038, with an estimated cost of approximately $1.2 billion from 2038 through 2040. Budget estimates for the three interoperable warhead LEPs—the IW-1, 2, and 3—together accounted for an increase of $5.6 billion over 25 years when compared with the Fiscal Year 2015 Stockpile Stewardship and Management Plan budget estimates. According to the plan, this increase resulted from updated estimates developed through an expanded methodology that incorporated additional stakeholder input into the process that NNSA used to arrive at the estimates, and which resulted in a better understanding of schedule and cost uncertainty. NNSA officials stated that they continue to use stakeholder input to update and assess the cost estimate methodology. The budget estimates for the B61-12 and W76-1 LEPs together accounted for a decrease of almost $1 billion when compared with 2015 estimates. NNSA officials stated that this decrease is the result of the LEPs’ costs winding down as the programs come to an end. Table 3 shows the changes in budget estimates for the weapons refurbishment activities under way during the 25-year period covered by the Fiscal Year 2016 Stockpile Stewardship and Management Plan. Milestone dates for most major modernization projects generally remained the same in the 2016 plan compared with the previous year. The 2010 Nuclear Posture Review included discussion of a number of planned major modernization efforts for NNSA, while other efforts have been identified in later versions of the Stockpile Stewardship and Management Plan and in the 2011 update to the DOD-DOE joint report. Table 4 shows key milestone dates for LEPs and major construction efforts as they have changed since 2010. Estimates for the two major construction projects we reviewed—the Uranium Processing Facility and the Chemistry and Metallurgy Research Replacement construction project—either did not change or saw reduced estimates along with a recategorization of costs. These projects, included in the infrastructure category in NNSA’s budget materials, support NNSA’s uranium and plutonium strategies, respectively. The Uranium Processing Facility project budget line in the Fiscal Year 2016 Stockpile Stewardship and Management Plan stayed the same as reported in the 2015 plan, with a total estimated budget of $5.2 billion from 2015 through the project’s planned completion in 2025. The 2016 budget estimates for the Chemistry and Metallurgy Research Replacement construction project decreased, and in comparison to the 2015 budget materials, these estimates also shifted from one budget category to another. The Fiscal Year 2015 Stockpile Stewardship and Management Plan included a line for budget estimates for this project; however, the estimates were zero for each year except for 2012. The 2015 plan included budget estimates that totaled $3.1 billion in the program readiness subcategory under the infrastructure category, which NNSA officials stated were ultimately intended for the Chemistry and Metallurgy Research Replacement construction project. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA shifted $1.7 billion in planned spending out of program readiness and into the construction project’s line item, also under the infrastructure category. This shift appears as an increase in the total amount for major construction activities in the 2016 budget materials. 
However, as noted above, the overall total for infrastructure declined slightly, in part because NNSA officials said they determined that the remainder of the $3.1 billion from program readiness is not required to support the project. Nevertheless, the $1.7 billion reported in the Fiscal Year 2016 Stockpile Stewardship and Management Plan is $214 million lower than the total estimates that NNSA reported in its 2016 congressional budget justification, which included a more detailed construction project data sheet for the project. An NNSA official confirmed that this amount should have been included in the plan and that its omission was the result of a data entry error. Consequently, the amount for the project in the construction line item should be approximately $1.9 billion. The Fiscal Year 2016 Stockpile Stewardship and Management Plan includes a goal to stop the growth of the agency’s deferred maintenance backlog. The plan notes that there has been limited availability of capital and maintenance funding in recent years, but NNSA officials stated that they are working to ensure that there is no increase in deferred maintenance relative to the level at the end of 2015. In August 2015, we found that NNSA’s infrastructure budget estimates were not adequate to address its deferred maintenance backlog and that the backlog would continue to grow. We recommended that in instances where budget estimates do not achieve DOE benchmarks for maintenance and recapitalization investment over the 5-year budget estimates, NNSA identify in the budget materials the amount of the shortfall and the effects, if any, on the deferred maintenance backlog. We also recommended that until improved data about the importance of facilities and infrastructure to mission are available, NNSA clarify in the budget materials for the Future-Years Nuclear Security Program the amount of the deferred maintenance backlog that is associated with facilities that have little to no effect on programmatic operations and is therefore a low priority to address. NNSA concurred with our recommendations. Specifically, NNSA agreed to include more information on maintenance, recapitalization, and deferred maintenance on excess facilities and stated that it will address them in the 2017 budget request or budget support materials as appropriate. NNSA officials likewise agreed that, until improved data on mission importance are available, they plan to clarify in the Future-Years Nuclear Security Program budget materials the low-priority portion of the deferred maintenance backlog. The estimates in NNSA’s 2016 nuclear security budget materials may not align with plans for some major modernization efforts for several reasons. In particular, the Fiscal Year 2016 Stockpile Stewardship and Management Plan includes several major modernization efforts that may require more funding in some years than the plan reflects, raising questions about the alignment of NNSA’s modernization plans with potential future budgets. In addition, for some nuclear weapon refurbishment programs, the low end of NNSA’s internally developed cost ranges exceeds the estimates included in the budget materials. 
Further, some costs, such as those for certain infrastructure upgrades, are not included in NNSA’s budget estimates, and dependency on other NNSA programs could lead to increases in program costs. NNSA officials provided various reasons for the discrepancies, which they said could be addressed in future planning. The Fiscal Year 2016 Stockpile Stewardship and Management Plan’s estimates for Weapons Activities are $4.4 billion higher than the out-year projections for funding levels in the President’s budget provided in the DOD-DOE joint report. Specifically, for the years 2021 through 2025—the 5 years after the 2016 Future-Years Nuclear Security Program—the Fiscal Year 2016 Stockpile Stewardship and Management Plan’s Weapons Activities budget estimates total $56.6 billion. However, these budget estimates exceed a set of out-year projections for nuclear modernization and sustainment activities over the same time period. Specifically, the DOD-DOE joint report included additional information on out-year projections in the 2016 President’s budget for Weapons Activities through 2025. These out-year projections total $52.2 billion from 2021 to 2025, or $4.4 billion less than DOE’s budget estimates over the same time period (see table 5). This misalignment between the Fiscal Year 2016 Stockpile Stewardship and Management Plan and the estimates described as out-year projections in the President’s budget in the DOD-DOE joint report corresponds to a challenging period for NNSA modernization efforts, as the agency plans to simultaneously execute at least four LEPs along with several major construction projects, including efforts to modernize NNSA’s uranium and plutonium capabilities. The differences between these two sets of numbers raise questions about the alignment of NNSA’s modernization plans with potential future budgets. NNSA notes this issue in the Fiscal Year 2016 Stockpile Stewardship and Management Plan and states that it will need to be addressed as part of fiscal year 2017 programming. According to an NNSA official from the office that coordinated production of the Fiscal Year 2016 Stockpile Stewardship and Management Plan, the additional line of out-year projections in the 2016 President’s budget was included in the 2016 DOD-DOE joint report at the request of the Office of Management and Budget. This official told us that the out-year projections included in the DOD-DOE joint report represent DOE’s evaluation of what modernization activities will cost for these years based on current plans and available information. NNSA officials also stated that the President’s budget information was included in the 2016 DOD-DOE joint report to show that the administration has not yet agreed to fund these activities beyond the Future-Years Nuclear Security Program at the level reflected in NNSA’s budget estimates. In addition, NNSA officials stated that there is a high level of uncertainty in the budget estimates beyond the Future-Years Nuclear Security Program, which makes planning beyond 5 years difficult. On the basis of our analysis of NNSA’s internally developed cost ranges for certain major weapon modernization efforts, we found that the low end of these ranges sometimes exceeded the estimates that NNSA included for those programs in its budget materials. 
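The check underlying this finding can be stated simply: for each program and year, compare the budget estimate against the low end of the program's internally developed cost range and flag any year in which the estimate falls short. The following minimal Python sketch, which is not NNSA's model or tooling, illustrates that logic using the example figures discussed in the paragraphs that follow, in millions of dollars.

```python
# Illustrative check only, not NNSA's model: flag program-years where the
# budget estimate falls below the low end of the internal cost range.
# Figures (millions of dollars) are the examples cited in this report.
programs = {
    "B61-12 LEP":         {2025: (64, 195)},   # year: (budget, low end of range)
    "W88 Alteration 370": {2020: (218, 247)},
    "W80-4 LEP":          {2020: (459, 476)},
    "IW-1 LEP":           {2020: (113, 175)},
}

for name, years in programs.items():
    for year, (budget, low) in sorted(years.items()):
        if budget < low:
            print(f"{name}, {year}: budget estimate of ${budget} million is "
                  f"${low - budget} million below the low end of the cost range")
```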
We analyzed NNSA’s budget estimates for nuclear weapon refurbishments over the 25 years covered in the Fiscal Year 2016 Stockpile Stewardship and Management Plan—the W76-1, the B61-12, the B61-13, the W80-4, and the IW-1, 2, and 3 LEPs, as well as the W88 Alteration 370. The Directed Stockpile Work category in the plan and in the 2016 Future-Years Nuclear Security Program contains detailed budget information on weapon refurbishment efforts that includes specific budget estimates for each effort as well as high and low cost ranges that NNSA developed for them. For each effort, we assessed the extent to which the budget estimates aligned with its high-low cost estimates. Specifically, we examined instances where the low end of the cost range estimates was greater than the budget estimates. We found that the annual budget estimates are generally consistent with NNSA’s internal cost estimates; that is, in most years, the annual budget estimates for each weapon refurbishment effort fall within the high and low cost ranges that NNSA developed for each program. However, in some years, NNSA’s budget estimates for some refurbishment efforts may not align with modernization plans. Specifically, for some years, the low end of cost ranges that NNSA developed for some LEPs exceeds the budget estimates. This indicates potential misalignment between plans and budget estimates for those programs in those years, or the possible need for NNSA to increase budget estimates for those programs in the future. For instance, see the following: The B61-12 LEP’s budget estimates during the 5-year period covered by the Future-Years Nuclear Security Program align with plans. However, the low cost range estimate of $195 million for the final year of production in 2025 exceeds the budget estimate of $64 million. NNSA officials said that this difference is not a concern because this misalignment occurs during the final year of the LEP effort and this estimate may overstate costs for the end of the B61-12 program. The W88 Alteration 370’s low cost range estimate exceeds its budget estimate for 2020. The budget materials report that the program’s budget estimate that year is $218 million; however, the low point of the cost range is $247 million. NNSA officials stated that this is not a concern because there is flexibility to address possible misalignments in future programming cycles. NNSA officials also stated that the total estimates for this program are above the total of the midpoint cost estimates for 2016 through 2020 and that funding for 2016 to 2019 is fungible and could be carried over to cover any potential shortfall in 2020. The W80-4 LEP’s low cost range estimate of $476 million exceeds its budget estimate of $459 million for 2020. NNSA officials stated that because the budget estimates for this LEP are above the low point of its estimated cost range during other years, the misalignment in 2020 represents a small incongruity in an otherwise sound LEP profile. The budget estimates for the IW-1 LEP are within the high and low estimated cost ranges for most years. However, the IW-1’s low cost range estimate of $175 million exceeds its budget estimate of $113 million in 2020, which is its first year of funding. NNSA officials said that by shifting funding projected for 2021 to 2020, the IW-1 budget estimates would still be within the cost ranges. 
For the W76-1 LEP, we compared the budget estimates in the 2016 Future-Years Nuclear Security Program and the Fiscal Year 2016 Stockpile Stewardship and Management Plan with internal cost estimates NNSA developed for the LEP. We found that the budget estimates for all years within the Future-Years Nuclear Security Program, except for 2018, are below NNSA’s internal cost estimates for that program, raising questions about whether the budget for the LEP is aligned with anticipated costs. According to NNSA officials, the W76-1 LEP is nearing completion, and the model used to develop internal cost estimates for the W76-1 is predicting the LEP’s end-of-program costs in a way that may not reflect the rate at which the program winds down. For more information on the LEPs and their budget estimates and cost ranges in the Fiscal Year 2016 Stockpile Stewardship and Management Plan, see appendix II. NNSA officials stated that the intent in providing budget estimates and cost range estimates for each weapon refurbishment effort is to show general agreement between the two sets of estimates. Notwithstanding the differences we identified between budget estimates and low-end cost range estimates for certain efforts in certain years, NNSA officials stated that the budget estimates and the cost range estimates are in general agreement for each LEP and alteration in terms of total costs and trend. In addition, NNSA officials stated that there is some flexibility in the funding for these efforts, and that the programs may carry over some funds from one year to the next if needed to cover costs, depending on the reason for the misalignment, among other things. In our August 2015 report on NNSA’s nuclear security budget materials, we found that not including information that identifies potential misalignments between LEP budget estimates and LEP internal cost estimates can pose risks to the achievement of program objectives and goals, such as increases in program costs and schedule delays. NNSA agreed with our recommendation from that report to provide more transparency with regard to shortfalls in its budget materials. Specifically, NNSA said that it would include, as appropriate, statements in future Stockpile Stewardship and Management Plans on the effect of funding an LEP effort at less than suggested by a planning estimate cost range. NNSA officials also said that the agency plans to incorporate this recommendation, among others, into its 2017 budget materials. We identified instances where certain modernization costs were not included in budget estimates or may be underestimated. For example, see the following: The budget estimates for the W88 Alteration 370 with a conventional high explosive replacement—or “refresh”—are understated, according to NNSA officials. The budget estimates for the refresh reported in the 2016 budget materials are roughly $300 million less than the refresh requires. Officials told us that the initial budget planning for the refresh contained a cost of approximately $500 million. However, NNSA found that this estimate was incorrect and increased it to approximately $800 million. NNSA officials stated that this project is still in the process of establishing a new, official baseline, which officials expect to complete in 2016. 
The 2016 budget materials may not contain all necessary costs for NNSA’s efforts to maintain its enriched uranium capability, which include relocating select operations performed in Building 9212 to other existing buildings and constructing a series of smaller buildings. Specifically, NNSA officials stated that the budget estimates in the 2016 budget materials for these efforts do not include the costs associated with infrastructure upgrades (such as ceiling repairs and heating, air conditioning, and other control systems) in two existing buildings at the Y-12 site. NNSA officials stated that the scope to maintain operations in the existing facilities is being developed and prioritized into a multiyear effort among multiple programs, separate from the Uranium Processing Facility project. According to another NNSA official, these costs were still under development, but the official estimated that the upgrades may cost tens of millions of dollars for each building. The costs of the plutonium infrastructure strategy—in which NNSA is currently preparing to move analytical chemistry and materials characterization capabilities into existing facilities as part of the Chemistry and Metallurgy Research Replacement construction project while also considering constructing new modular buildings under a separate project—are also uncertain and possibly underestimated. This uncertainty arises because NNSA has not yet determined the number of additional modular buildings that may be required, although the Fiscal Year 2016 Stockpile Stewardship and Management Plan calls for at least two. NNSA officials also stated that estimated costs for these efforts have not yet been baselined and that the cost of such a project cannot be estimated with any certainty until it has proceeded further into the planning process and established a baseline. In addition to some costs not being included in budget estimates, the estimates for some NNSA modernization efforts could increase in the future because of their dependency on successful execution of other NNSA programs. Specifically, NNSA managers for the LEPs stated that some of these programs could incur future cost increases or schedule delays because of other NNSA programs supporting the LEPs. For instance, NNSA officials told us that the W80-4 LEP will require a new insensitive high explosive to support the system. This is because the B61-12 LEP is consuming the currently available stocks of insensitive high explosive. As a result, NNSA is developing a new insensitive high explosive to meet the needs of the W80-4 LEP. However, NNSA officials told us that the performance of the new explosive currently being produced is not comparable to the quality of the existing explosive being consumed by the B61-12 LEP. Consequently, these officials stated that the costs of the W80-4 LEP could rise because of additional funding that may be required to further develop the new explosive. The Fiscal Year 2016 Stockpile Stewardship and Management Plan notes that as design options are down-selected, the budget estimate for the W80-4 may shift in response. An NNSA official also stated that the IW-1 LEP budget estimates in the 2016 budget materials are predicated on NNSA successfully modernizing its plutonium pit production capacity. The official stated that if there are delays in the current plutonium infrastructure strategy, the IW-1 LEP will bear costs that are greater than currently estimated to produce the number of additional plutonium pits it needs to support the program. 
The Fiscal Year 2016 Stockpile Stewardship and Management Plan notes that estimates for programs in their earlier stages, such as the IW-1 LEP, are subject to uncertainty. We previously found that NNSA has experienced significant cost increases and schedule delays in its earlier strategies to modernize its plutonium pit production support facilities at Los Alamos National Laboratory. We have ongoing work examining the Chemistry and Metallurgy Research Replacement construction project in more detail. We provided a draft of this report to DOE and NNSA for their review and comment. NNSA provided written comments, reproduced in appendix III, in which it stated that it will continue to enhance information on potential funding levels in future budget supporting materials. NNSA also provided technical comments separately, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to assess (1) the extent to which the National Nuclear Security Administration’s (NNSA) budget estimates and plans for modernization activities reflected in its fiscal year 2016 nuclear security budget materials differ, if at all, from those in its fiscal year 2015 budget materials and (2) the extent to which the fiscal year 2016 nuclear security budget materials align with modernization plans as presented in the Stockpile Stewardship and Management Plan. We limited the scope of our review to NNSA’s Weapons Activities appropriations account, because NNSA’s activities in the Stockpile Stewardship and Management Plan are funded by this account. This scope is consistent with that of our August 2015 review. We focused our review on major modernization efforts—that is, the refurbishment of nuclear weapons through life extension programs (LEP) and alterations and major construction efforts to replace existing, aging facilities for plutonium and uranium. The budget projections in the 2015 and 2016 Stockpile Stewardship and Management Plans each contain budget dollar figures for 25 years, presented in current dollar values. Our report presents all figures in current, or nominal, dollars, which include projected inflation, unless otherwise noted. Further, all years noted in our report refer to fiscal years, unless otherwise noted. To determine the extent to which NNSA’s budget estimates and plans for modernization activities differed from those in the 2015 nuclear security budget materials, we compared the information in the 2016 materials with the information in the 2015 materials. NNSA’s nuclear security budget materials are composed of two key policy documents that are issued annually: the agency’s budget justification, which contains estimates for the 5-year Future-Years Nuclear Security Program, and the Stockpile Stewardship and Management Plan, which provides budget estimates over the next 25 years. 
Specifically, we (1) compared differences between the 2016 and 2015 budget materials in the four broad modernization areas—stockpile; infrastructure; research, development, testing, and evaluation; and other weapons activities—and (2) compared differences between the 2016 and 2015 budget materials for specific weapons refurbishment activities and major construction projects. We interviewed knowledgeable officials from NNSA about changes we identified between the 2016 and 2015 budget materials. We also reviewed a third, integrated document on plans for the nuclear deterrent that includes information on the Department of Defense (DOD) and Department of Energy’s (DOE) modernization budget estimates. This annual report that DOD and DOE are required to submit jointly to the relevant Senate and House committees and subcommittees is referred to as the section 1043 report; in our report, we refer to it as the DOD-DOE joint report. We compared the information in the 2016 DOD-DOE joint report with that in the Fiscal Year 2016 Stockpile Stewardship and Management Plan. To determine the extent to which NNSA’s budget materials align with its modernization plans, we compared information on the budget estimates in the 2016 budget materials with the information on modernization plans in the materials as well as the DOD-DOE joint report, reviewed prior GAO reports to provide context for the concerns we identified, and interviewed NNSA officials to obtain further information on changes to modernization plans and discussed any perceived misalignments with them. For weapons refurbishment efforts under way during the 25 years covered by the Fiscal Year 2016 Stockpile Stewardship and Management Plan, we analyzed NNSA’s budget estimates for all those to be conducted over the 25-year period by comparing them against NNSA’s internally developed cost ranges for each LEP. According to DOE officials, for all LEPs besides the W76-1, DOE uses two different approaches to estimate the costs of LEPs. Under the first approach, according to officials, DOE develops specific budget estimates by year through a “bottom-up” process. DOE officials describe this as a detailed approach to developing the LEP budget estimates, which, among other things, integrates resource and schedule information from site participants. Under the second approach, which DOE refers to as a “top-down” process, DOE uses historical LEP cost data and complexity factors to project high and low cost ranges for each LEP distributed over the life of the program using an accepted cost distribution method. Officials noted that the values in these cost ranges reflect idealized funding profiles and do not account for the practical constraints of the programming and budgeting cycle. For the W76-1 LEP, DOE has developed specific budget estimates by year. Because the W76-1 LEP is the basis of DOE’s top-down model, DOE does not develop high and low cost ranges for it. Instead, DOE published the W76-1 LEP estimates in the Fiscal Year 2016 Stockpile Stewardship and Management Plan as a comparison between the Future-Years Nuclear Security Program request and a single LEP model line. For the W76-1 LEP, we compared the budget estimates with the LEP model line. For all LEPs besides the W76-1, we assessed the extent to which the specific bottom-up budget estimates were aligned with the high-low cost ranges developed through the top-down model. 
Specifically, we examined where the specific budget estimates were under the low end of the cost range predicted by the top-down model. We did this by reviewing charts in the Fiscal Year 2016 Stockpile Stewardship and Management Plan and the underlying data for those charts. In instances where the low cost range exceeded the budget estimates, we followed up with NNSA officials for additional information. To assess the reliability of the data underlying NNSA’s budget estimates, we reviewed the data to identify missing items, outliers, or obvious errors; interviewed NNSA officials knowledgeable about the data; and compared the figures in the congressional budget justification with those in the Fiscal Year 2016 Stockpile Stewardship and Management Plan to assess the extent to which they were consistent. We determined that the data were sufficiently reliable for our purposes, which were to report the total amount of budget estimates and those estimates dedicated to certain programs and budgets and to compare them to last year’s estimates. We conducted this performance audit from May 2015 to March 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The National Nuclear Security Administration (NNSA) has developed budget estimates for its nuclear weapons life extension programs (LEP) and major alterations: the B61-12, the W76-1, the W80-4, the IW-1, the IW-2, the IW-3, and the B61-13 LEPs, as well as for the W88 Alteration 370. The estimates include NNSA’s internally developed high and low cost ranges for each program. The budget estimates appear as bars for each year, while the high and low cost ranges are represented by lines across the figures. The following figures present budget estimates for each LEP and alteration. Similar figures also appear in the Fiscal Year 2016 Stockpile Stewardship and Management Plan. B61-12: The B61 bomb is one of the oldest nuclear weapons in the stockpile. The B61-12 LEP will consolidate and replace the B61-3, -4, -7, and -10 bombs. According to the Fiscal Year 2016 Stockpile Stewardship and Management Plan, this consolidation will enable a reduction in the number of gravity bombs, which is consistent with the objectives of the 2010 Nuclear Posture Review. The first production unit of the B61-12 is planned for 2020; the program is scheduled to end in 2026. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the B61-12 LEP will require a total of $5.7 billion from 2016 to 2026. See figure 1 for an illustration of budget estimates against projected cost ranges. W76-1: The W76 warhead was first introduced into the stockpile in 1978 and is deployed with the Trident II D5 missile on the Ohio-class nuclear ballistic missile submarines. The W76-1 LEP is intended to extend the original warhead service life and address aging issues, among other things. The first production unit was completed in September 2008, and the program will end in calendar year 2020. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that approximately $847 million will be required for this program from 2016 to 2021. 
See figure 2 for an illustration of budget estimates against projected cost ranges. W80-4: The W80-4 LEP is intended to provide a warhead for a future long-range standoff missile that will replace the Air Force’s current air-launched cruise missile. The first production unit is planned for 2025, and the program is scheduled to end in 2032. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the W80-4 LEP will require approximately $8.2 billion from 2016 to 2032. See figure 3 for an illustration of budget estimates against projected cost ranges. W88 Alteration 370: Among other things, the W88 Alteration 370 will replace the arming, fuzing, and firing subsystem for the W88 warhead, which is deployed on the Navy’s Trident II D5 submarine-launched ballistic missile system. In November 2014, the Nuclear Weapons Council decided to replace the conventional high explosive main charge, which led to an increase in costs for the alteration. The first production unit is planned for 2020, and the program is scheduled to end in 2026. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the program will require a total of $2 billion from 2016 to 2026. See figure 4 for an illustration of budget estimates against projected cost ranges. IW-1: The IW-1, also known as the W78/88-1, is the first ballistic missile warhead LEP in NNSA’s interoperable strategy to transition the stockpile to three interoperable ballistic missile warheads and two air-delivered warheads. The first production unit is planned for 2030; the 2016 budget materials do not report an end date for the LEP. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the program will require a total of $13.4 billion from 2020 to 2040. See figure 5 for an illustration of budget estimates against projected cost ranges. IW-2: The IW-2 is an interoperable warhead intended to replace the W87/88 warhead. The Nuclear Weapons Council has not yet developed a more detailed implementation plan for this LEP. The first production unit is planned for 2034; the Fiscal Year 2016 Stockpile Stewardship and Management Plan does not contain a projected end date. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that the program will require a total of $12.1 billion from 2023 to 2040. See figure 6 for an illustration of budget estimates against projected cost ranges. IW-3: The IW-3 is intended to provide the third interoperable warhead for NNSA’s future strategy for the stockpile. The first production unit is not yet specified, and there is not yet a budgeted end date. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that a total of $6.3 billion will be required for this program from 2030 to 2040. See figure 7 for an illustration of budget estimates against projected cost ranges. B61-13: According to NNSA officials, the B61-13 LEP is intended to replace the B61-12 bomb. The first production unit is not yet specified, and there is not yet a budgeted end date. In the Fiscal Year 2016 Stockpile Stewardship and Management Plan, NNSA estimates that a total of $1.2 billion will be required for this program from 2038 to 2040. See figure 8 for an illustration of budget estimates against projected cost ranges. 
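The figures referenced above all share one layout: annual budget estimates drawn as bars, with the internally developed high and low cost ranges drawn as lines across the same years. A chart of that form can be reproduced with a short plotting script; the sketch below uses hypothetical dollar figures rather than NNSA's data.

```python
import matplotlib.pyplot as plt

# Hypothetical annual figures in millions of dollars; not NNSA data.
years  = [2016, 2017, 2018, 2019, 2020]
budget = [300, 450, 600, 550, 400]   # budget estimates (bars)
low    = [250, 400, 500, 480, 350]   # low end of internal cost range (line)
high   = [400, 600, 750, 700, 550]   # high end of internal cost range (line)

fig, ax = plt.subplots()
ax.bar(years, budget, color="steelblue", label="Budget estimate")
ax.plot(years, low, "k--", label="Low cost range")
ax.plot(years, high, "k:", label="High cost range")
ax.set_xlabel("Fiscal year")
ax.set_ylabel("Millions of dollars")
ax.set_title("Budget estimates vs. internal cost range (hypothetical data)")
ax.legend()
plt.show()
```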
In addition to the contact named above, William Hoehn (Assistant Director), Antoinette Capaccio, Pamela Davidson, Philip Farah, Bridget Grimes, Carol Henn, Aaron Karty, and Cynthia Norris made key contributions to this report. | Nuclear weapons are an integral part of the nation's defense strategy. Since 1992, the United States has shifted from producing new nuclear weapons to maintaining the stockpile through refurbishment. The 2010 Nuclear Posture Review—which outlines U.S. nuclear policy, strategy, capabilities, and force posture—identified long-term stockpile modernization goals for NNSA that include sustaining a safe, secure, and effective nuclear arsenal and investing in a modern infrastructure. The National Defense Authorization Act for Fiscal Year 2011 included a provision for GAO to report annually on NNSA's nuclear security budget materials. These materials are composed of NNSA's budget request justification and its Stockpile Stewardship and Management Plan, which describes modernization plans and budget estimates for the next 25 years. This report assesses (1) changes in the estimates in the 2016 budget materials from the prior year's materials and (2) the extent to which NNSA's 2016 budget materials align with plans for major modernization efforts. GAO analyzed NNSA's fiscal year 2015 and 2016 nuclear security budget materials, which describe modernization plans and budget estimates for the next 25 years. GAO also interviewed NNSA officials. In the National Nuclear Security Administration's (NNSA) fiscal year 2016 budget materials, the estimates for efforts related to modernizing the nuclear weapons stockpile total $297.6 billion for the next 25 years—an increase of $4.2 billion (1.4 percent) in nominal dollar values (as opposed to constant dollar values) compared with the prior year's budget materials. However, for certain program areas and individual programs, budget estimates changed more significantly than the overall estimates. NNSA's modernization efforts occur in four areas under the Weapons Activities appropriation account: stockpile; infrastructure; research, development, testing, and evaluation; and other weapons activities. For the stockpile area, budget estimates over 25 years increased by 13.2 percent over the nominal values in the Fiscal Year 2015 Stockpile Stewardship and Management Plan. Within the stockpile area, the estimates for life extension programs (LEP), which refurbish nuclear weapons, increased by 19.6 percent compared with the prior year's estimate, in part because of changes in the scope and schedule for some programs. In contrast, estimates for the other weapons activities area decreased by 18.1 percent, mainly because NNSA shifted two counterterrorism programs out of the Weapons Activities budget and into NNSA's separate Defense Nuclear Nonproliferation budget. The estimates in NNSA's 2016 nuclear security budget materials may not align with all elements of modernization plans for several reasons. First, the Fiscal Year 2016 Stockpile Stewardship and Management Plan includes estimates for 2021 through 2025 that are $4.4 billion higher than the same time period in a set of out-year projections for funding levels that were included in a joint report by the Department of Defense and Department of Energy. NNSA noted this issue in the 2016 plan and stated that it will need to be addressed as part of fiscal year 2017 programming. 
In addition, in some years, NNSA's budget estimates for certain weapons refurbishment efforts are below the low point of the programs' internally developed cost ranges. For example, the W88 Alteration 370 budget estimate of $218 million for 2020 was below the low end of the internal program cost range of $247 million. NNSA officials stated that the total estimates for this program are above the total of the midpoint cost estimates for 2016 through 2020 and that funding for 2016 to 2019 is fungible and could be carried over to cover any potential shortfall in 2020. GAO also identified instances where certain modernization costs were not included in the estimates or may be underestimated, or where budget estimates for some efforts could increase due to their dependency on successful execution of other NNSA programs. For example, an NNSA official said that budget estimates for the IW-1 LEP—which is NNSA's first interoperable ballistic missile warhead LEP—are predicated on NNSA successfully modernizing its plutonium pit production capacity. This official stated that if there are delays in modernizing this capacity, the IW-1 LEP could bear greater costs than currently estimated. In August 2015, GAO recommended that NNSA provide more transparency with regard to shortfalls in its budget materials. NNSA agreed and said that it plans to implement this recommendation starting in its 2017 budget supporting documents. GAO is not making any new recommendations in this report. In response to GAO's draft report, NNSA provided technical comments, which were incorporated as appropriate. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The M+C program, established by the Balanced Budget Act of 1997 (BBA), grew out of Medicare’s previous managed care option known as the risk-contract program. BBA included provisions designed to expand the number and type of health plan choices available to Medicare beneficiaries. BBA also modified the method used to set payment rates, both to encourage MCOs to serve new geographic areas and to pay them more appropriately for the beneficiaries they enrolled. Since the first of BBA’s payment reforms were implemented, MCOs have terminated some of their Medicare contracts and reduced the geographic areas served under other contracts. As a result, many beneficiaries previously enrolled in M+C plans have had to switch plans or return to the FFS program. In 1999, and again in 2000, Congress passed legislation that increased payment rates in an effort to make participation in the M+C program more attractive for MCOs. As of September 2001, about 5.6 million people—or approximately 14 percent of Medicare’s 40 million beneficiaries—were enrolled in M+C plans. Overall, approximately two-thirds of all beneficiaries lived in areas served by at least one MCO, but M+C plan availability varied among locations. Most beneficiaries living in urban areas, but less than one-quarter of beneficiaries living in rural areas, had access to one or more M+C plans. MCOs receive a fixed monthly payment for each beneficiary enrolled in their health plans, regardless of the actual cost of an individual enrollee’s care. Because Medicare establishes separate payment rates for each county, the amount that Medicare pays for a specific beneficiary depends, in part, on the beneficiary’s county of residence. The beneficiary’s demographic characteristics and an indicator of his or her health status also affect the monthly payment. These adjustments are made to the county rate so that the payment amount better reflects the expected health care costs of the specific beneficiary. Benefit packages—in terms of premiums, required cost sharing, and covered services—vary among M+C plans. All plans must cover the services available in the FFS program, with the exception of Medicare’s hospice benefit. MCOs may include additional benefits in their health plans, such as coverage for routine physical examinations and outpatient prescription drugs. Every July, as part of the annual contracting process, MCOs must estimate how much it will cost them to provide Medicare-covered benefits during the next calendar year. These estimated costs, which may include the organization’s normal profits, are supposed to reflect the premiums that the MCOs would charge to commercial and other customers, adjusted to reflect differences in Medicare’s covered benefits and beneficiaries’ expected use of services. For each M+C plan they intend to offer, MCOs submit a document, known as an adjusted community rate proposal (ACRP), that contains detailed estimates of the plan’s expected costs and revenues associated with providing covered benefits, and a description of the plan benefit package. CMS reviews each ACRP and compares the estimated costs to the plan’s projected Medicare revenues. If the estimated costs are less than the projected Medicare revenues, the MCO must either use the difference to cover additional benefits or contribute to a benefit stabilization fund that it can draw on to augment the plan’s revenues in future years. The cost of any additional benefits or stabilization fund contributions must also be detailed in the ACRP. 
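Two of the mechanics described above lend themselves to a compact illustration: Medicare adjusts the county rate for each enrollee's demographics and health status, and CMS compares each plan's estimated costs in the ACRP with its projected Medicare revenues. The following minimal Python sketch uses hypothetical rates and adjustment factors; the actual adjusters and the ACRP review process are considerably more involved.

```python
def monthly_payment(county_rate, demographic_factor, health_status_factor):
    """Medicare's fixed monthly payment for one enrollee: the county rate
    adjusted for the beneficiary's demographic characteristics and an
    indicator of health status."""
    return county_rate * demographic_factor * health_status_factor

def review_acrp(estimated_costs, projected_revenue):
    """If a plan's estimated costs (which may include normal profit) are
    less than projected Medicare revenues, the difference must fund
    additional benefits or a benefit stabilization fund contribution."""
    surplus = projected_revenue - estimated_costs
    if surplus > 0:
        return (f"${surplus:.2f} PMPM must cover additional benefits or "
                f"go to a benefit stabilization fund")
    return "No surplus; the proposed benefit package stands"

# Hypothetical enrollee in a county with a $500 PMPM rate.
payment = monthly_payment(county_rate=500.00,
                          demographic_factor=0.95,
                          health_status_factor=1.10)
print(f"Adjusted payment: ${payment:.2f} PMPM")   # $522.50 PMPM
print(review_acrp(estimated_costs=480.00, projected_revenue=payment))
```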
Before 1998, Medicare’s managed care payment rate in each county was set equal to 95 percent of FFS per capita spending in that county, adjusted for certain demographic characteristics of the beneficiaries living in the county to account for differences in service use associated with those characteristics. These county payment rates—reflecting the underlying pattern of FFS spending—ranged from $221 to $767 PMPM in 1997. This variation in payment rates may have contributed, along with many other factors, to the uneven availability of Medicare managed care options across the country. Our work and research by others have found that the methodology used to adjust payments for the expected service use of enrolled beneficiaries did not adequately reflect the above-average health status and below-average expected health costs of typical M+C enrollees. Consequently, Medicare paid MCOs substantially more than it likely would have spent if beneficiaries enrolled in M+C plans had instead received services in the FFS program. Beginning in 1998, BBA substantially modified the method used to set county payment rates for M+C plans. Some of the modifications were designed to reduce excess payments, while others were designed for other purposes—such as increasing program participation of MCOs in geographic areas that historically had low payment rates. Specifically, the law required that each county’s payment rate equal the highest of three rates: a minimum amount, or “floor” (set at $367 in 1998 and increased each year); a minimum increase (2 percent) over the previous year’s payment rate; or a blend of historical FFS spending in a county and national average costs adjusted for local price levels. BBA required, for five years, that the annual payment rate updates to the floor and blend rates be lower than the increases in national FFS per capita spending. The law also mandated that by 2000, M+C payments be adjusted to reflect the health status of plans’ enrollees. In the years following the implementation of BBA’s payment and other reforms, MCOs terminated approximately 160 Medicare contracts and reduced the size of the geographic areas served under many of the contracts they renewed. Approximately 1.6 million beneficiaries had to switch to a different M+C plan or return to the FFS program because of these withdrawals. CMS expects that an additional 536,000 beneficiaries will be affected by withdrawals that will occur at the end of 2001. Most of the affected beneficiaries live in areas where other M+C plans are available, but approximately 38,000 beneficiaries will no longer have access to an M+C plan and will have to return to the FFS program. Managed care industry representatives have attributed the withdrawals to BBA’s payment reforms and new administrative requirements for MCOs. The representatives have stated that the payment reforms and the cost of meeting the new administrative requirements make it difficult for MCOs to offer benefit packages that are attractive to beneficiaries. To help maintain and expand beneficiary access to M+C plans, Congress twice revised the M+C program and modified BBA’s payment reforms. In 1999, Congress passed BBRA, which provided for new-entry bonus payments to MCOs that contracted with Medicare to serve areas where no M+C plans were being offered. The law also affected payment rates by modifying implementation of certain BBA payment reforms. In December 2000, Congress passed BIPA, which increased payment rates in all counties in March 2001. 
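BBA's rate-setting rule reduces to taking the highest of the three candidate rates. The minimal sketch below applies that rule to hypothetical county figures; the 1998 floor and 2 percent minimum update come from BBA as described above, while the revised floors and 3 percent update that BIPA set for 2001 are described in the next paragraph.

```python
def county_payment_rate(floor, prior_year_rate, blend_rate, minimum_update=1.02):
    """BBA rule: each county's rate is the highest of (1) a statutory floor,
    (2) a minimum increase over the prior year's rate, and (3) a blend of
    local FFS spending and price-adjusted national average costs."""
    return max(floor, prior_year_rate * minimum_update, blend_rate)

# Hypothetical 1998 county under BBA: $367 floor, 2 percent minimum update.
print(county_payment_rate(floor=367.00, prior_year_rate=380.00,
                          blend_rate=402.00))                       # 402.0 PMPM
# Hypothetical 2001 large-metropolitan county under BIPA: $525 floor,
# 3 percent minimum update.
print(county_payment_rate(floor=525.00, prior_year_rate=490.00,
                          blend_rate=510.00, minimum_update=1.03))  # 525.0 PMPM
```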
Before BIPA took effect, the floor rate was $415 PMPM in 2001. BIPA created a new rate category for counties located in metropolitan areas of at least 250,000 people and established $525 as the floor rate for those counties in 2001; for all other counties, the law increased the floor rate to $475. BIPA also mandated that 2001 county payment rates exceed 2000 rates by at least 3 percent, a 1 percentage point increase in the minimum annual update specified in BBA. The increases in county rates that resulted from these changes ranged from about $5 to $110 PMPM (or 1 to 27 percent). The legislation also extended BBRA’s new-entry bonus payments to counties where all existing Medicare MCOs had indicated they would withdraw at the end of 2001. Beginning in March 2001, as a result of BIPA, the average payment rate increase for M+C plans ranged from less than $1 to more than $100 PMPM. The amount of the increase depended on the specific counties each plan served and its expected enrollment in each county. Half of the M+C plans received overall payment rate increases of less than $10 PMPM, while the other half received $10 PMPM or more. BIPA provided that MCOs could use the additional money for each plan toward any combination of the following options: improving the benefit package (by reducing beneficiary premiums, reducing beneficiary cost sharing, adding benefits, or enhancing benefits); contributing to a benefit stabilization fund; or stabilizing or enhancing beneficiary access to providers. BIPA required MCOs to submit revised contract proposals to cover that portion of the 2001 contract year—March through December—when the increased payment rates would be in effect and to explain how they would use the additional money. The schedule for the submission and approval of the revised contracts, however, was compressed compared with the typical schedule. HCFA had originally announced the 2001 payment rates in March 2000. MCOs were not required to submit their 2001 contract proposals until July 2000—four months after the rates were announced. HCFA then spent two months reviewing and approving the contracts. Under BIPA’s time frames, the process—HCFA’s development and announcement of the new county rates, MCOs’ preparation and submission of contract proposals, and HCFA’s review and approval of those proposals—happened within six weeks. MCOs reported that some or all of the BIPA payment increase would be used to stabilize or enhance beneficiary access to providers for the majority of their M+C plans. Some of these MCOs stated that they would increase provider payments or contract with additional providers, but others—consistent with HCFA guidelines—may have revised their cost projections and reported that the additional money would be used to help offset projected cost increases. MCOs used additional money to improve their benefit packages for one-fourth of their plans—primarily by reducing the monthly premiums they charged to beneficiaries. MCOs put additional money into a benefit stabilization fund for a few of their plans. For about 83 percent of the 543 M+C plans, MCOs reported that some or all of the additional money authorized by BIPA would be used to stabilize or enhance beneficiary access to providers (see fig. 1). In about 63 percent of M+C plans, the entire BIPA payment increase was slated for this purpose. In about 20 percent of M+C plans, MCOs reported that they would also improve the benefit packages or contribute to a benefit stabilization fund. 
In HCFA’s instructions for filing revised ACRPs, the agency stated that MCOs could increase provider payment rates to help stabilize beneficiary access to providers. Alternatively, MCOs could contract with additional providers to enhance beneficiary access to providers. HCFA also stated that MCOs could revise their previous cost projections—for example, by updating assumptions regarding enrollees’ use of services, unit costs, or composition of enrollees—if the revisions would stabilize or enhance beneficiary access to providers. MCOs could then use any projected cost increases to offset the BIPA payment increase and thus reduce the amount that they might otherwise spend on increases in provider payments, benefit improvements, or contributions to the stabilization fund. MCOs were required to submit justifications of any projected changes in costs along with their revised ACRPs. HCFA did not review these justifications, but they are potentially subject to audit. In some instances in which MCOs stated that the additional money would be used to improve access to providers, the ACRP justifications clearly stated that the MCO intended to contract with additional providers or increase provider payment rates. Some MCOs that increased provider payment rates explained that they did so voluntarily to help retain existing providers or expand their provider networks. Other MCOs stated that contractual arrangements required them to increase provider payment rates because those rates were specified as a percentage of Medicare’s payment. MCOs used some or all of the BIPA payment increase to improve the benefit packages in about 29 percent of plans. For these plans, MCOs reduced beneficiary premiums or cost sharing, added new benefits, or enhanced coverage for existing benefits. MCOs used the entire payment increase for benefit package improvements in approximately 12 percent of plans and used a portion of the payment increase for this purpose in another 16 percent of plans. Most plans (86 percent) did not have any changes in their premiums as a result of BIPA. Premiums were reduced in 12 percent of plans and eliminated entirely in 2 percent. The maximum premium fell from $250 to $200 PMPM, while, among plans that still charged a premium, the lowest premium remained unchanged at $4 PMPM. The average premium fell by $2 overall, from $25 to $23 PMPM. Approximately 1.4 million beneficiaries (25 percent) enrolled in M+C plans received improved benefits as a result of BIPA. The typical improvement, affecting more than 900,000 beneficiaries (16.4 percent), was a premium reduction (see fig. 2). For these beneficiaries, the median premium reduction was $10 per month, although some premiums dropped by as much as $59 while others fell by only $2. More than 100,000 of these beneficiaries—about 2 percent of total M+C enrollment—were enrolled in plans in which premiums were eliminated. Previously, these beneficiaries had paid premiums that ranged from $10 to $59 per month. The second most frequent benefit package improvement was a reduction in required cost sharing, which affected about 290,000 beneficiaries (5.2 percent of total M+C enrollment). Relatively few M+C enrollees received enhanced service benefits (105,000 or 1.9 percent) or additional service benefits (72,000 or 1.3 percent) as a result of BIPA. Many beneficiaries who received enhanced or additional service benefits saw improvements in their coverage for prescription drugs.
Approximately 50,000 beneficiaries were enrolled in M+C plans in which MCOs enhanced existing drug coverage. Another 53,000 beneficiaries were enrolled in M+C plans in which the MCO added drug coverage as a new benefit. Some MCOs also added or improved coverage for hearing aids, preventive dental services, and a variety of other services. MCOs put some or all of their additional money into an escrow-like account, known as a benefit stabilization fund, for about 12 percent of their plans (see fig. 1). An MCO that contributes a portion of a plan’s Medicare payments to such a fund can draw on its accumulated contributions to help finance the cost of that plan’s benefits in future years. By drawing on its stabilization fund, an MCO may avoid having to increase beneficiary premiums or reduce coverage for non-Medicare benefits in years when it expects to retain less of Medicare’s payment after paying for Medicare-covered benefits. For less than 2 percent of M+C plans, MCOs put all of the additional money into a benefit stabilization fund. These amounts ranged from about $5 to $37 PMPM. For approximately 10 percent of their plans, MCOs applied some (2 percent to 78 percent) of the payment increase to a benefit stabilization fund for the plan. Among these plans, the median contribution was 34 percent of the BIPA payment increase. The dollar contributions for these plans ranged from less than $1 PMPM to $55 PMPM, with a median contribution of $9 PMPM. MCOs have always had the option of placing a portion of a plan’s Medicare payments into a benefit stabilization fund. Historically, however, MCOs have not used this option but instead used the full payment amount to cover costs in the current year. An industry trade association has suggested that some MCOs may have used the benefit stabilization funds in 2001 because of the short time frames associated with the implementation of the BIPA payment changes. According to the association, some MCOs may have decided they had too little time to renegotiate provider contracts or to change their health plans’ benefit packages. However, short time frames may not have been the only factor because some MCOs that offer multiple health plans in the same geographic area used the stabilization funds for some of their health plans but not others. BIPA had little effect on the number of beneficiaries with access to at least one M+C plan in 2001. Seven MCOs, offering a total of 12 M+C plans, either reentered counties they had previously dropped from their service areas or expanded into counties they had not previously served. However, all but 21,000 of the approximately 750,000 beneficiaries living in the affected counties already had access to an M+C plan. All of the counties that MCOs reentered, but only two of the counties into which MCOs expanded, received above-average payment rate increases. Interviews with MCO representatives suggest that BIPA influenced MCOs’ reentry but not expansion decisions. Following BIPA’s enactment, seven MCOs contracted to serve additional geographic areas (see table 1). Three of these MCOs reversed earlier decisions and reentered counties they had dropped from their 2001 service areas. Three others expanded their service areas into counties that they previously had not served. The seventh MCO both reentered previously served counties and expanded into new counties.
In addition to these 7 MCOs, 15 other MCOs submitted applications to expand their geographic service areas or begin service in new areas, but these applications had not been approved as of October 2001. The MCOs’ reentry and expansion decisions did not substantially increase the number of beneficiaries who had access to an M+C plan. Nearly all (97 percent) of the approximately 750,000 beneficiaries living in affected counties already had access to at least one M+C plan in 2001. For these beneficiaries, the reentry and expansion decisions increased the number of M+C plans from which they could choose. Blue Cross and Blue Shield of Massachusetts expanded into additional portions of a county that it already partially served. The expansion affected fewer than 21,000 beneficiaries. As of September 2001, the seven MCOs that had contracted to serve additional geographic areas had enrolled a total of about 12,000 beneficiaries. About half of these beneficiaries (5,968) were enrolled in St. Joseph Healthcare. This MCO had intended to discontinue service in four counties in New Mexico as of January 2001. Following BIPA, the MCO reversed its earlier decision and proposed including the four counties in its service area. St. Joseph Healthcare obtained permission from HCFA to serve the four counties during January and February. Thus, St. Joseph Healthcare operated without a disruption in service. The other three MCOs that reentered previously served counties had to disenroll their members in those counties at the end of December 2000 and could not reenroll them until March 2001, when the BIPA payment increase went into effect. Many of the disenrolled beneficiaries did not return to their original plans. As of September 2001, these three MCOs had enrolled about 2,200 beneficiaries in the affected counties—substantially less than their combined enrollment level at the end of 2000. The four MCOs that expanded their service areas had not enrolled many beneficiaries as of September 2001. However, one of the four MCOs had only begun service in its expansion area during September. The four MCOs’ aggregate enrollment increased by approximately 460 beneficiaries in the counties affected by the expansions. BIPA payment rate increases were greater in all of the counties that MCOs reentered and in two of the counties where MCOs expanded, compared to the overall average payment rate increase of $16 PMPM (about 3 percent) in counties with M+C enrollment. In the nine counties that MCOs reentered but where no MCO expanded, the payment rate increases ranged from $56 PMPM (13 percent) to $110 PMPM (27 percent) (see table 2). Payment rate increases were generally lower in the four counties where MCOs expanded but no MCO reentered, and ranged from $8 (1 percent) to $54 (11 percent). The smallest payment increase occurred in New York County, NY, where the pre-BIPA 2001 payment rate was $772—substantially above the national average county payment rate of $463. Two counties in New Mexico—Sandoval and Torrance—were affected by one MCO’s reentry and another MCO’s expansion. The payment rate increased by $110 (27 percent) in Sandoval County and $60 (15 percent) in Torrance County—increases similar to those in the reentered-only counties. According to representatives of the MCOs that reentered previously served counties, the BIPA payment increase was primarily responsible for their decision to return to those counties.
Representatives of the MCO that both reentered counties and expanded into new ones also stated that the higher payments motivated their decision to increase their plan’s service area. In contrast, representatives of the three MCOs that expanded their service areas said that the additional payments authorized by BIPA did not influence their decisions at all. These representatives generally said that their MCOs had decided to expand before BIPA passed or that expansion was a good business decision regardless of the payment increase. In the short run, BIPA has had a limited effect on M+C plans’ benefit packages. For most M+C plans, MCOs reported that the additional money resulting from BIPA would be used to maintain or improve beneficiary access to providers. MCOs used the additional money to improve their plans’ benefit packages—most often by reducing premiums—or to contribute to benefit stabilization funds for less than half of all their plans. BIPA increased the number of M+C plans available to some beneficiaries, but it largely did not extend choice to beneficiaries who were not previously served by MCOs. Although seven MCOs increased the size of their health plans’ service areas, approximately 97 percent of the beneficiaries living in the 15 affected counties already had access to at least one M+C plan. However, the longer-term effects of BIPA may differ from the effects in 2001. MCOs had only a few weeks to react to the legislation and decide how they would use the increased payments. Over time, new county payment rates established by BIPA may have a greater influence on the geographic areas that plans serve and the benefits they offer. In commenting on a draft of this report, CMS generally agreed with our results. CMS noted that MCOs may not have had sufficient time to react to the legislation and reconsider and reverse carefully considered financial decisions, or to rebuild provider networks. Technical comments were incorporated as appropriate. The full text of CMS’ comments appears in appendix I. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to the Administrator of CMS and to interested parties upon request. If you or your staffs have any questions about this report, please call me at (202) 512-7119. This report was prepared under the direction of James Cosgrove, Assistant Director, by Zachary Gaumer, Jim Hahn, and Jennifer Podulka. | The number of contracts under Medicare's managed care program--Medicare+Choice (M+C)--fell from 340 to 180 between 1998 and 2001. The reduction reflected decisions by some managed care organizations (MCOs) to terminate selected contracts or to discontinue service in some covered areas. Although nearly all MCOs renewed at least some of their Medicare contracts over this period, many reduced the geographic areas served. As a result, 1.6 million beneficiaries had to switch MCOs or return to Medicare's traditional fee-for-service program. Other MCOs plan either to terminate or reduce their participation in M+C at the end of 2001. Concerned about MCO withdrawals, Congress sought to make participation in the program more attractive. As a result of the Benefits Improvement and Protection Act of 2000, aggregate Medicare+Choice payments in 2001 are estimated to have increased by $1 billion. The act permitted three basic uses for the higher payment. 
MCOs could (1) improve their health plans' benefit packages, (2) set aside money for future years in a benefit stabilization fund, or (3) stabilize or enhance beneficiary access to providers. Most MCOs reported that additional money would be used to stabilize or enhance beneficiary access to providers. A minority of MCOs reported that the money would go toward benefit improvements or be placed in a benefit stabilization fund. In 83 percent of M+C plans, MCOs stated that some or all of the additional money would be used to stabilize or enhance beneficiary access. The payment increases had little effect on the availability of M+C plans during 2001. Following passage of the act, three MCOs reentered counties they had dropped from their service areas, three MCOs expanded into counties that they previously had not served, and one MCO both reentered previously served counties and expanded into new ones. |
The Federal Aviation Administration’s (FAA) primary mission is to ensure safe and efficient air travel throughout the United States. FAA’s ability to fulfill this mission depends on the adequacy and reliability of the nation’s air traffic control (ATC) system, which FAA is responsible for managing and maintaining. Growth in air traffic operations and deteriorating equipment have strained the current ATC system and FAA’s ability to sustain its exemplary safety record. These factors have increased the urgency for FAA to modernize ATC equipment. FAA began its program to modernize the ATC system in the early 1980s. The program included the acquisition of new radars and automated data processing, navigation, and communications equipment. As of March 1996, FAA estimated that from 1982 through 2003, the total cost of this modernization program will be about $35 billion. Through fiscal year 1996, the Congress will have provided FAA with approximately $22 billion of the $35 billion. We have been involved in evaluating FAA’s acquisitions of major systems since FAA began its ATC modernization program. We have chronicled how FAA’s modernization program has experienced substantial cost overruns, lengthy schedule delays, and performance shortfalls. Our reviews have traditionally focused on the technical difficulties and managerial weaknesses that caused these problems. Until undertaking this review, we had examined the role of an underlying managerial factor—organizational culture—in acquisition management at other federal agencies but not at FAA. The most vivid example of FAA’s cost, schedule, and performance problems was FAA’s effort to replace existing display and computer systems in ATC facilities across the nation. The Advanced Automation System (AAS), the long-time centerpiece of the modernization program and the most costly project, was restructured in 1994 after costs tripled to an estimated $7.6 billion from the 1983 estimate of $2.5 billion and after the planned implementation of key components was up to 8 years behind the original 1983 schedule. The critical Initial Sector Suites System segment of the AAS project, intended to replace controllers’ existing work stations at en route centers and provide controllers with new hardware and software, including radar displays, was particularly troublesome. Before scaling back this segment, FAA was attempting to address several serious technical problems, such as (1) ensuring that 210 separate work stations would communicate in a stable network, (2) reducing the need to revise the software code (on average, every line of software needed to be rewritten once), and (3) converting a system for communicating flight information on printed paper strips to an electronic system. Unplanned cost increases have characterized many other FAA acquisitions. Per-unit costs increased substantially for eight of the nine key projects that we have tracked in our annual status reports on the ATC modernization program. (Table 1.1 shows the percentage change in unit costs for the nine projects.) Since beginning the ATC modernization program in the early 1980s, FAA has completed smaller projects, but efforts to develop and implement most major acquisitions—such as replacing automated systems and communications equipment—have suffered extensive delays. As of March 1996, 74 projects totaling $5.1 billion—only about 15 percent of the modernization program’s overall cost—were completed, and 147 projects remained active.
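The percentage changes referenced in table 1.1 are presumably simple percent-increase computations over original estimates. The small sketch below is the author’s illustration, not GAO’s code, using cost figures that appear in this chapter; modest differences from the report’s published percentages (for example, 511 percent for VSCS) reflect rounding in the narrative’s dollar figures.

```python
# Author's illustration of the percent-increase arithmetic behind the
# unit-cost growth figures cited in this chapter (not GAO's code).

def percent_increase(original, current):
    return (current - original) / original * 100

# AAS: estimated total cost rose from $2.5 billion to $7.6 billion.
print(round(percent_increase(2.5, 7.6)))    # 204, i.e., costs roughly tripled
# VSCS per-unit cost: "about $10 million" to "about $63 million".
print(round(percent_increase(10.0, 63.0)))  # 530; the report cites 511 percent,
                                            # computed from unrounded estimates
```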
For the nine acquisitions cited above, delays have averaged almost 5 years per project from original estimates. (Table 1.2 shows the schedule delays experienced by these nine projects.) Performance shortfalls have also affected many projects and have caused rework, redesign, and even cancellation of projects. The following three key projects that we have reviewed are examples. The Automated Surface Observing System (ASOS) is designed to (1) measure wind speed, temperature, cloud height, visibility, and the types and amounts of precipitation near airport runways and (2) send computer-generated information to pilots. Although FAA had procured more than 350 ASOS units by May 1995, few had been commissioned by the end of that year because of technical difficulties. For example, we reported in April 1995 that six of the eight sensors in the system did not meet key contract specifications for accuracy or performance. Furthermore, the system’s overall reliability during testing was only about one-half or less of the required levels. The Air Route Surveillance Radar-4 (ARSR-4) is designed to track aircraft and weather. Persistent technical problems—most recently, difficulties in developing software and integrating this radar with other ATC systems—have delayed its implementation for years. The Mode Select (Mode S) radar is designed to (1) identify, locate, and track aircraft by using radar signals to obtain information from up to 700 individual aircraft at a time and (2) provide users with a communications channel between aircraft and ground facilities. Although FAA awarded a production contract in 1984, technical difficulties prevented FAA from fielding a full-performance radar until this past year. Our work over the years has pointed to technical difficulties and weaknesses in FAA’s management of the acquisition process as primary causes for FAA’s recurring cost, schedule, and performance problems. In terms of technical difficulties, FAA has underestimated the complexity of developing systems, especially highly ambitious ones that involved extensive software development, such as AAS. FAA’s difficulties in developing software have caused cost overruns and schedule delays for numerous acquisitions of major systems. We have also reported recurring weaknesses in FAA’s management of the acquisition process. FAA did not historically manage its acquisitions of major systems in accordance with the business-like principles embodied in Office of Management and Budget Circular A-109 and FAA’s own acquisition policies. For example, FAA did not analyze its mission needs and performed flawed or limited analyses of alternative approaches for achieving those needs. FAA also did not perform realistic testing before proceeding into full production of systems and found out later that the systems did not meet the agency’s specifications. Other managerial weaknesses include inadequate oversight of contractors’ performance, difficulties in resolving issues related to requirements for FAA’s various systems, and problems with securing sites to install equipment. Organizational culture is one managerial factor we have examined in reviews of acquisition management at other federal agencies but not at FAA. We have defined organizational culture as the underlying assumptions, beliefs, values, attitudes, and expectations shared by an organization’s members that affect their behavior and the behavior of the organization as a whole. 
In our 1992 report on the acquisition of weapon systems at the Department of Defense, we found that the Department’s organizational culture contributed to cost increases, schedule delays, and performance shortfalls. In our view, individuals acted in response to incentives related to their careers, jobs, program support, organizational influence, and budget levels. Collectively, these incentives created an environment that encouraged “selling” and starting new programs and pushing existing programs ahead despite development, production, and implementation problems. In our 1992 report on the Department of Energy, we concluded that the Department’s contract management problems would require a change in its business philosophy and that its efforts to instill a new organizational culture were an acknowledgment of the systemic nature of the problems. Similarly, we reported that the National Aeronautics and Space Administration would have to change its organizational culture in order for its contract management improvement efforts to succeed. Since then, our preliminary work in evaluating the implementation of the Government Performance and Results Act of 1993 has shown that effective implementation of this act will require fundamental changes in the culture of government management—changing management’s focus from what federal employees are doing to what they are accomplishing. Organizational theory and behavioral science describe an interdependent relationship between employees’ beliefs, values, and attitudes and their individual and collective behaviors. Moreover, these beliefs, values, attitudes, and behaviors do not operate in a vacuum but are affected by forces both within and outside of an organization. Internal forces include policies and procedures, an organization’s structure and incentive systems, and leadership exercised by top management. External forces include the needs of customers and, in the case of government agencies, congressional committees and Members of Congress. Organizational theory and research show that an organization’s culture is more constructive when employees’ underlying values, attitudes, and beliefs cause individuals and the organization as a whole to behave more often in ways that have desirable results—both for the organization and its customers. Employees in these organizations demonstrate a stronger commitment in the following four areas: mission focus, accountability, coordination, and adaptability. Mission focus refers to the employees’ pursuit of goals that define the best course of action for an organization. An agency’s mission provides the agency with purpose and meaning and promotes short- and long-term commitment by its employees. In a more constructive culture, employees are more likely to think ahead and plan, emphasize quality over quantity, and subordinate their own needs to the agency’s overall mission. Accountability refers to the value an organization places on involvement, participation, and ownership among its members. A greater sense of commitment to the organization fosters the employees’ willingness to be held accountable for decisions and actions. In a more constructive culture, employees are more likely to take responsibility and work to achieve self-set goals, give positive rewards to others, and help others to think for themselves. Coordination refers to the consistency of behavior and the sharing of beliefs and values by individuals and groups within an organization.
Such consistency facilitates the exchange of information and fosters coordinated efforts. In a more constructive culture, employees are more likely to involve others in decisions affecting them, openly share information, resolve differences collaboratively, cooperate with others in the organization, and pursue common purposes. Adaptability refers to the employees’ capacity to respond positively to changing demands and opportunities posed from within and outside the organization. Adaptability enables an organization to adopt new behaviors and processes (e.g., in response to emerging technologies and the changing needs of its customers). In a more constructive culture, employees are more likely to resist conformity, think in unique and independent ways, explore alternatives before acting, learn from mistakes, and be receptive to change. When an organization and its employees demonstrate a strong, balanced commitment in these four areas, research shows that the employees are more likely to be satisfied and the organization will perform better. Conversely, an organization is less effective when its employees, both individually and collectively, are less focused on the agency’s overall goals, are held less accountable, coordinate their actions less effectively, and are more resistant to change. In light of FAA’s persistent acquisition problems and our work at other federal agencies that highlighted a need to change organizational culture, the Chairman, Subcommittee on Transportation and Related Agencies, House Committee on Appropriations, asked us to examine FAA’s management of its ATC modernization program to (1) determine whether FAA’s organizational culture has contributed to the agency’s continuing cost, schedule, and technical problems and (2) identify steps that FAA could take to improve its acquisition management through changing its organizational culture if that is a contributing factor. To accomplish the first objective, we reviewed reports focused specifically on FAA’s acquisitions as well as selected studies and research on organizational culture. We drew upon analyses of FAA by other organizations and analyzed the results of FAA employee surveys. We also discussed these analyses with employees involved in acquisitions, including members of the integrated product teams, and with other FAA acquisition stakeholders. We used common theories of research on organizational culture to link the behaviors of FAA employees to long-standing problems in its acquisition process. To document these problems, we reviewed our past reports and testimonies on FAA’s acquisition of ATC systems, concentrating on those issued since FAA announced its ATC modernization program in 1981. A detailed description of key studies and employee surveys is provided in appendix I. The body of research on organizational culture is extensive. Theories describe the behaviors and problems promoted by different types of organizational cultures and the elements that are essential for organizational performance and effectiveness. We primarily used the organizational research results of Dr. Daniel R. Denison, professor at the University of Michigan’s School of Business Administration; Dr. Robert A. Cooke, consultant for Human Synergistics/Center for Applied Research, Inc.; and Dr. Joseph Coffee, Director of National Education Programs at the Department of Treasury’s Bureau of Alcohol, Tobacco, and Firearms.
These studies were particularly useful because they provided us with a framework for assessing FAA’s culture. A description of these studies is presented in appendix II. To achieve the second objective, we reviewed a wide range of ways to manage organizational change, including approaches for reengineering and applying best practices, that were explained in our past reports and testimonies and approaches promoted by (1) private consulting firms, such as Ernst & Young and Coopers & Lybrand; (2) individual researchers and writers on organizational culture; and (3) national management organizations, such as the Federal Quality Institute (FQI), the National Academy of Public Administration, the Defense Department’s Systems Management College, the Association for Quality and Participation, the American Society for Quality Control, and the National Performance Review. We then developed a strategy for successful cultural change by synthesizing common components of major studies and asked a variety of individuals involved in federal management issues and research on organizational culture and theory to review and comment on our strategy. We then compared FAA’s reform effort for changing its organizational culture with our strategy for managing organizational change. Specifically, we reviewed FAA’s effort to determine if it contains the components that are essential for successful change. A list of the individuals we contacted is provided in appendix III. We conducted audit work from August 1995 through June 1996 in accordance with generally accepted government auditing standards. FAA’s organizational culture has been an underlying cause of the persistent cost overruns, schedule delays, and performance shortfalls in the agency’s acquisitions of major ATC systems. Weaknesses in ATC acquisitions stem from recurring shortcomings in the agency’s mission focus, accountability, internal coordination, and adaptability. Multiple forces within an organization—such as its policies, processes, structure, incentive systems, and leadership exercised by top management—affect employees’ beliefs, values, attitudes, and behaviors. Each section in this chapter cites various studies and the results of FAA’s employee surveys to illustrate the effects of these internal forces. While the complexity of the interrelationships among these internal forces as well as their interdependence with external forces allows for a variety of interpretations, our analysis reflects what we found to be common themes in the information sources available to us. Ultimately, the goal of any acquisition program is to acquire only essential equipment and field it within agreed-to cost, schedule, and performance parameters. In organizations with more constructive cultures, employees are more customer-focused and more actively pursue goals that define the best course of action for the organization. The effectiveness of FAA’s management of the acquisition process was reduced by employees in the various divisions who did not focus on the agency’s mission to acquire ATC equipment or consider the long-term, agencywide effects of their decisions and actions. Program officials took such actions as establishing unrealistic cost and schedule estimates and rushing acquisitions prematurely into the production phase. Studies and surveys indicate that these actions were driven by organizational incentives that did not support a focus on FAA’s mission. 
In reviewing problematic acquisitions, we found that FAA officials acted in ways that did not reflect a strong commitment to the agency’s acquisition mission. Over the years, program officials did not perform mission needs analyses; set unrealistic program cost and schedule estimates; suppressed bad news; and began system production before completing development, testing, and evaluation. Although enabling projects to get started and proceed with minimal interruption, these actions did not foster the agency’s mission of undertaking only essential ATC acquisitions and completing them within budget and on schedule. Program officials pushed ahead with acquisitions without demonstrating the importance of those acquisitions to achieving the agency’s mission. In 1993, we reported that many of the mission need statements we examined—documents that identified the need for FAA to invest an additional $5 billion to fix deficiencies in the ATC system—were not based on the results of any documented mission analysis. Despite the lack of substantial support for these acquisitions, FAA’s top management approved the statements. As a result, FAA has acquired systems that do not meet the agency’s needs. For example, as we noted in that report, FAA spent $46 million on the Real Time Weather Processor to provide controllers with current, accurate weather information. However, the new equipment operated as much as six times more slowly than the existing system, and in 1991, FAA suspended this program indefinitely and began to redefine controllers’ needs. Program officials established unrealistic schedule estimates. The result was “unexpected” schedule delays. For example, according to our 1989 report on the Voice Switching and Control System (VSCS) project, FAA’s project schedule was more optimistic than that of the system engineering and integration contractor who was hired to provide technical and programmatic support to FAA in managing the modernization program. FAA officials explained that they preferred their schedule over the contractor’s, whose “safe” dates did not require as much effort to meet. The contractor, however, said that FAA’s schedule was unrealistic because it did not allow any extra time to absorb unanticipated difficulties. By 1991, FAA’s estimated date to implement the VSCS at the first site had slipped from May 1992 to June 1994. Program officials also established unrealistic cost estimates. The total estimated cost of the AAS project tripled from the original estimate of $2.5 billion to $7.6 billion. On a per-unit basis, the estimated cost of the VSCS project increased from the original estimate of about $10 million to about $63 million, an increase of 511 percent; the estimated cost of the Integrated Terminal Weather System project increased from about $3 million to almost $7 million, an increase of 129 percent. The magnitude of these increases indicates that FAA managers were not being realistic in estimating the costs of various ATC systems. Program officials have suppressed bad news. For example, officials managing the ARSR-4 project reported in 1989 that the first implementation of this radar would occur in September 1992. Since 1989, despite their consistent indications that the radar was almost operational, they reported delays in 5 of the following 6 years. In 1995, program officials said the radar system would be up and running in September 1995; however, the first ARSR-4 radar was operational in April 1996.
In recent years, the reasons for schedule slippages cited by program officials included software errors that surfaced while integrating software with hardware, production delays, problems with preparing sites, and integration problems between ARSR-4 radars and other ATC systems. While a certain level of technical problems in implementing a complex radar system like ARSR-4 is normal, the consistent pattern of reporting that this system was almost ready, followed by annual schedule delays, indicates that program officials were not disclosing the full extent of difficulties they encountered. FAA officials have rushed into production of ATC systems. Over the years, cost, schedule, and performance problems have resulted from excessive concurrency—beginning system production before completing development, testing, or evaluation programs. FAA has proceeded with producing numerous systems, including the Microwave Landing System (MLS), Mode S radar, and Oceanic Display and Planning System (ODAPS), before their critical performance requirements had been met. The decision to proceed into the production phase of these projects proved to be a mistake. After years of delays, the MLS contractors did not meet established performance requirements. As of May 1995, the ODAPS contractor had not met a key operational requirement—11 years after the contract was awarded. Although FAA awarded a production contract for Mode S radar in 1984, the agency implemented its first full-performance Mode S radar in February 1995. Employees at all levels have described FAA’s shortcomings in mission focus. Furthermore, internal and external reviews of FAA’s ATC acquisitions show that incentives in its acquisition process did not promote management decisions and program outcomes that reflected this mission focus. According to the current FAA Administrator and his Deputy, “the FAA needs long-haul piloting, but it’s been getting short-hop management.” Similarly, an analysis of responses to a 1993 FAA survey of acquisition employees concluded that they believed they must devote considerable energy to organizational survival instead of using that energy to be proactive and focused on accomplishing the agency’s mission. In a 1995 survey, these employees continued to indicate their focus on survival, rather than mission accomplishment, in responses such as the following: A majority of the respondents (62 percent) agreed that employees are often hesitant to say what they really think for fear of retaliation. More than half (53 percent) disagreed that management supports employees who raise difficult or controversial issues in open meetings. Half disagreed that management helped employees stay focused on what really matters. Nearly half (45 percent) disagreed that pointing out when promised deadlines or deliverables are not realistic would not be held against them. In discussions with FAA employees and in reviewing studies and reports on its acquisition process, we found further evidence of a link between FAA’s insufficient mission focus and the agency’s incentives. For example, the Associate Administrator for Research and Acquisitions described a “grow-your-own” development process at FAA. He said that a group of programs has emerged that does not reflect a unified approach to achieving the acquisition mission because program managers are rewarded for starting individual programs and getting them to advance, regardless of the long-term consequences.
A 1995 internal FAA study on the use of support services contracts revealed incentives for focusing on short-term results. The study noted that (1) funding for program officials to pursue new projects appeared to be given a higher priority than funding for users to install purchased equipment and (2) a backlog in the installation and implementation of field equipment had risen to the equivalent of an estimated 1,300 staff years. According to this study, new equipment would likely continue to be backlogged and stored in warehouses unless the agency’s Airway Facilities division received increased resources for installation. In our view, this allocation of resources reflects a short-term emphasis on beginning new programs without considering the long-term implications for existing systems. A 1994 report on the AAS program by the Center for Naval Analyses (CNA) discussed organizational incentives that did not promote a strategic focus on FAA’s mission. According to CNA, FAA’s culture discouraged program officials from reporting news of cost increases, schedule delays, and performance problems with the AAS project. This suppression of bad news prevented top management from taking early action. Similarly, in a 1993 internal study of its process to determine system requirements, an FAA team reported that the agency did not reward employees for how well they met customers’ needs; instead, job standards reflected how a process was performed without regard to the effect on the agency’s overall performance or budget. In our 1992 review of the Defense Department’s management of acquisitions of major weapon systems, we found that the Department’s organizational culture allowed the needs of the participants in the acquisition process to create incentives for pushing programs and encouraging undue optimism, parochialism, and other compromises of good judgment. Consequently, problems persisted not because they were overlooked or underregulated but because they enabled more programs to survive and thus more participants’ needs to be met. For example, because the success of program managers depended on getting results (e.g., meeting the next major milestone), their strongest motivation was to keep the programs moving and to protect them from interruption. It is easy to understand why participants in the federal acquisition process, including FAA officials, are driven by these incentives. By analyzing mission needs, they risk raising questions about the need for “their” projects. By establishing realistic cost estimates, they may endanger the approval of near-term funding. By surfacing problems, they may expose their projects to heightened managerial and congressional oversight and risk criticism for their decisions and actions. By insisting on full testing before moving to production, they may delay a project’s schedule and cause it to receive reduced funding. Thus, employees are motivated to push ahead expeditiously with acquisitions. In organizations with more constructive cultures, employees feel more empowered and are more willing to be held accountable for decisions and actions.
In a January 1996 memorandum to the FAA Administrator, the Department of Transportation’s Inspector General described an “environment for abuse” at FAA caused by the lack of accountability that reflected “a mind set within FAA that managers are not held accountable for decisions that reflect poor judgment.” We found that FAA’s acquisitions were impaired when officials were not held accountable for making decisions on system requirements and for exercising proper oversight of contracts. Both problems were commonly cited as reasons for the drastic restructuring of the AAS program. Because responsibility was diffused among many stakeholders in the acquisition process, establishing accountability for management decisions and actions was difficult. FAA’s multiple layers of management in its hierarchical structure have contributed to diffused responsibility and weak accountability. FAA program officials have not been held accountable for making and sustaining decisions on requirements for acquisitions of major systems. In 1993, an FAA internal review team reported that FAA’s process for making and documenting decisions on requirements lacked discipline and accountability: “No one person or organization has accountability for meeting mission requirements in a cost-effective manner.” As a result of this weak accountability, multiple changes in systems’ requirements have increased costs and delayed program schedules. For example, the program manager for the ARSR-4 project said that the schedule for making the first radar operational, planned for February 29, 1996, was delayed by the addition of two new requirements that necessitated more operational testing. These requirements were added within a day of putting the first radar into operation. “The systemic cultural problems of the FAA of diffusing responsibility plus an inability to hold firm on requirements has resulted in cost growth and schedule slips in the AAS program.” FAA’s difficulties in resolving requirements continued after the restructuring of its AAS project. The Department of Transportation’s Office of the Inspector General reported in October 1995 that FAA negotiated the contract for the Display System Replacement without including all known requirements in its specification document that was used as the basis for the negotiation. “. . . effectively paralyzed as a result of a succession of changes in operational specifications imposed from within the FAA’s Air Traffic Service . . . Ironically, most of the modifications were not associated with issues of increasing safety. . . . Some requirement changes went against the basic objective of the AMASS program.” If FAA officials had been held accountable for weighing the costs and benefits of requirement changes proposed by different stakeholders and limiting additions to the system’s performance requirements, the system might have been implemented in time to prevent this accident. FAA has identified contract administration as a material weakness in its acquisitions of major systems. The agency reported that senior management had not adequately focused on problems occurring when significant changes were made after a contract’s award and cited long delays between a problem’s recognition and correction. FAA concluded that because accountability for contract administration was not well-defined or enforced, program officials were not encouraged to exercise strong oversight of contractors. 
Over the years, poor oversight of contractors has caused acquisition problems in such projects as ODAPS, Mode S, and AAS. In 1990, we reported that FAA’s management actions to address development problems with Mode S were ineffective. We concluded that internal controls in the Mode S project were not adequate to ensure that appropriate action was taken when contract problems arose. At that time, the delivery of the first system had been delayed by 5 years. In 1992, we reported that program officials managing the ODAPS program were slow to address serious development problems with the system and failed to plan essential activities to ensure the program’s success. At that time, the system was 3 years behind schedule and had no projected completion date. In 1993, we reported that FAA’s inadequate oversight of the contractor responsible for developing AAS software was a major cause of the system’s cost increases and schedule delays. An FAA-contracted review of the AAS project reached similar conclusions in April 1994. CNA reported that FAA managers did not enforce such normal contract management procedures as continually monitoring expenditures, milestones, and deliverables. Past reviews of FAA and responses from employee surveys reflect an environment of control fostered by the agency’s hierarchical structure. In this environment, employees are not empowered to make needed management decisions. This lack of empowerment decreases their sense of ownership and responsibility, which in turn makes them more reluctant to be held accountable for their decisions and actions. In 1991, the NRC described FAA’s culture as a rigid hierarchy in which “upward communication is weak and personnel are expected to do what they are told without challenge.” These sentiments were echoed in a 1993 FAA employee survey in which a large percentage of employees involved in acquisitions responded that decisions were not being made at the most appropriate level and that they had problems with approvals they perceived to be unnecessary. Fewer than half reported that they had enough authority to make day-to-day decisions about work problems. Results from the 1995 survey of these employees also showed a relationship between hierarchy, empowerment, and accountability. First, they identified the hierarchical structure as a concern. Most respondents (80 percent) reported that four or more layers of management review were between them and the head of their organization. More than half (52 percent) disagreed that any employee could easily access the head of their organization directly. Responses to this survey also showed they perceived a lack of empowerment and access to needed information. More than half (54 percent) of the respondents disagreed that employees knew that management listens because things changed as a result of their input. More than half (52 percent) disagreed that needed information flowed up and down freely in the acquisition organization. These difficulties with hierarchy and empowerment were also reflected in their attitudes regarding accountability. Nearly half (45 percent) disagreed that people who repeat mistakes are held accountable for their poor judgment; only a fifth (21 percent) agreed with the statement, and the remainder (34 percent) were unsure. A significant portion of the respondents (42 percent) agreed that it is difficult to hold individuals accountable because the way things are structured diffuses responsibility; a third disagreed; and the remainder (26 percent) were unsure.
We have identified the need to change outdated hierarchical structures throughout the federal government. As we reported in March 1993, the centralized bureaucracies of the federal government—with their reliance on control through rules, regulations, and hierarchical chains of command designed in the 1930s and 1940s—simply do not function well in the rapidly changing society and economy of the 1990s, which are technology-driven and knowledge-intensive. We have also identified the need for broad changes to improve federal management by establishing accountability for achieving program results and emphasizing a long-term focus. In organizations with more constructive cultures, employees are more likely to involve others in decisions affecting them, openly share information, and resolve differences collaboratively. In FAA, ineffective coordination has caused the agency to acquire systems that cost more than anticipated and took longer to implement. One major factor deterring employees from working together is FAA’s organization of key players in the acquisition process into different divisions whose stovepipes or upward lines of authority and communications are separate and distinct. Poor coordination between FAA’s program offices and field organizations has caused schedule delays. Although coordination between program offices and field organizations is necessary to ensure that sites suitable for installing ATC systems are acquired and prepared, installations of the Terminal Doppler Weather Radar (TDWR), the Airport Surveillance Radar (ASR-9), and the Airport Surface Detection Equipment (ASDE-3) have all been delayed because of problems with putting these systems in the field. For example, as of March 1996, the implementation of the final 10 ASR-9 radars was being delayed because planned sites were not ready. Similarly, we reported in 1995 that FAA had to postpone TDWR’s implementation at 11 locations because of the unavailability of sites and land acquisition problems. FAA’s installation of ASDE-3 was also delayed. The system, as designed, was too heavy for many of the existing ATC towers where it was to be installed. In four of five regions, the initial implementation plans were not detailed enough for those regions to know where the towers should be located or how to construct them in time to meet the original schedule. AAS is an example of how poor coordination between developers and users of systems impaired an acquisition. In 1992, about 4 years after awarding the AAS contract, FAA announced that it would incur an additional $150 million in costs for design changes for the system’s tower component because the original design did not give controllers enough room to move around or visibility in the tower cab. If controllers and developers had collaborated to resolve these concerns during the original design phase, the additional expense to modify an awarded contract may have been avoided. In 1993, recognizing the agency’s difficulties in resolving requirements for AAS, FAA designated three top officials from the program office and its Air Traffic and Airway Facilities divisions to make final decisions on requirements. However, this group was unable to resolve important requirements for the system’s continuous operations. Recent work by the Department of Transportation’s Office of the Inspector General found that FAA officials planned to restructure the AAS contract before senior management and users of the system agreed on what was needed. 
A major factor limiting coordination among stakeholders in FAA’s acquisitions of major systems has been its organizational structure. Internal and external observers of FAA generally agree that organizational stovepipes have reduced coordination, increased systems’ costs, and delayed their implementation. FAA’s senior management has identified the agency’s current organizational structure as a problem that impairs ATC acquisitions. In May 1995, the FAA Administrator characterized the problem as a “hierarchical, stovepipe approach that in the past has often resulted in costly inefficiencies and a failure to deliver products in time to meet customer needs.” Similarly, in a December 1995 agency newsletter, FAA’s Deputy Administrator cited “the bureaucratic structures that have hampered the full utilization of the talent and energy that reside in FAA employees.” Earlier, in April 1994, the Assistant Administrator for Information Technology had recognized the effect of these stovepipes and the need to “change our ways of thinking—change our individual and corporate culture and change some of our traditional business practices.” Among the reviews describing the negative effect of FAA’s organizational structure on internal coordination during the acquisition process was a 1994 report by the Office of Technology Assessment (OTA) on aviation research. OTA noted that differences in the organizational culture among FAA’s air traffic controllers, equipment technicians, engineers, and divisional managers made communication difficult and limited coordination. Implementing these systems was often delayed because of a tendency for one stakeholder to establish technical requirements without adequately consulting those stakeholders responsible for developing the operational procedures that the systems were designed to support. According to OTA, when system operators were not consulted early in the development process, operational problems remained undetected until after a prototype of the system was developed and tested and procurement was imminent or underway. Employees involved in acquisitions have also described deficiencies in coordination and cooperation. A March 1992 survey of FAA’s research and acquisition staff found that its researchers did not focus adequately on what end-users, such as controllers, need or on how the technology would be deployed and maintained. FAA’s 1993 study of its process to establish requirements found that the agency’s operations and development sides have not formed a partnership to articulate requirements and devise a range of alternatives to meet them rapidly and cost-effectively. This study reported that the end customer is insufficiently involved in establishing system requirements. As a result, the study concluded that FAA functioned as a classically stovepiped organization in which operators and developers only came together at the Administrator’s level. Therefore, disputes regarding system requirements have been forced to a very high level before they can be resolved. More recently, results of FAA’s 1995 survey of acquisition employees showed that the agency has been making progress in promoting cooperation as an organizational value because nearly two-thirds (65 percent) of the respondents agreed that everyone is expected to coordinate with others who have a stake in the outcome of their work. The survey responses, however, indicate the need for FAA to enhance cooperation. 
More than half (53 percent) disagreed that employees value team achievement more than individual achievement. More than half (58 percent) disagreed that most tasks are assigned to teams rather than to individuals. In organizations with more constructive cultures, employees are more receptive to change and respond more positively to demands and opportunities posed within and outside that organization. FAA’s acquisitions of major ATC systems have been impaired because its employees resisted making needed changes in the agency’s approach to both specific acquisitions and its acquisition process as a whole. As a result, FAA has been less able to respond to changes in its internal and external environments. Institutional incentives that foster the status quo and high levels of management turnover are two factors hindering FAA’s adaptability. FAA’s reluctance to apply federal principles for acquisitions of major systems illustrates how the agency has resisted changing its acquisition process. For the first 10 years of its modernization program, FAA did not follow government acquisition policy and principles established by the Office of Management and Budget’s Circular A-109. These principles included analyzing mission needs, considering a full range of alternatives to meet them, and testing new systems operationally before committing to full production. In 1987, we recommended that FAA comply with these principles as a step toward alleviating the cost and schedule problems that had characterized the acquisition process since 1981. In 1991, FAA finally issued a revised order on major acquisitions that better reflected the phases and key decision points of Circular A-109. The results of an August 1995 internal FAA report summarizing management problems with AAS indicated that the 1991 order was not sufficient to overcome the agency’s resistance to changing its acquisition process. On the basis of findings from studies, the majority of which occurred after 1992, FAA’s report concluded that management actions concerning the AAS program “deliberately circumvented” the A-109 process. The MLS was one acquisition in which FAA officials resisted change despite powerful reasons to reconsider their decision. In the 1970s, because of limitations in its instrument landing system (ILS) and the expected large growth in air traffic operations, FAA decided to replace that system with the MLS. Despite pressure from such user groups as the airlines and general aviation, evidence that the ILS had been improved, lower-than-expected growth in air traffic, and the emergence of satellite-based navigation technology, FAA resisted changing its decision to acquire this system until 1993. The agency eventually terminated the MLS project in 1994 because the Global Positioning System (GPS), when enhanced, was expected to support all types of aircraft approaches. FAA’s attempt to implement cross-functional matrix teams responsible for acquisitions of major systems is an example of a new process that was undermined by management’s resistance to change. FAA began to implement cross-functional teams in 1990 with the creation of matrix teams, which consisted of staff and resources from various FAA functional divisions working together to develop and implement a project or group of projects. By assigning experts from each functional specialty to a project team, FAA hoped to improve coordination and communication.
Although managers of each functional division represented in the matrix teams formally agreed to support them, by March 1992, employee survey results indicated that senior managers’ commitment to this concept was weakening and they continued to foster a “stovepipe” approach. The effects of FAA’s resistance to change on the agency’s ability to respond to external changes in technology and growth in aviation traffic have been cited by several sources. The Aircraft Owners and Pilots Association predicted in 1990 that the United States would have the technology to implement GPS by 1995 but expressed concern that FAA’s bureaucracy would slow this system’s implementation. The National Research Council concluded in its 1991 report that “FAA has not demonstrated the capacity to anticipate or respond to rapid changes in technology or the industry which it serves.” According to the Council, FAA’s failure to anticipate changes in the aviation industry resulting from deregulation caused delays in responding to the demands posed by increased air traffic. These delays engendered concerns about air safety and service. The 1994 Air Traffic Control Corporation Study found that FAA has been struggling to keep up with rapidly evolving technology, such as the use of GPS satellites for navigation purposes, despite its potential to improve safety substantially and reduce the cost of aircraft operations. The study’s executive oversight committee, consisting of the FAA Administrator, his Deputy, and other high-ranking aviation industry officials, concluded that “FAA is the weak link in the technological revolution.” The link between FAA’s organizational resistance to change and its organizational incentives has been cited by various sources within and outside the agency. For example, the Secretary of Transportation stated in July 1994 that “We need to change the whole culture of the ATC system to permit flexibility, ingenuity, and efficiency to come to the fore.” In May 1994, the executive oversight committee for the Air Traffic Control Corporation Study described FAA’s culture as one that “emphasizes conservatism and conformity, and lacks innovation.” The committee concluded that at FAA, “people are not used effectively in an acquisition system that discourages innovation and rewards them for following rules.” Most respondents to FAA’s 1993 survey of employees involved in acquisitions were skeptical that FAA would take advantage of opportunities to change. According to the results from FAA’s May 1995 survey, half of the respondents disagreed that management is open and responsive to change; only about a fifth agreed that “management takes an active role in promoting innovative ideas proposed by employees” (21 percent) or that employees are given “soft landings” when innovations result in failure (20 percent). FAA’s 1993 report on its process to determine requirements, which was based on interviews of managers, noted that organizational incentives promoted the status quo.
One manager observed that FAA employees are not innovative because they are “beat over the head for identifying problems rather than rewarded for finding something that needs fixing.” Another manager noted that employees were not innovative because if “there’s a failure, the FAA puts in another rule.” Similarly, in 1991, the National Research Council’s report described FAA’s culture as one that is “resistant to innovation or rapid change and more disposed to avoiding criticism.” The report concluded that in order to change its culture, FAA must change its incentive system “from a bureaucratic one which rewards those who ‘don’t make waves’ to one which encourages creative and innovative behavior.” We have expressed concerns over the years about the instability and uncertainty caused by the frequent turnover of FAA Administrators and observed that greater stability within the agency’s top leadership would enable FAA to effectively initiate and sustain corrective actions. Since its modernization program began, the average tenure for the Administrator or Acting Administrator has been less than 18 months. FAA has also experienced a high turnover rate for its most senior acquisition executive, who is charged with overseeing acquisition policy and program execution. Since 1990, five people have held that position. The frequent turnover of FAA’s Administrators has led them to focus on the short term and to defer making tough decisions. As we reported in March 1993, the frequent turnover of FAA’s Administrators contributed to the delay in reaching a decision on the extent to which air traffic facilities would be consolidated for the AAS project. This delay, in turn, contributed to schedule and cost problems and created uncertainty over the future of the project. CNA noted in its April 1994 report on the AAS project that the system’s design had never been changed from the original design, which was based on a consolidation plan that had been, for all practical purposes, previously abandoned. As a result, unneeded requirements were carried forward at high cost and technological complexity. This frequent management turnover has also led employees to believe that new initiatives will be short-lived. According to the 1991 National Research Council report, the short tenure of FAA Administrators has been a problem because it has created a resistance on the part of the bureaucracy to respond to new directions. Because FAA employees have believed that an Administrator is not likely to stay in office long enough to see new initiatives implemented, they have felt that those initiatives would likely be thwarted by bureaucratic inertia. Cultural change is a complex and time-consuming undertaking. Recognizing the need to improve its management of acquisitions through cultural change, FAA has developed and begun implementing a reform effort. Much work remains, however, before substantial cultural change is fully incorporated and can be sustained. A particular concern is the difficulty of gaining the strong commitment of all stakeholders throughout the agency. As currently designed, FAA’s reform effort does little to identify ways for obtaining this commitment. According to organizational theory and research, cultural change is a complex and time-consuming undertaking. Employees’ values, attitudes, and beliefs are affected by a wide range of internal and external forces. Dr.
Joseph Coffee, who has studied cultural change in federal agencies, concluded that there is a direct relationship between the size of an organization and the number of variables that tend to maintain the status quo and, thus, have to be manipulated to bring about desired changes. Cultural change efforts typically take 5 or more years to fully implement. Through our management reviews of major federal departments and agencies over the past decade, we have identified diffused accountability and incentives that encourage short-term responses to long-term problems as fundamental challenges to improving an agency’s management. Moreover, the lack of coordination promoted by functionally organized divisional structures and institutional resistance to change are weaknesses commonly attributed to the bureaucratic structure that typifies many federal organizations. Dr. Coffee’s research found that federal executives have often focused on reorganizing and initiating new work processes, while paying little attention to culture, as ways to effect change. Many governmental efforts to promote change have emphasized that people should work more effectively across organizational lines. Organizations attempting to encourage more risk-taking and empowerment of lower-level employees while reducing the hierarchy and the number of rules have found their progress frustratingly slow. Dr. Coffee’s study on cultural change in the federal government concluded that many efforts to promote change are not sufficiently comprehensive and do not address the many variables needed for success. For example, the study predicted that as cross-functional work groups are created, desired changes in behavior are less likely to be produced if traditional functional structures are maintained. When this occurs, the “stovepiping” effect continues, and the values, beliefs, and behaviors of the employees are more likely to remain aligned with their functional division. From Dr. Coffee’s and others’ research, we conclude that managing cultural change requires a different set of management techniques and greater management sophistication in planning and implementation. By integrating current theories of effective management improvement initiatives, such as business process reengineering and results-oriented management, with traditional strategic planning precepts, we developed a strategy based on common components for managing organizational change. By focusing on employees’ beliefs, values, and attitudes; their behaviors; and the organization’s formal and informal structures, incentives, and policies, an organization can apply this comprehensive strategy to change its culture. Included in this strategy are the following nine components:
- Assess the current situation to determine the root cause of problems.
- Communicate the need to address the root cause of problems.
- Develop and communicate a vision for the future.
- Identify the factors that will impede change.
- Neutralize impediments to change.
- Identify and teach the skills required to make the change successful.
- Develop performance indicators to measure the extent to which the organization has achieved change.
- Implement the strategy for change.
- Use performance data to improve efforts to promote change.
Appendix IV lists supporting actions that organizations could take to apply these nine components to change their culture. FAA’s primary reform effort for cultural change, the Integrated Product Development System (IPDS), began in November 1994.
We found that FAA has made some progress in implementing its cultural change effort. A key area of concern is FAA’s difficulty in gaining strong commitment to IPDS agencywide. As currently designed, this new system does little to address how FAA can gain this commitment. IPDS is at the core of FAA’s effort to improve its management of ATC acquisitions and its ability to provide modern and reliable ATC equipment. Although other initiatives underway elsewhere in the agency will probably affect its organizational culture, this system was designed explicitly to effect cultural change. A key component of the IPDS is the establishment of integrated product teams (IPT). These teams are designed to be cross-functional and responsible for research, development, and acquisition as well as for ensuring that new equipment is delivered, installed, and working properly. IPT members include systems and specialty engineers, logistics personnel, testing personnel, contract personnel, and lawyers as well as representatives from the organizations responsible for operating and maintaining the ATC equipment. In a complementary action, to mirror the structure of the IPTs, the divisions responsible for operating and maintaining ATC equipment have restructured their units that determine requirements. IPDS evolved from matrix management teams that FAA established in 1990 to promote cross-functional collaboration. Responsible for developing and implementing projects, matrix teams consisted of staff and resources from various FAA functional divisions. However, FAA’s management recognized that the matrix teams had continuing weaknesses, such as the lack of empowerment and accountability as well as the persistence of stovepiping. FAA managers developed and proposed IPDS to apply the successful parts of matrix teams while addressing their weaknesses. We found three reasons why this new system would likely prove more successful than the former matrix teams. First, the new system recognizes the need to change the acquisition culture. Second, IPDS incorporates many aspects of the model strategy we present in this report. For example, to equip IPT members with the skills required in the new environment, FAA developed a training program for the teams that includes training on working together effectively, collaborative decision-making, and conflict resolution. Similarly, to convey their commitment to cultural change, managers in FAA’s Research and Acquisitions division (ARA) piloted a rewards program that recognizes teams as well as individuals for behaviors that lead to desired outputs. Third, FAA developed guiding principles for its new system that address the agency’s deficiencies we identified in chapter 2. For instance, the IPDS emphasizes rewarding teamwork, communications, and innovation to address shortcomings in coordination and adaptability and emphasizes life-cycle management and team responsibility to address weaknesses in mission focus and accountability. FAA identifies the IPDS as an “implementing arm” of the new Acquisition Management System, which became effective on April 1, 1996. Provisions of the 1996 Department of Transportation Appropriations Act exempted FAA from most federal procurement and personnel laws and regulations. In response, FAA has announced its new acquisition management and human resource systems to implement provisions of the 1996 Act.
The Acquisition Management System consists of three elements: The life-cycle acquisition management system is intended to be a more comprehensive, disciplined approach to managing the entire acquisition life cycle, from the analysis of mission needs to the eventual disposal of products. The procurement system is intended to allow FAA managers to be innovative and creative in selecting vendors and managing contracts. The acquisition work force learning system is intended to increase the capability of ARA employees and align the motivations of individuals with FAA’s overall goals. The concept behind the life-cycle acquisition management system is to improve coordination and mission focus by strengthening the “front-end” of the acquisition process. Specifically, the operators and developers are expected to work together to analyze mission needs and alternatives before senior management makes capital investment decisions and assigns projects to IPTs. The acquisition work force learning system is being designed to improve mission focus and increase empowerment, coordination, and adaptability by strengthening the competencies of employees and developing an environment of continuous learning. The new learning system is linked to the agency’s new competency-based human resource system that the agency is developing in response to statutory exemptions from federal personnel laws and regulations. It is too early to identify results of the new Acquisition Management System. However, by June 1996, some 19 months after beginning its reform effort, only 1 of FAA’s 13 IPTs had obtained approval of its team plan, an action FAA considers to be essential to successfully implement the new teams. These plans are important because they outline the team members’ roles, empowerment boundaries, and team operating approaches and procedures. Feedback from FAA employees and internal FAA reports indicate FAA’s difficulty in gaining commitment to the new system. Evidence of this problem was cited in a September 1995 internal FAA report summarizing the views of 50 senior and midlevel managers and technical employees who were interviewed about programs and functions affected by the formation of IPTs. According to FAA’s report, while support for the new system at the leadership level of the ARA and Air Traffic divisions appeared strong, interviewees expressed concerns over commitment of staff at the working level. Several respondents concluded that the Flight Standards and Airports divisions had not bought into the process. Our interviews with a cross section of oceanic IPT members revealed that FAA’s weaknesses in mission focus, accountability, coordination, and adaptability continue to undermine the IPDS initiative to effect organizational change. For example, comments suggested that some team members have remained motivated primarily by their functional division’s values and attitudes to the detriment of the team’s ability to focus on the agency mission of ATC acquisitions. Also, because some team members have not been empowered by midlevel managers who attempt to circumvent the team’s decision-making process, they continue to elevate disputes through the traditional stovepiped hierarchies. 
The internal “lessons learned” paper by the oceanic IPT concluded that lack of commitment exists because:
- doubts remain over whether empowerment had changed or would change;
- not all team members want the responsibility of empowerment, and some do not act accountably;
- empowerment supported by top management has been hampered by functional managers’ resistance;
- collocation is not supported by many functional managers;
- working as a team in a cross-functional manner is difficult for staff to adjust to;
- some functional managers will not conduct business within the new process; and
- staff who do not understand the new integrated product development system concept have to be worked around or through.
Dr. Coffee’s research indicates that targeting a small segment of an organization is less likely to effect substantial change because the existing culture continues to shape the beliefs, values, and behaviors of the majority of the organization. If change is to occur, the different stakeholders have to be integrated into the effort to change so they come to value and support a different vision of their organization. The study concludes that when senior managers throughout the organization are supportive and involved in its efforts to change, the probabilities of sustaining change increase substantially. Implementation of the IPDS included a formal memorandum of support signed by senior management from the various stakeholder divisions in April 1995. The memorandum states generic roles in the acquisition process and the functional managers’ dedication to supporting the new IPTs. For example, ARA will “provide overall program oversight;” the Regulation and Certification division will “provide input to the IPTs on behalf of system users;” Air Traffic Services (ATS) will “initiate mission needs statements on behalf of system users;” and officials from the Airports division will “coordinate with ARA and ATS on functional requirements.” Of course, the memorandum, by itself, does not guarantee commitment. In implementing matrix teams, the predecessor of IPTs, FAA obtained the formal agreement of functional managers, who provided personnel to acquisition project teams, to support their staff in team roles. A March 1992 survey of about 600 research and acquisition staff found, however, that (1) managers had not fully empowered employees, (2) teams’ decisions had been second-guessed and/or overturned, (3) the commitment of senior managers to matrix teams was weakening, and (4) senior managers continued to work as individuals, thus fostering a stovepiped approach. FAA management has recognized the risk that stovepipes will impede change. According to a senior ARA official responsible for planning and implementing the IPTs, the implementation of IPDS has been slowed because the key stakeholder groups have different values and objectives. For example, as a member of the acquisition reform task force studying the issue of life-cycle and workforce competencies, this official found that each division has had different ideas of what characterizes a competent workforce for the life-cycle of an acquisition. In this official’s words: “The team-based performance philosophy of IPDS requires a culture and special organizational focus.... What matters is that the parochial motivations of functional organizations need to give way to true partnerships cutting across ‘stovepipes’ in an integrative manner. The FAA IPDS model accomplishes this objective from a structure standpoint.
What remains is the change in culture and thinking necessary to make it successful.” As designed, however, FAA’s reforms are likely to have a limited effect because they focus on IPT members and do little to neutralize the impediments to change. The 750 members of the 13 IPTs include only about 500 of the approximately 2,000 ARA employees and about 250 of the remaining FAA employees, including representatives from the other major stakeholder divisions—namely, the controllers and maintenance technicians who use and maintain the new equipment. The IPDS does little to identify how FAA can influence the beliefs, values, attitudes, and behaviors of FAA employees who are not members of IPTs. A comprehensive strategy would have defined responsibilities, provided performance measures, and described incentives for all stakeholders in the acquisition process to help make the IPDS a success and promote a more constructive culture throughout FAA. Changing FAA’s organizational culture will not occur overnight. Both organizational research and FAA’s experience have shown that much work remains before the agency’s shortcomings in mission focus, accountability, coordination, and adaptability are ameliorated. To FAA’s credit, the agency has recognized the importance of cultural change, and its Integrated Product Development System is a promising first step. However, FAA will not know whether this system has the potential to create and sustain a more constructive culture unless the agency is able to fully establish the integrated product teams and gain the strong commitment of all stakeholders to the new system. A comprehensive strategy for cultural change is needed that includes the means for obtaining the support throughout FAA. We recommend that the Secretary of Transportation direct the FAA Administrator to develop a comprehensive strategy for cultural change. This strategy should include specific responsibilities and performance measures for all stakeholders throughout the agency and provide the incentives needed to promote the desired behaviors and to achieve agencywide cultural change. We provided the Department of Transportation with a draft report for review and comment. We met with FAA officials, including the Director, Office of Acquisitions; the Chief of Staff to the Associate Administrator for Research and Acquisitions; and the Program Directors for Air Traffic Plans and Requirements and Airway Facilities Requirements. These officials generally agreed that our report provided an accurate history of FAA’s acquisition problems and correctly identified culture as a contributing factor. In concurring with our conclusions and recommendations, they told us that although FAA has made great strides toward changing its organizational culture, our report is correct in pointing out deficiencies that may prevent FAA from accomplishing such change. The Program Director, Air Traffic Plans and Requirements, emphasized that procedural deficiencies, such as weak controls over requirements changes, have been instrumental in causing past acquisition problems. He said that changing procedures could have an immediate, beneficial impact on the agency’s ATC acquisitions and that FAA has been making those changes. We agree that procedural deficiencies have caused problems with FAA’s acquisitions. Over the years, GAO reports have focused on these deficiencies. However, this review found that FAA’s culture is also a cause, and we believe FAA is correct in looking to cultural change as an important part of the solution. 
FAA officials also told us that our report should recognize the many structural and procedural initiatives throughout the agency that could improve its organizational culture. They told us, for example, that offices for air traffic and airway facilities requirements were restructured to complement the establishment of IPTs; that Airway Facilities’ business, strategic, and operational plans now address initiatives of the IPDS; and that ATS and ARA instituted more discipline in the process for establishing and modifying requirements. It was not within the scope of our review to catalog and evaluate all of FAA’s initiatives that could potentially affect its culture. Our review focused instead on the agency’s primary reform effort—the IPDS—whose explicit purpose was to improve the acquisition process through cultural change. However, references to some of FAA’s initiatives were included, as appropriate, in the text. | Pursuant to a congressional request, GAO reviewed the Federal Aviation Administration's (FAA) management of its acquisition process, focusing on: (1) whether the FAA organizational culture has contributed to persistent acquisition problems; and (2) potential management improvements that could result from FAA organizational change. GAO found that: (1) the FAA organizational culture has been an underlying cause of FAA acquisition problems; (2) employees' attitudes do not reflect FAA focus on accountability, coordination, or adaptability; (3) FAA acquisition officials make little or no mission needs analyses, set unrealistic cost and schedule estimates, and begin production before systems development and testing is completed; (4) FAA fails to enforce accountability for defining systems requirements or for contract oversight; (5) the hierarchical FAA structure fosters a controlling environment, diminishes employee empowerment, and impedes information sharing; (6) FAA operations and development divisions have separate and distinct lines of authority and communications, which impedes coordination; (7) FAA officials are resistant to making needed changes in their acquisition process because FAA culture rewards conservatism and conformity and discourages innovation; (8) recognizing its need to improve the acquisition process through cultural change, FAA implemented a reform effort based on cross-functional, integrated product teams, and introduced a new acquisition management system; (9) FAA believes the product teams will improve accountability and coordination and infuse a more mission-oriented focus into the acquisition process; and (10) FAA has approved only one product team plan because it is still having difficulty in gaining the strong commitment of all employees who have a stake in the acquisition process and in forging partnerships across organizational divisions.
Emissions from a variety of human-generated sources, including commercial aircraft, trap heat in the atmosphere and contribute to climate change. During flight operations, aircraft emit a number of greenhouse gas and other emissions, including carbon dioxide, nitrogen oxides (NOx), soot, and water vapor. Figure 1 shows the primary emissions from commercial aircraft. Carbon dioxide emissions from aircraft are a direct result of fuel burn. For every gallon of jet fuel burned, about 21 pounds of carbon dioxide are emitted. Reducing the amount of fuel burned, therefore, also reduces the amount of carbon dioxide emitted. Water vapor emissions and certain atmospheric temperature and humidity conditions can lead to the formation of contrails, a cloudlike trail of condensed water vapor, and can induce the creation of cirrus clouds. Both contrails and cirrus clouds are believed to have a warming effect on the earth’s atmosphere. Aircraft also emit other pollutants that affect local air quality. Finally, airport operations are sources of greenhouse gas and other emissions, which we are not examining in this report. Historically, the commercial aviation industry has grown substantially in the United States and worldwide and is a contributor to economic growth. Between 1981 and 2008, passenger traffic increased 226 percent in the United States on a revenue passenger mile basis and 257 percent globally on a revenue passenger kilometer basis. According to the FAA, in 2006 the civil aviation industry in the United States directly and indirectly contributed 11 million jobs and 5.6 percent of total gross domestic product (GDP) to the U.S. economy. Globally, the International Air Transport Association estimated that in 2007 the aviation industry had a global economic impact of over $3.5 trillion, equivalent to about 7.5 percent of worldwide GDP. Recently, however, the airline industry has experienced declining traffic and financial losses as the result of the current recession. The fuel efficiency of commercial jet aircraft has improved over time. According to IPCC, aircraft today are about 70 percent more fuel efficient on a per passenger kilometer basis than they were 40 years ago because of improvements in engines and airframe design. Jet fuel is a major cost for airlines. In 2008, when global fuel prices were high, jet fuel accounted for about 30 percent of U.S. airlines’ total operating expenses, compared with 23 percent during 2007. Fuel efficiency (measured by available seat-miles per gallon consumed) for U.S. carriers increased about 17 percent between 1990 and 2008, as shown in figure 2. Internationally, according to the International Air Transport Association, fuel efficiency (measured by revenue passenger kilometers) improved 16.5 percent between 2001 and 2007. According to FAA, between 2000 and early 2008 U.S. airlines reduced fuel burn and emissions while transporting more passengers and cargo. In addition, commercial aviation has become less energy intensive over time—that is, transporting a single passenger a single mile uses less energy, measured in British thermal units, than it previously did. See figure 3, which shows the energy intensity of aviation and other modes of transportation over time. However, despite these efficiency improvements, overall fuel burn and emissions of U.S. airlines are expected to grow in the future.
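The 21-pounds-per-gallon relationship above is simple multiplication, as is the available-seat-miles-per-gallon efficiency metric. The minimal Python sketch below applies both; the conversion factor and the metric come from the text, while the seat count, trip length, and fuel burn are hypothetical values chosen only for illustration.

```python
CO2_LB_PER_GALLON = 21.0  # approximate CO2 emitted per gallon of jet fuel burned

def co2_from_fuel(gallons_burned: float) -> float:
    """Pounds of CO2 emitted for a given quantity of jet fuel burned."""
    return gallons_burned * CO2_LB_PER_GALLON

def seat_miles_per_gallon(available_seat_miles: float, gallons_burned: float) -> float:
    """Fuel efficiency expressed as available seat-miles per gallon consumed."""
    return available_seat_miles / gallons_burned

# Hypothetical flight: 180 seats, 2,500-mile trip, 5,000 gallons burned.
gallons = 5_000.0
asm = 180 * 2_500
print(f"CO2 emitted: {co2_from_fuel(gallons):,.0f} lb")                       # 105,000 lb
print(f"Fuel efficiency: {seat_miles_per_gallon(asm, gallons):.0f} ASM/gal")  # 90 ASM/gal
```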
FAA forecasts that between 2008 and 2025 fuel consumption of U.S.-based airlines will increase an average of 1.6 percent per year, while revenue passenger miles will increase an average of 3.1 percent per year over the same period (see fig. 4). To develop a better understanding of the effects of human-induced climate change and identify options for adaptation and mitigation, two United Nations organizations established IPCC in 1988 to assess scientific, technical, and socio-economic information on the effects of climate change. IPCC releases and periodically updates estimates of future greenhouse gas emissions from human activities under different economic development scenarios. In 1999, IPCC released its report, Aviation and the Global Atmosphere, conducted at the request of the International Civil Aviation Organization (ICAO)—a United Nations organization that aims to promote the establishment of international civilian aviation standards and recommended practices and procedures. In 2007, IPCC released an update on emissions from transportation and other sectors called the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. These reports were developed with input from over 300 experts worldwide and are internationally accepted and used for policy-making. A variety of federal agencies have roles in addressing aviation emissions. In 2004, FAA and other organizations, including the National Aeronautics and Space Administration (NASA), released a report, Aviation and the Environment: A National Vision Statement, Framework for Goals and Recommended Actions, through the collaborative PARTNER program, stating a general goal to reduce overall levels of emissions from commercial aviation and proposing actions to deal with aviation emissions. FAA also is involved in a number of emissions-reduction initiatives—including work on low-emissions technologies and low-carbon alternative fuels; the implementation of a new air traffic management system, the Next Generation Air Transportation System (NextGen); and climate research to better understand the impact of emissions from aviation. NASA has been involved in research that has led to the development of technologies that reduce aircraft emissions. Currently, NASA’s Subsonic Fixed-Wing project, part of its Fundamental Aeronautics program, aims to help develop technologies to reduce fuel burn, noise, and emissions in the future. Both FAA and NASA are involved in the Aviation Climate Change Research Initiative, whose goals include improving the scientific understanding of aviation’s impact on climate change. Also, as mandated under Title II of the Clean Air Act, the Environmental Protection Agency (EPA) promulgates certain emissions standards for aircraft and aircraft engines and has adopted emission standards matching those for aircraft set by ICAO. While neither ICAO nor EPA has established standards for aircraft engine emissions of carbon dioxide, ICAO is currently discussing proposals for carbon dioxide emissions standards and considering a global goal for fuel efficiency.
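Compounding these two forecast rates shows why efficiency improves even as total emissions grow: traffic is expected to outpace fuel burn. The short Python sketch below uses only the 1.6 percent and 3.1 percent annual rates and the 2008 to 2025 horizon from the FAA forecast above; the arithmetic is standard compounding.

```python
def compound_growth(annual_rate: float, years: int) -> float:
    """Cumulative growth multiple after compounding an annual rate for N years."""
    return (1.0 + annual_rate) ** years

years = 2025 - 2008
fuel_multiple = compound_growth(0.016, years)     # forecast fuel consumption growth
traffic_multiple = compound_growth(0.031, years)  # forecast revenue passenger mile growth

print(f"Fuel burn in 2025: {fuel_multiple:.2f}x the 2008 level")   # about 1.31x
print(f"Traffic in 2025:  {traffic_multiple:.2f}x the 2008 level") # about 1.68x
print(f"Fuel per RPM:     {fuel_multiple / traffic_multiple:.2f}x")
```

Over those 17 years, fuel burned per revenue passenger mile falls by roughly a fifth, even though total fuel burn and emissions rise by about 30 percent.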
In addition, in 2007 a coalition of environmental interest groups filed a petition with EPA asking the agency, pursuant to the Clean Air Act, to make a finding that “greenhouse gas emissions from aircraft engines may be reasonably anticipated to endanger the public health and welfare” and, after making this endangerment finding, promulgate regulations for greenhouse gas emissions from aircraft engines. International concerns about the contribution of human activities to global climate change have led to several efforts to reduce their impact. In 1992, the United Nations Framework Convention on Climate Change (UNFCCC)—a multilateral treaty whose objective is to stabilize greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous human interference with the climate system—was signed. By 1995, the parties to the UNFCCC, including the United States, realized that progress toward this goal was not sufficient. In December 1997, the parties reconvened in Kyoto, Japan, to adopt binding measures to reduce greenhouse gas emissions. Under the resulting Kyoto Protocol, which the United States has not ratified, industrialized nations committed to reduce or limit their emissions of carbon dioxide and other greenhouse gases during the 2008 through 2012 commitment period. The Protocol directed the industrialized nations to work through ICAO to reduce or limit emissions from aviation, but international aviation emissions are not explicitly included in Kyoto’s targets. In 2004, ICAO endorsed the further development of an open emissions trading system for international aviation, and in 2007 called for mutual agreement between contracting states before implementation of an emissions trading scheme. In part to meet its Kyoto Protocol requirements, the EU implemented its ETS in 2005, which sets a cap on carbon dioxide emissions and allows regulated entities to buy and sell emissions allowances with one another. In 2008, the European Parliament and the Council of the European Union passed a directive, or law, to include aviation in the ETS. Under the directive, beginning in 2012 a cap will be placed on total carbon dioxide emissions from all covered flights by aircraft operators into or out of an EU airport. Many stakeholders and countries have stated objections to the EU’s plans, and legal challenges are possible. (See app. I for a discussion of the ETS’s inclusion of aviation.) In December 2009, the parties to the UNFCCC will convene in Copenhagen, Denmark, to discuss and negotiate a post-Kyoto framework for addressing global climate change. IPCC estimates that aviation emissions currently account for about 2 percent of global human-generated carbon dioxide emissions and about 3 percent of the radiative forcing of all global human-generated emissions (including carbon dioxide) that contribute to climate change. On the basis of available data and assumptions about future conditions, IPCC forecasted emissions to 2015 and developed three scenarios—low, medium, and high—for growth in global aviation carbon dioxide emissions from 2015 to 2050. These scenarios are driven primarily by assumptions about economic growth—the factor most closely linked historically to the aviation industry’s growth—but they also reflect other aviation-related assumptions. Because IPCC’s forecasts depend in large part on assumptions, they, like all forecasts, are inherently uncertain.
Nevertheless, as previously noted, IPCC’s work reflects the input of over 300 leading and contributing authors and experts worldwide and is internationally accepted and used for policy making. According to IPCC, global aviation contributes about 2 percent of the global carbon dioxide emissions caused by human activities. This 2 percent estimate includes emissions from all global aviation, including both commercial and military. Global commercial aviation, including cargo, accounted for over 80 percent of this estimate. In the United States, domestic aviation contributes about 3 percent of total carbon dioxide emissions, according to EPA data. Many industry sectors, such as the electricity-generating and manufacturing sectors, contribute to global carbon dioxide emissions, as do residential and commercial buildings that use fuel and power. The transportation sector also contributes substantially to global carbon dioxide emissions. Specifically, it accounts for about 20 percent of total global carbon dioxide emissions. Road transportation accounts for the largest share of carbon dioxide emissions—74 percent—from the transportation sector; aviation accounts for about 13 percent of carbon dioxide emissions from all transportation sources; and other transportation sources, such as rail, account for the remaining 13 percent. Figure 5 shows the relative contributions of industry, transportation, and all other sources to global carbon dioxide emissions and breaks down transportation’s share to illustrate the relative contributions of road traffic, aviation, and other transportation sources. When other aviation emissions—such as nitrogen oxides, sulfate aerosols, and water vapor—are combined with carbon dioxide, aviation’s estimated share of global emissions increases from 2 percent to 3 percent, according to IPCC. However, the impact of these other emissions on climate change is less well understood than the impact of carbon dioxide, making IPCC’s combined estimate more uncertain than its estimate for carbon dioxide alone. Aviation emissions may contribute directly or indirectly to climate change. Although most aviation emissions have a warming effect, sulfate aerosols and a chemical reaction involving methane have a cooling effect. The warming effect is termed “positive radiative forcing” and the cooling effect “negative radiative forcing.” Aviation emissions also may contribute to the formation of cirrus clouds, which can cause atmospheric warming, but the scientific community does not yet understand this process well enough to quantify the warming effect of aviation-induced cirrus clouds. Table 1 describes the direct or indirect effects of aviation emissions on climate change. According to IPCC, when the positive radiative forcing effects of carbon dioxide and the positive and negative radiative forcing effects of other aviation emissions are combined, global aviation contributes about 3 percent of human-generated positive radiative forcing. When the radiative forcing effects of the various aviation emissions are considered, carbon dioxide, nitrogen oxides, and contrails have the greatest potential to contribute to climate change. The level of scientific understanding about the impact of particular aviation emissions on radiative forcing varies, making estimates of their impact on climate change uncertain to varying degrees.
A recent report that described levels of scientific understanding of aviation emissions found that the levels for carbon dioxide were high; the levels for nitrogen oxides, water vapor, sulfates, and soot were medium; and the levels for contrails and aviation-induced cirrus clouds were low. Aviation’s contribution to total emissions, estimated at 3 percent, could be as low as 2 percent or as high as 8 percent, according to IPCC. Figure 6 shows IPCC’s estimate of the relative positive radiative forcing effects of each type of aviation emission for the year 2000. The overall radiative forcing from aviation emissions is estimated to be approximately two times that of carbon dioxide alone. IPCC generated three scenarios that forecasted the growth of global aviation carbon dioxide emissions from the near term (2015) to the long term (2050) and described these scenarios in its 1999 report. These forecasts are generated by models that incorporate assumptions about future conditions, the most important of which are assumptions about global economic growth and related increases in air traffic. Other assumptions include improvements in aircraft fuel efficiency and air traffic management and increases in airport and runway capacity. Because the forecasts are based on assumptions, they are inherently uncertain. Historically, global economic growth has served as a reliable indicator of air traffic levels. Aviation traffic has increased during periods of economic growth and slowed or decreased during economic slowdowns. As figure 7 shows, U.S. and global passenger traffic (the global figures include the United States) generally trended upward from 1978 through 2008, but leveled off or declined during economic recessions in the United States. Forecast models described in IPCC’s report incorporate historical trends and the relationship between economic growth and air traffic to produce scenarios of global aviation’s potential future carbon dioxide emissions. IPCC used a NASA emissions forecast for carbon dioxide emissions until 2015. IPCC used an ICAO emissions forecasting model to forecast emissions from 2015 to 2050 using three different assumptions for global economic growth—low (2.0 percent), medium (2.9 percent), and high (3.5 percent). As a result, IPCC produced three different potential scenarios for future air traffic and emissions. The 2050 scenarios include a 40 percent to 50 percent increase in fuel efficiency by 2050 from improvements in aircraft engines and airframe technology and from deployment of an advanced air traffic management system (these are discussed in more detail below). Figure 8 shows IPCC’s low-, mid-, and high-range scenarios for carbon dioxide emissions for 2015, 2025, and 2050 as a ratio over 1990 emissions. IPCC used the medium economic growth rate scenario to estimate aviation’s contribution to overall emissions in 2050. IPCC compared aviation and overall emissions for the future and found that global aviation carbon dioxide emissions could increase at a greater rate than carbon dioxide emissions from all other sources of fossil fuel combustion. For example, for the medium GDP growth rate scenario, IPCC assumed a 2.9 percent annual average increase in global GDP, which translated into almost a tripling (a 2.8 times increase) of aviation’s global carbon dioxide emissions from 1990 to 2050. For the same medium GDP growth scenario, IPCC also estimated a 2.2 times increase of carbon dioxide emissions from all other sources of fossil fuel consumption worldwide during this period.
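These two growth multiples imply the shift in aviation's share reported in the next paragraph. The back-of-the-envelope Python sketch below starts from aviation's approximate 2 percent share of global carbon dioxide emissions and applies the 2.8x and 2.2x multiples; both inputs come from the text, and everything else is arithmetic.

```python
aviation_share = 0.02    # aviation's approximate share of global CO2 emissions today
aviation_multiple = 2.8  # IPCC midrange growth for aviation CO2, 1990-2050
other_multiple = 2.2     # IPCC midrange growth for all other fossil fuel sources

aviation_2050 = aviation_share * aviation_multiple
other_2050 = (1.0 - aviation_share) * other_multiple

share_2050 = aviation_2050 / (aviation_2050 + other_2050)
print(f"Implied 2050 aviation share of CO2: {share_2050:.1%}")
# Prints about 2.5%, broadly consistent with IPCC's estimate of roughly 3 percent.
```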
Overall, using the midrange scenario for global carbon dioxide emissions and projections for emissions from other sources, IPCC estimated that in 2050, carbon dioxide emissions from aviation could be about 3 percent of global carbon dioxide emissions, up from 2 percent. IPCC further estimated that, when other aviation emissions were combined with carbon dioxide emissions, aviation would account for about 5 percent of global human-generated positive radiative forcing, up from 3 percent. IPCC concluded that the aviation traffic estimates for the low-range scenario, though plausible, were less likely given aviation traffic trends at the time the report was published in 1999. IPCC’s 2007 Fourth Assessment Report included two additional forecasts of global aviation carbon dioxide emissions for 2050 developed through other studies. Both of these studies forecasted mid- and high-range aviation carbon dioxide emissions for 2050 that were within roughly the same range as the 1999 IPCC report’s forecasts. For example, one study using average GDP growth assumptions similar to IPCC’s showed mid- and high-range estimates that were close to IPCC’s estimates. In 2005, FAA forecasted a 60 percent growth in aviation carbon dioxide and nitrogen oxide emissions from 2001 to 2025. However, FAA officials recently noted that this estimate did not take into account anticipated aircraft fleet replacements, advances in aircraft and engine technology, and improvements to the air transportation system, nor did it reflect the recent declines in air traffic due to the current recession. After taking these factors into account, FAA reduced its estimate by half and now estimates about a 30 percent increase in U.S. aviation emissions from 2001 to 2025. To account for some uncertainties in FAA’s emissions forecasting, FAA officials said they are working on creating future scenarios for the U.S. aviation sector to assess the influence of a range of technology and market assumptions on future emissions levels. While recent aviation forecasts are generally consistent with IPCC’s expectation for long-term global economic growth, the current economic slowdown has led to downward revisions in growth forecasts. For example, in 2008, Boeing’s annual forecast for the aviation market projected a 3.2 percent annual global GDP growth rate from 2007 to 2027. However, this estimate was made before the onset of negative global economic growth in 2009 and could be revised downward in Boeing’s 2009 forecast. According to FAA’s March 2009 Aerospace Forecast, global GDP growth, which averaged 3 percent annually from 2000 to 2008, will be 0.8 percent from 2008 to 2010 before recovering to an estimated average annual growth rate of 3.4 percent from 2010 to 2020. The International Air Transport Association has predicted that global air traffic will decrease by 3 percent in 2009 with the economic downturn. Moreover, according to the association, even if air traffic growth resumes in 2010, passenger air traffic levels will be 12 percent lower in the first few years after the slowdown and 9 percent lower in 2016 than the association forecasted in late 2007. To the extent that air traffic declines, emissions also will decline.
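To see how large FAA's revision is on an annual basis, the cumulative figures can be converted to average annual growth rates. The Python sketch below does this for the original 60 percent and revised 30 percent forecasts over the 24 years from 2001 to 2025; only those cumulative figures come from the text, and the conversion is standard compounding arithmetic.

```python
def annualized_rate(cumulative_multiple: float, years: int) -> float:
    """Average annual growth rate implied by a cumulative growth multiple."""
    return cumulative_multiple ** (1.0 / years) - 1.0

years = 2025 - 2001
print(f"Original forecast (+60% total): {annualized_rate(1.6, years):.1%} per year")  # ~2.0%
print(f"Revised forecast (+30% total):  {annualized_rate(1.3, years):.1%} per year")  # ~1.1%
```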
In developing its forecasts, IPCC made assumptions about factors other than economic growth that also affected its forecast results, as IPCC itself, experts we interviewed, and FAA have noted: IPCC assumed that advances in aircraft technology and the introduction of new aircraft would increase fuel efficiency by 40 percent to 50 percent from 1997 through 2050. IPCC assumed that an ideal air traffic management system would be in place worldwide by 2050, reducing congestion and delays. However, the forecast does not account for the possibility that some airlines might adopt low-carbon alternative fuels. IPCC also assumed that airport and runway capacity would be sufficient to accommodate future air traffic levels. However, if IPCC’s assumptions about improvements in fuel efficiency and air traffic management are not realized, aircraft could produce higher emissions levels than IPCC estimated, and IPCC’s estimates would be understated. Conversely, if airports and runways have less capacity than IPCC assumed, then air traffic levels could be lower and, according to IPCC and some experts, IPCC’s forecast could overstate future aviation emissions. Finally, IPCC pointed out that its estimate that aviation will contribute 5 percent of positive radiative forcing in 2050 does not include the potential impact of aviation-induced cirrus clouds, which could be substantial. Because IPCC’s forecasts are based on assumptions about future conditions and scientific understanding of the radiative forcing effects of certain aviation emissions is limited, IPCC’s forecasts are themselves uncertain. According to FAA officials, given the numerous assumptions and inherent uncertainties involved in forecasting aviation emissions levels out to the year 2050, along with the significant shocks and structural changes the aviation community has experienced over the last few years, IPCC’s projections are highly uncertain, even for the midrange scenario. If emissions from aviation and all other sectors continue to grow at about the same relative rate, aviation’s contribution as a portion of overall emissions will not change significantly. However, if significant reductions are made in overall emissions from other sources and aviation emission levels continue to grow, aviation’s contribution could grow. According to experts we interviewed, a number of different technological and operational improvements related to engines, aircraft design, operations, next-generation air traffic management, and fuel sources are either available now or are anticipated in the future to help reduce carbon dioxide emissions from aircraft. We interviewed and surveyed 18 experts in the fields of aviation and climate change and asked them to assess a number of improvements to reduce emissions using a variety of factors, such as potential costs and benefits, and then used the results to inform the following discussion. (Complete survey results can be found in app. III.) The development and adoption of low-emissions technologies are likely to depend on fuel prices and on any government policies that price aircraft emissions. Higher fuel prices or prices on emissions—for example, through government policies such as an emissions tax—would make low-emissions technologies relatively cheaper and are likely to encourage their development.
In addition, while fuel efficiency and emissions reductions may be important to airlines, so are a number of other factors, including safety, performance, local air quality, and noise levels, and trade-offs may exist among these factors. Improvements to aircraft engines have played a primary role in increasing fuel efficiency and reducing engine emission rates, and experts we interviewed expect them to continue to do so in the future—one study estimates that 57 percent of improvements in aircraft energy intensity between 1959 and 1995 were due to improvements in engine efficiency. Such improvements have resulted from increasing engine pressure and temperatures (which increases their efficiency and decreases fuel usage) and improving the “bypass ratio,” a measure of airflow through the engine. However, according to experts we surveyed, further advances in these technologies may face high development costs (see table 2), and some may not be available for commercial use any time soon because engineers still face challenges in improving engine technology. Some technologies may be available sooner than others, but all present a range of challenges and trade-offs: One latest-generation aircraft engine, the geared turbofan engine, is likely to be available for use in certain aircraft in the next few years; promises to reduce emissions, according to its manufacturer, Pratt & Whitney; and may face few challenges to widespread adoption. According to Pratt & Whitney, this engine design is estimated to reduce fuel burn and emissions by 12 percent, compared with similar engines now widely used, in part due to an increase in the engine’s bypass ratio. The geared turbofan engine is the result of research conducted by NASA and Pratt & Whitney. Another engine technology, which could be introduced in the next 5 to 15 years, is the “open rotor” engine. It may deliver even greater emissions reductions but may face consumer-related challenges. The open rotor engine holds the engine fan blades on the outside of the engine case, thereby increasing the air flow around the engine, the effective bypass ratio, and the efficiency of the engine’s propulsion. However, this engine may be noisy, and its large, visible engine blades could raise consumer concerns, according to experts we surveyed. Research in the United States is currently a joint effort of NASA and General Electric. Rolls-Royce is also pursuing this technology. In the longer term, despite some engineering challenges, distributed propulsion technologies also hold promise for reducing aircraft emissions. Distributed propulsion systems would place many small engines throughout an aircraft instead of using a few large engines, as today’s aircraft do. Experts we interviewed said that engineering challenges must be overcome with distributed propulsion, including determining the best and most efficient way to distribute power and store fuel. NASA is currently involved in distributed propulsion research. Aircraft improvements also have played a role in reducing emissions rates in the past, and experts we interviewed expected them to continue to do so. Through improvements in materials used to build aircraft and other improvements that increase aerodynamics and reduce drag, aircraft have become more fuel efficient over time. In the short term, improvements in aircraft materials, leading to decreased weight, and improvements in aerodynamics will help reduce fuel consumption and, thus, emissions rates.
In the longer term, new aircraft designs, primarily a blended wing-body aircraft, hold potential for greater reductions in emissions rates. However, new aircraft concepts face engineering and consumer acceptance challenges, and new technologies are likely to incur high development costs (see table 3). The following improvements to aircraft should help reduce aircraft fuel consumption and emissions in the long term, despite costs and challenges: The use of lightweight composite materials in aircraft construction has led to weight and fuel burn reductions in the past and is expected to continue to do so in the future. Over time, aircraft manufacturers have increasingly replaced more traditional materials such as aluminum with lighter-weight composite materials in airframe construction. For example, according to Boeing, 50 percent of the weight of the airframe of the Boeing 787, expected to be released in 2010, will be attributable to composite materials, compared with 12 percent composites in a currently available Boeing 777. According to Airbus, it first began using composite materials in airframe construction in 1985, and about 25 percent of the airframe weight of an A380 manufactured in 2008 was attributable to composites. By reducing the weight of the airframe, the use of composites reduces fuel burn and emissions rates. Retrofits such as winglets—wing extensions that reduce drag—can be made to aircraft to make them more aerodynamic but may have limited potential for future emissions reductions, according to experts we surveyed. By improving airflow around wings, winglets reduce drag and improve fuel efficiency, thus reducing emissions by a modest amount. Boeing estimates that the use of winglets on a 737 reduces fuel burn by 3.5 percent to 4 percent on trips of over 1,000 nautical miles. Many new aircraft can be purchased with winglets, and existing aircraft also can be retrofitted with them. However, winglets have already become very common on U.S. commercial airline aircraft and provide limited benefit for short-haul flights. According to experts we surveyed, there is low potential for future fuel consumption and emissions reductions from winglets. Redesigned aircraft, such as a blended wing-body aircraft—that is, an aircraft in which the body and wings are part of one airframe—hold greater potential for reducing emissions, according to experts we surveyed, though these face challenges as well. Several public and private organizations, including NASA and Boeing, are conducting research on such aircraft. Many experts expect that blended wing-body aircraft will reduce emissions through improved aerodynamics and lighter weight. NASA, for example, estimates that a blended wing-body aircraft could reduce emissions by 33 percent compared with currently available aircraft. However, these new designs face challenges; notably, according to experts we interviewed, development costs are likely to be substantial, their radically different appearance may pose consumer acceptance issues, and they may require investments in modifying airports. Airlines have already taken a number of steps to improve fuel efficiency over time; however, the potential for future improvements from these measures may be limited. Airlines have increased their load factors (the percentage of seats occupied on flights), increasing the fuel efficiency of aircraft on a per-passenger basis. Load factors were about 80 percent for U.S. carriers in 2008, compared with about 65 percent in 1995.
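The per-passenger effect of rising load factors can be approximated with simple division. The Python sketch below assumes, purely for illustration, that a flight's fuel burn is fixed regardless of how many seats are filled (in practice, added passenger weight raises fuel burn somewhat, so the real gain is smaller); only the 65 percent and 80 percent load factors come from the text, and the seat count and fuel burn are hypothetical.

```python
def fuel_per_passenger(fuel_per_flight: float, seats: int, load_factor: float) -> float:
    """Fuel attributable to each passenger on a partly full flight."""
    return fuel_per_flight / (seats * load_factor)

SEATS = 150     # hypothetical aircraft size
FUEL = 4_000.0  # hypothetical gallons burned per flight

burn_1995 = fuel_per_passenger(FUEL, SEATS, 0.65)  # 1995-era load factor
burn_2008 = fuel_per_passenger(FUEL, SEATS, 0.80)  # 2008-era load factor
print(f"Per-passenger fuel at 65% load factor: {burn_1995:.1f} gal")
print(f"Per-passenger fuel at 80% load factor: {burn_2008:.1f} gal")
print(f"Reduction: {1 - burn_2008 / burn_1995:.0%}")  # about 19 percent
```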
However, some experts we interviewed said the potential for additional future emissions reductions from increasing load factors may be small because load factors are already so high. Airlines also have removed many unnecessary items from aircraft and minimized supplies of certain necessary items, such as water, carried on board. As a result, according to some experts we interviewed, there may be little additional room to reduce emissions by reducing on-board weight. Airlines also have made other voluntary operational changes to reduce emissions, such as reducing speeds on certain routes, which reduces fuel use, and washing aircraft engines to make them cleaner and more efficient. Airlines also have retired less-fuel-efficient aircraft and replaced them with more-fuel-efficient models. For example, in 2008, American Airlines announced it was replacing more of its fuel-inefficient MD-80 aircraft with more efficient Boeing 737-800 aircraft. In addition, Continental Airlines, in 2008, replaced regional jets with turboprop planes on many routes.

Still other improvements are available for airlines to reduce emissions in the future, but the experts we interviewed ranked the potential for emissions reductions and consumer acceptance of these improvements as low (see table 4). Airlines could make other operational changes to reduce fuel burn and emissions but are unlikely to do so, because the potential for consumer acceptance of such changes is low, according to experts we surveyed. For example, aircraft could fly in formation to improve airflow and reduce fuel burn. More specifically, rather than flying individually, several aircraft could fly in proximity to one another, reducing aircraft drag and, subsequently, fuel use. However, aircraft would have to fly closer to one another than FAA's regulations currently allow, and additional technological and aerodynamics research needs to be done. Another potential option, currently used for military purposes, is air-to-air refueling. Under this option, aircraft would be fueled in flight by tanker aircraft, reducing the amount and weight of fuel needed for the flight. However, DOT staff told us that air-to-air refueling may pose safety risks similar to those posed by formation flying. Some experts also have suggested that airlines make en route, on-ground fueling stops on long-haul flights so they could reduce the amount of fuel they carry. However, more fueling stops could have negative effects on air quality at the airports used for these stops, as well as on air traffic operations.

According to FAA, some of the air traffic management improvements that are part of NextGen (the planned air traffic management system designed to address the impacts of future traffic growth) can help reduce aircraft fuel consumption and emissions in the United States. Besides improving air traffic management, NextGen has environmental goals, which include accelerating the development of technologies that will lower emissions and noise. According to FAA, it is conducting a review to develop a set of NextGen goals, targets, and metrics for climate change, as well as for noise and local air quality emissions. NextGen has the potential to reduce aircraft fuel burn by 2025, according to FAA, in part through technologies and procedures that reduce congestion and create more direct routing. Some NextGen procedures and technologies have already been implemented and have already led to emissions reductions.
Similarly, in Europe, through the Single European Sky Air Traffic Management Research Program (SESAR), air traffic management technologies and procedures will be upgraded and individual national airspace systems will be merged into one, helping to reduce emissions per flight by 10 percent, according to EUROCONTROL, the European Organization for the Safety of Air Navigation. However, some experts we met with said that because some of SESAR's technologies and procedures have already been implemented, future fuel savings might be lower. Table 5 provides information on selected components of NextGen that hold potential for reducing aircraft emissions. NextGen has the potential to reduce fuel consumption and emissions through both technologies and operational procedures:

NextGen makes use of air traffic technologies to reduce emissions. For example, the Automatic Dependent Surveillance-Broadcast (ADS-B) satellite navigation system is designed to enable more precise control of aircraft during flight, approach, and descent, allowing for more direct routing and thus reducing fuel consumption and emissions. Also, Area Navigation (RNAV) will compute an aircraft's position and ground speed and provide meaningful information on the flight route to pilots, enabling them to save fuel through improved navigational capability. NextGen Network-Enabled Weather will provide real-time weather data across the national airspace system, helping reduce weather-related delays and allowing aircraft to make the best use of weather conditions to improve efficiency.

NextGen also relies on operational changes that have demonstrated the potential to reduce fuel consumption and emissions rates. Continuous Descent Arrivals (CDA) allow aircraft to remain at cruise altitudes longer as they approach destination airports and to use lower power levels, therefore producing lower emissions during landings. CDAs are already in place at a number of U.S. airports, and according to FAA, the use of CDAs at Atlanta Hartsfield International Airport reduces carbon dioxide emissions by an average of about 1,300 pounds per flight. Required Navigation Performance (RNP) also permits an aircraft to descend on a more precise route, reducing its consumption of fuel and lowering its carbon dioxide emissions. According to FAA, over 500 RNAV and RNP procedures and routes have been implemented. Funding and other challenges, however, affect FAA's implementation of these various NextGen procedures and technologies.

The use of alternative fuels, including those derived from biological sources (biofuels), has the potential to reduce greenhouse gas emissions from aircraft in the future; however, these fuels also present a number of challenges and environmental concerns. While the production and use of biofuels result in greenhouse gas emissions, the extent to which biofuels reduce greenhouse gas emissions depends on whether their emissions on an energy-content basis are less than those resulting from the production and use of fossil fuels. To date, some assessments of biofuels have shown a potential reduction in greenhouse gas emissions when compared with fossil fuels, such as jet fuel. However, researchers have not agreed on the best approach for determining the greenhouse gas effects of biofuels and the magnitude of any greenhouse gas reductions attributable to their production and use. FAA, EPA, and U.S. Air Force officials we met with said that quantifying the life-cycle emissions of biofuels is difficult, but work in this area is currently under way.
For example, according to EPA, the agency has developed a comprehensive methodology to determine the life-cycle emissions, including both direct and indirect emissions, of a range of biofuels. This methodology, which involved extensive coordination with experts outside of and across the federal government, was included in the recent notice of proposed rulemaking on the renewable fuel standard. Non-oil energy sources, such as hydrogen, have potential for providing energy for ground transport, but many experts we met with said that such sources are unlikely to have use for commercial aircraft, given technological, cost, and potential safety issues.

According to experts we interviewed, a variety of sources could be used to produce biofuels for aircraft, including biomasses, such as switchgrass and forest and municipal waste, and oils from jatropha (a drought-resistant plant that can grow in marginal soil), algae, camelina (a member of the mustard family that can grow in semiarid regions), palm, and soy. However, many experts claim that some of these crops are unsuitable for use as biofuels because they may have negative environmental and economic consequences, such as potentially reducing the supply and quality of water, reducing air quality and biodiversity, and limiting global food supplies. For example, cultivating palm for biofuel production might lead to deforestation, thereby increasing both greenhouse gas emissions and habitat loss. In addition, jatropha has been identified as an invasive species in some regions and, because of its aggressive growth, may have the potential to reduce available habitat for native species. According to experts we met with, algae, on the other hand, are seen as a potentially viable source: they can be grown using saltwater and in a variety of other environments. In addition, according to DOT, camelina appears to be a potential biofuel source in the short term because it is not currently used for food and uses limited water for development.

However, many experts we interviewed raised questions about the availability of future supplies of biofuels. According to the experts, large investments in fuel production facilities will likely be needed because little industrial capacity and compatible infrastructure currently exist to create biofuels. The cost of current algae conversion technology has, for example, raised obstacles to the commercial-scale production needed to obtain significant supplies in the future. Given that future alternative fuels will have many uses, airlines will compete with other users, including road transportation, for those limited supplies. Compared with the market for ground transport fuels, the market for commercial aviation fuels is small, leading some experts to believe that fuel companies are more likely to focus their biofuel efforts on the ground transport market than on the commercial aviation market. Some experts we met with said that, given the relatively small size of the aviation market, limited biofuel supplies should be devoted to road transportation, since road transportation is the largest contributor of emissions from the transportation sector. A large number of industry and government participants, including airlines, fuel producers, and manufacturers, are currently conducting research and development on alternative fuels for aircraft. One effort is the Commercial Aviation Alternative Fuels Initiative, whose members include FAA, airlines, airports, and manufacturers.
The goal of this initiative is to "promote the development of alternative fuels that offer equivalent levels of safety and compare favorably with petroleum-based jet fuel on cost and environmental bases, with the specific goal of enhancing security of energy supply." Any developed biofuel will be subject to the same certification as petroleum-based jet fuel to help ensure its safety. In addition, other government efforts are under way, most notably the Biomass Research and Development Initiative. This initiative is a multiagency effort to coordinate and accelerate all federal biobased products and bioenergy research and development. The Department of Transportation is one of the initiative's participants.

Finally, the aviation industry has conducted a number of test flights using mixtures of biofuels and jet fuel. These test flights have demonstrated that fuel blends containing biofuels have potential for use in commercial aircraft. In February 2008, Virgin Atlantic Airways conducted a demonstration flight of a Boeing 747 fueled by a blend of jet fuel (80 percent) and coconut- and babassu-oil-based fuels (20 percent). In December 2008, Air New Zealand conducted a test flight of a Boeing 747 fueled by an equal mixture of jet fuel and jatropha oil. In January 2009, Continental Airlines conducted a test flight of a Boeing 737 using a blend of 50 percent jet fuel and 50 percent jatropha- and algae-based biofuel. Also in January 2009, Japan Airlines conducted a test flight of a Boeing 747 fueled by a blend including camelina oil. According to the airlines, the results of all these tests indicate that there was no change in performance when engines were fueled using the biofuel blends. For example, the pilot of the Air New Zealand test flight noted that both on-ground and in-flight tests indicated that the aircraft engines performed well while using the biofuel.

Future fuel prices are likely to be a major factor influencing the development of low-emissions technologies for commercial aviation. According to the airline industry, fuel costs provide an incentive for airlines to reduce fuel consumption and emissions. However, according to some experts we interviewed, short-term increases in fuel prices may not provide enough of an incentive for the industry to adopt certain low-emissions improvements. Commercial airlines would have greater incentive to adopt fuel-saving technologies if the projected fuel savings were greater than the improvement's additional life-cycle cost; the higher existing and projected fuel prices are, the more likely airlines would be to undertake such improvements, all else equal. One expert said that if fuel costs were expected to consistently exceed $140 per barrel in the future, much more effort would be made to develop a finished open rotor engine quickly. The role of fuel prices in providing an incentive for the development and adoption of low-emissions technologies is evident in some historical examples from NASA research. While winglets were first developed through a NASA research program in the 1970s, they were not used commercially until a few years ago, when higher fuel prices justified their cost. Additionally, although NASA currently is sponsoring research into open rotor engines, the agency also did so in the 1980s in response to high fuel prices. That research was discontinued before the technology could be matured, however, when fuel prices dropped dramatically in the late 1980s.
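The life-cycle comparison described above can be sketched as a simple present-value test. All figures below (improvement cost, annual fuel use, saving rate, prices, and discount rate) are hypothetical, chosen only to show how higher expected fuel prices tip the adoption decision.

```python
# Illustrative adoption test: discounted fuel savings versus upfront cost.
# Every number here is hypothetical.

def pv_fuel_savings(annual_fuel_gal: float, saving_rate: float,
                    price_per_gal: float, years: int,
                    discount_rate: float) -> float:
    """Present value of fuel saved over the improvement's service life."""
    annual_saving = annual_fuel_gal * saving_rate * price_per_gal
    return sum(annual_saving / (1 + discount_rate) ** t
               for t in range(1, years + 1))

IMPROVEMENT_COST = 700_000.0   # hypothetical installed cost per aircraft
ANNUAL_FUEL_GAL = 1_500_000.0  # hypothetical annual fuel use per aircraft

for price in (1.50, 2.50, 4.00):  # fuel price scenarios, $/gallon
    pv = pv_fuel_savings(ANNUAL_FUEL_GAL, 0.035, price, 10, 0.08)
    verdict = "adopt" if pv > IMPROVEMENT_COST else "defer"
    print(f"fuel at ${price:.2f}/gal: savings worth ${pv:,.0f} -> {verdict}")
```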
In addition, the current economic recession has affected commercial airlines and may cause some airlines to cut back on purchases of newer, more fuel-efficient aircraft. For example, the U.S. airline industry lost about $3.7 billion in 2008, and while analysts are uncertain about its profitability in 2009, some analysts predict industry losses of around $4 billion to $10 billion. In addition, Boeing has reported a number of recent cancellations of orders for the fuel-efficient 787 Dreamliner. According to one expert we met with, when airlines are low on cash, they are unlikely to undertake improvements that will reduce their fuel consumption and emissions, even if the savings from fuel reductions would ultimately be greater than the cost of the improvement. This expert said, for example, that although it may make financial sense for airlines to engage in additional nonsafety-related engine maintenance to reduce fuel burn and emissions, they may not do so because they lack sufficient cash.

Although some airlines may adopt technologies to reduce their future emissions, these efforts may not be enough to mitigate the expected growth in air traffic and the related increase in overall emissions through 2050. Although IPCC's forecast, as mentioned earlier, assumes future technological improvements leading to annual improvements in fuel efficiency, it does not account for the possibility that some airlines might adopt biofuels or other potential breakthrough technologies. Nonetheless, even if airlines adopt such technologies, some experts believe that emissions will still be higher in 2050 under certain conditions than they were in 2000. One expert we met with made a rough estimate of future emissions from aircraft assuming the adoption of many low-carbon technologies, such as blended wing-body aircraft, operational improvements, and biofuels. He used IPCC's midrange forecast of emissions to 2050 as a baseline for future traffic and found that, even assuming the introduction of these technologies, global emissions in 2050 would continue to exceed 2000 emissions levels. Had a lower baseline of emissions been used, forecasted emissions might have been lower. He acknowledged that more work needs to be done in this area. Another study, by a German research organization, modeled future emissions assuming the adoption of technological improvements, as well as biofuels, to reduce emissions. This study assumed future traffic growth averaging 4.8 percent between 2006 and 2026 and 2.6 percent between 2027 and 2050. While this study forecasted improvements in emissions relative to expected market growth, it estimated that by 2050 total emissions would still remain greater than 2000 emissions levels.
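The compounding at work in these forecasts can be reproduced with a back-of-the-envelope index. The traffic growth rates below are those the German study assumed; the 1.5 percent annual fleet-wide efficiency gain is our illustrative assumption, not a figure from either study.

```python
# Rough index of emissions growth: traffic growth compounding against an
# assumed annual efficiency gain. Growth rates follow the German study;
# the 1.5 percent efficiency improvement is an illustrative assumption.

EFFICIENCY_GAIN = 0.015

index = 1.0  # emissions indexed to 1.0 in 2006
for year in range(2007, 2051):
    growth = 0.048 if year <= 2026 else 0.026
    index *= (1 + growth) * (1 - EFFICIENCY_GAIN)

print(f"2050 emissions as a multiple of 2006 levels: {index:.2f}")  # ~2.4
# Even with steady efficiency gains, growth keeps the index well above 1.0,
# consistent with both studies' conclusion that 2050 emissions exceed
# earlier levels despite technological improvements.
```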
Governments have a number of policy options they could use to help reduce greenhouse gas emissions from commercial aviation and other sectors of the economy, including market-based measures that set a price on emissions, such as a cap-and-trade program or an emissions tax; regulatory standards; and funding for research and development. The social benefits (for example, from emissions reductions) and costs associated with each option vary, and the policies may affect industries and consumers differently. However, economic research indicates that market-based policies are more likely to better balance the benefits and costs of achieving reductions in greenhouse gases and other emissions (in other words, to be more economically efficient). In addition, research and development spending could complement market-based measures or standards to help facilitate the development and deployment of low-emissions technologies. However, given the relatively small current and forecasted percentage of global emissions generated by the aviation sector, actions taken to reduce aviation emissions alone, and not emissions from other sectors, could be costly and have little potential impact on reducing global greenhouse gas emissions.

Economists and other experts we interviewed stated that establishing a price on greenhouse gas emissions through market-based policies, such as a cap-and-trade program or a tax on emissions from commercial aircraft and other sources, would provide these sources with an economic incentive to reduce their emissions. Generally, a cap-and-trade program or an emissions tax (for example, on carbon dioxide) can achieve emissions reductions at less cost than other policies because either would give firms and consumers the flexibility to decide when and how to reduce their emissions. Many experts we surveyed said that establishing a price on emissions through a cap-and-trade program or a tax would help promote the development and adoption of a number of low-emissions technologies for airlines, including open rotor engines and blended wing-body aircraft. Another market-based policy, subsidy programs, such as a payment per unit of emissions reduction, can in principle provide incentives for firms and consumers to reduce their greenhouse gas emissions. However, subsidy programs need to be financed (for example, through existing taxes or by raising taxes) and can create perverse incentives that result in higher emissions.

One market-based option for controlling emissions is a cap-and-trade program. Also known as an emissions trading program, a cap-and-trade program would limit the total amount of emissions from regulated sources. These sources would receive, from the government, allowances to emit up to a specific limit, the "cap." The government could sell the allowances through an auction or provide them free of charge (or some combination of the two). In addition, the government would establish a market in which the regulated sources could buy and sell allowances with one another. Sources that can reduce emissions at the lowest cost could sell their allowances to other sources with higher emissions reduction costs. In this way, the market would establish an allowance price, which would represent the price of carbon dioxide (or other greenhouse gas) emissions. Generally, according to economists, by allowing sources to trade allowances, policy makers can achieve emissions reductions at the lowest cost. A cap-and-trade program can be designed to cap emissions at different points in the economy. For example, a cap-and-trade program could be designed to cap "upstream" sources, such as fuel processors, extractors, and importers. Under this approach, a cap would be set on the emissions potential inherent in the fossil fuel. The upstream cap would restrain the supply and increase the prices of fossil fuels, and thus raise the price of jet fuel relative to less carbon-intensive alternatives. Alternatively, under a "downstream" program, direct emitters, such as commercial airlines, would be required to hold allowances equal to their total carbon emissions each year. (See fig. 9.)
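A minimal sketch can make the upstream/downstream distinction concrete. The allowance price and fuel volume below are hypothetical, and the roughly 3.16 kilograms of carbon dioxide per kilogram of jet fuel is the same approximation used earlier; the point is that either design prices the same emissions, differing mainly in who holds the allowances.

```python
# Illustrative only: the same carbon cost arises whether allowances are
# held upstream (fuel processor) or downstream (airline). Price and fuel
# volume are hypothetical; 3.16 kg CO2/kg fuel is an approximation.

ALLOWANCE_PRICE = 20.0  # $ per metric ton of CO2 (hypothetical)
CO2_PER_KG_FUEL = 3.16  # kg CO2 per kg of jet fuel (approximate)
FUEL_KG = 1_000_000.0   # fuel sold to, and burned by, one airline

emissions_tons = FUEL_KG * CO2_PER_KG_FUEL / 1_000

# Upstream: the processor surrenders allowances for the fuel's emissions
# potential and recovers the cost through the fuel price.
upstream_cost = emissions_tons * ALLOWANCE_PRICE

# Downstream: the airline surrenders allowances for what it actually emits.
downstream_cost = emissions_tons * ALLOWANCE_PRICE

assert upstream_cost == downstream_cost
print(f"carbon cost under either design: ${upstream_cost:,.0f}")
```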
Economic research indicates that either type of program would provide commercial airlines included in it with an incentive to reduce their fuel consumption in the most cost-effective way for each airline, such as by reducing weight, consolidating flights, or using more fuel-efficient aircraft. To the extent that airlines would pass along any program costs to customers through higher passenger fares and shipping rates, travelers and shippers could respond in various ways, including by traveling less frequently or by using a different, cheaper transportation mode.

The effectiveness of a cap-and-trade program in balancing the benefits and costs of the emissions reductions could depend on factors included in its design. Generally, by establishing an upper limit on total emissions from regulated sources, a cap-and-trade program can provide greater certainty than other policies (for example, an emissions tax) that emissions will be reduced to the desired level. Regulated sources would be required to hold allowances equal to their total emissions, regardless of the cost. However, allowance prices could be volatile, depending on factors such as changes in energy prices, available technologies, and weather, making it more expensive for sources to meet the cap. To limit price volatility, a cost-containment mechanism called a "safety valve" could be incorporated into the cap-and-trade program to establish a ceiling on the price of allowances. For example, if allowance prices rose to the safety-valve price, the government could sell regulated sources as many allowances as they would like to buy at that price. Although the safety valve could limit price spikes, the emissions cap would be exceeded if the safety valve were triggered. In addition, the baseline used to project future emissions and set the emissions cap can affect the extent to which a cap-and-trade program will contain or reduce emissions. The point in time on which a baseline is set also can influence the environmental benefits of a cap-and-trade program. For example, some environmental interest groups in Europe have claimed that the environmental benefits of including aviation in the EU ETS will be minimal, since the emissions cap will be based on the mean average of aviation emissions from 2004 through 2006, leading to minimal future emissions reductions. In addition, industry groups and other experts have raised concerns that a cap-and-trade program could be administratively burdensome to the government, which would need to determine how to allocate allowances to sources, oversee allowance trading, and monitor and enforce compliance with the program. Generally speaking, an upstream program may have lower administrative costs than a downstream program because it would likely involve fewer emissions sources.

Some members of the aviation industry have said they view open and global cap-and-trade programs positively, although they report that not all types of cap-and-trade programs will work for them. For instance, ICAO and various industry organizations have said they would prefer an open cap-and-trade program (in which airlines are allowed to trade allowances with other sectors and sources) to a closed one (in which airlines are allowed to trade emissions allowances only with one another) because an open program would give airlines more flexibility in meeting their emissions cap.
Staff we met with at the Association of European Airlines expressed willingness for aviation to participate in a cap-and-trade program as long as it is global in scope, is an open system, is not imposed in addition to similar taxes, and does not double-count emissions. In addition, some industry groups and government agencies we met with said that a global program would best ensure that all airlines take part in reducing emissions. Some countries are planning to address aviation emissions through cap-and-trade programs. The European Union implemented the EU ETS in 2005, covering industries representing about 50 percent of its carbon dioxide emissions, and plans to include all covered flights by aircraft operators flying into or out of EU airports starting in 2012. Please see appendix I for more details on the EU ETS, including a comprehensive discussion of the potential legal implications and stakeholders' positions on this new framework. Other countries are considering cap-and-trade programs that would affect the aviation sector. In addition, the United States is currently considering, and has previously considered, cap-and-trade programs:

H.R. 2454, the American Clean Energy and Security Act of 2009, 111th Cong. (2009), would create a cap-and-trade program for greenhouse gas emissions covering entities responsible for 85 percent of emissions in the United States. The current language proposes to regulate producers and importers of any petroleum-based liquid fuel, including aircraft fuel, as well as other entities, and calls for an emissions cap in 2050 that would be 83 percent below 2005 emissions. The bill also calls for the emissions cap in 2012 to be 3 percent below 2005 levels and the cap in 2020 to be 20 percent below 2005 levels. In addition, the Obama Administration's fiscal year 2010 budget calls for the implementation of a cap-and-trade program to regulate emissions in the United States; the budget calls for emissions reductions so that emissions in 2020 are 14 percent below 2005 levels and emissions in 2050 are 83 percent below 2005 levels.

Additionally, in the current Congress, the Cap and Dividend Act proposes a cap-and-trade program for carbon dioxide emissions beginning in 2012 that would include jet fuel emissions. The program's covered entities would include those making the first sale in U.S. markets of oil or a derivative product used as a combustible fuel, including jet fuel. The bill would require the Secretary of the Treasury, in consultation with the EPA Administrator, to establish the program's emissions caps in accordance with the following targets: the 2012 cap would equal 2005 emissions; the 2020 cap would equal 75 percent of 2005 emissions; the 2030 cap would equal 55 percent of 2005 emissions; the 2040 cap would equal 35 percent of 2005 emissions; and the 2050 cap would equal 15 percent of 2005 emissions.

A number of bills creating a cap-and-trade program also were introduced in the 110th Congress but did not pass. For example, a bill sponsored by Senators Boxer, Warner, and Lieberman would have established a cap-and-trade program covering petroleum refiners and importers, among other entities. The costs of the regulation would have been borne by these refiners and importers, who likely would have passed those costs on to airlines through increases in the price of jet fuel.
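The bills' percentage targets translate mechanically into absolute caps once a 2005 baseline is chosen. In the sketch below, only the percentage schedules come from the bills described above; the baseline figure is a placeholder, not actual U.S. emissions data.

```python
# Illustrative translation of the bills' percentage targets into caps.
# BASELINE_2005 is a placeholder, not an actual emissions figure.

BASELINE_2005 = 6_000.0  # million metric tons CO2 (hypothetical)

# Cap and Dividend Act: caps as shares of 2005 emissions.
CAP_AND_DIVIDEND = {2012: 1.00, 2020: 0.75, 2030: 0.55, 2040: 0.35, 2050: 0.15}
# H.R. 2454: 3, 20, and 83 percent below 2005 levels.
HR_2454 = {2012: 0.97, 2020: 0.80, 2050: 0.17}

for name, schedule in (("Cap and Dividend", CAP_AND_DIVIDEND),
                       ("H.R. 2454", HR_2454)):
    for year, share in sorted(schedule.items()):
        print(f"{name} {year}: cap of {BASELINE_2005 * share:,.0f} MMT CO2")
```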
An emissions tax is another market-based policy that could be used to reduce emissions from commercial aviation and other sources. Under a tax on carbon dioxide (or another greenhouse gas), the government would levy a fee for every ton of carbon dioxide emitted. Similar to a cap-and-trade program, a tax would provide a price signal to commercial airlines and other emissions sources, creating an economic incentive for them to reduce their emissions. A carbon tax could be applied to "upstream" sources, such as fuel producers, which may in turn pass along the tax in the form of higher prices to fuel purchasers, including commercial airlines. Similar to a cap-and-trade program, an emissions tax would provide regulated sources, including commercial airlines, with an incentive to reduce emissions in the most cost-effective way, which might include reducing weight, consolidating flights, or using more fuel-efficient aircraft. According to economic theory, an emissions tax should be set at a level that represents the social cost of the emissions. Nonetheless, estimates of the social costs associated with greenhouse gas emissions vary. For example, IPCC reported that the social costs of damages associated with greenhouse gas emissions average about $12 per metric ton of carbon dioxide, with a range of $3 to $95 per ton (all in 2005 dollars). Economic research indicates that an emissions tax is generally a more economically efficient policy tool for addressing greenhouse gas emissions than other policies, including a cap-and-trade program, because it would better balance the social benefits and costs associated with the emissions reductions. In addition, compared with a cap-and-trade program, an emissions tax would provide greater certainty as to the price of emissions. However, it would in concept provide less certainty about emissions reductions, because the reductions would depend on the level of the tax and on how firms and consumers respond to it.

Subsidies are another market-based instrument that could, in principle, provide incentives for sources to reduce their emissions. For example, experts we met with said that the government could use subsidies to encourage industry and others to adopt existing low-emissions technologies and improvements, such as winglets. In addition, some experts told us that NextGen-related technologies are candidates for subsidies because of the technologies' high costs and the benefits they will provide to the national airspace system. According to IPCC, subsidies can encourage the diffusion of new low-emissions technologies and can effectively reduce emissions. For example, as newer, more fuel-efficient engines are developed and become commercially available, subsidies or tax credits could lower their relative costs and encourage airlines to purchase them. Although subsidies are similar to taxes, economic research indicates that some subsidy programs can be economically inefficient and need to be financed (for example, using current tax revenue or by raising taxes). For example, although some subsidy programs could lead to emissions reductions from individual sources, they may also result in an overall increase in emissions by encouraging some firms to remain in business longer than they would have under other policies, such as an emissions tax.
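To put the IPCC social-cost estimates cited above in more familiar terms, the sketch below converts a per-ton carbon price into an implied charge per gallon of jet fuel. The factor of roughly 9.57 kilograms of carbon dioxide per gallon is a commonly used approximation for conventional jet fuel, not a figure from this report.

```python
# Illustrative conversion of a CO2 price into an implied jet fuel charge.
# 9.57 kg CO2 per gallon is an approximate factor for conventional jet
# fuel; the per-ton rates are IPCC's low, average, and high estimates.

CO2_KG_PER_GALLON = 9.57

def charge_per_gallon(price_per_ton_co2: float) -> float:
    """Implied charge in $/gallon for a CO2 price in $/metric ton."""
    return price_per_ton_co2 * CO2_KG_PER_GALLON / 1_000

for rate in (3, 12, 95):
    print(f"${rate}/ton CO2 -> about ${charge_per_gallon(rate):.2f} per gallon")
# Roughly $0.03, $0.11, and $0.91 per gallon, respectively.
```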
Both a cap-and-trade program and an emissions tax would impose costs on the aviation sector and other users of carbon-based fuels. The extent to which the costs associated with an emissions control program are incurred by commercial airlines and passed on to consumers will depend on a number of economic factors, such as the level of market competition and the responsiveness of passengers to changes in price. Officials of some industry organizations we met with said that because airlines are in a competitive industry with a high elasticity of demand, they are constrained in passing on their costs, and the costs to the industry likely will be large. The Association of European Airlines reported that airlines will have very limited ability to pass on the costs of the EU ETS. Furthermore, the International Air Transport Association has estimated that the costs to the industry of complying with the EU ETS will be €3.5 billion in 2012, with annual costs increasing thereafter. Others we interviewed, however, stated that airlines will be able to pass on costs and that the increases in ticket prices will not be large. For example, the EU estimates that airlines will be able to pass on most of the costs of their compliance with the EU ETS, resulting in an average ticket price increase of €9 on a medium-haul flight. In addition, the revenue generated by a tax or by auctioning allowances could be used to lessen the overall impact on the economy, or the impact on certain groups (for example, low-income households) or sectors of the economy, by, for example, reducing other taxes. Finally, according to some airline industry representatives, a program to control greenhouse gas emissions would add to the financial burden the aviation industry and its consumers already face with respect to other taxes and fees. For example, passenger tickets in the United States are subject to a federal passenger ticket tax of 7.5 percent, a segment charge of $3.40 per flight segment, and fees for security and airport facilities (up to $4.50 per airport). In addition, international flights are subject to departure taxes and customs-related fees. However, none of these taxes and fees attempts to account for the cost of greenhouse gas emissions, as a tax or cap-and-trade program would do.
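The pass-through arithmetic behind estimates like the EU's €9 figure can be sketched directly. Per-passenger emissions, the allowance price, and the pass-through shares below are all hypothetical inputs chosen for illustration.

```python
# Illustrative pass-through arithmetic: all inputs are hypothetical.

CO2_PER_PASSENGER_T = 0.45  # metric tons CO2 per medium-haul passenger
ALLOWANCE_PRICE_EUR = 25.0  # € per metric ton of CO2

def fare_increase_eur(pass_through_share: float) -> float:
    """Carbon cost per passenger recovered through the ticket price."""
    return CO2_PER_PASSENGER_T * ALLOWANCE_PRICE_EUR * pass_through_share

for share in (0.5, 0.8, 1.0):
    print(f"{share:.0%} pass-through: fare rises about "
          f"€{fare_increase_eur(share):.2f}")
# At an 80 percent pass-through, these inputs happen to reproduce a figure
# close to the EU's €9 medium-haul estimate.
```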
Mandating the use of certain technologies or placing emissions limits on aircraft and aircraft engines are also potential options for governments to address aircraft emissions. Standards include both technology standards, which mandate a specific control technology, such as a particular fuel-efficient engine, and performance standards, which may require polluters to meet an emissions limit using any available method. The flexibility of performance standards reduces the cost of compliance compared with technology-based standards and, according to DOT, avoids potential aviation safety implications that may result from forcing a specific technology across a wide range of operations and conditions. A performance standard would, for example, place a strict limit on the emissions levels allowed from an engine or aircraft. Regulations on specific emissions have been used to achieve specific environmental goals: ICAO's nitrogen oxide standards place limits on nitrogen oxide emissions from newly certified aircraft engines. These standards were first adopted in 1981 and became effective in 1986. Although no government has yet promulgated standards for aircraft carbon dioxide emissions or fuel economy, emissions standards are being discussed within ICAO's Committee on Aviation Environmental Protection, and in December 2007, a number of environmental interest groups filed petitions with EPA asking the agency to promulgate regulations for greenhouse gas emissions from aircraft and aircraft engines. In addition, the American Clean Energy and Security Act of 2009 would require EPA to issue standards for greenhouse gas emissions from new aircraft and new aircraft engines by December 31, 2012.

Although standards can be used to limit greenhouse gas emissions levels from aircraft, economic research indicates that they generally are not as economically efficient as market-based instruments because they do not effectively balance the benefits and costs associated with the emissions reductions. For example, unlike market-based instruments, technology standards would give engine manufacturers little choice about how to reduce emissions and may not encourage them to find cost-effective ways of controlling emissions. In addition, according to IPCC, because technology standards may require emissions to be reduced in specified ways, they may not provide the flexibility to encourage industry to search for other options for reducing emissions. However, according to EPA, performance standards to address certain emissions from airlines, such as those adopted by ICAO and EPA, gave manufacturers flexibility in deciding which technologies to use to reduce emissions. Nonetheless, although performance standards can provide greater flexibility and therefore be more cost-effective than technology standards, economic research indicates that standards generally provide sources with fewer incentives than market-based approaches to reduce emissions beyond what is required for compliance. Moreover, standards typically apply to new, rather than existing, engines or aircraft, making new engines or aircraft more expensive; as a result, the higher costs may delay purchases of more fuel-efficient aircraft and engines. Current international aviation standards also may require international cooperation. Because ICAO sets standards for international aviation issues, it may be difficult for the U.S. government, or any national government, to set a standard that is not adopted by ICAO, although member states are allowed to do so. Industry groups we met with said that any standards should be set through ICAO and then adopted by the United States and other nations, and, as mentioned earlier, some environmental groups have petitioned EPA to set such standards.

Government-sponsored research into low-fuel-consumption and low-emissions technologies can help foster the development of such technologies, particularly in combination with a tax or a cap-and-trade program. Experts we surveyed said that increased government research and development could be used to encourage a number of low-emissions technologies, including open rotor engines and blended wing-body aircraft. According to the Final Report of the Commission on the Future of the United States Aerospace Industry, issued in 2002, the lack of long-term investment in aerospace research is inhibiting innovation in the industry and economic growth. This study also asserted that national research and development on aircraft emissions is small when compared with the magnitude of the problem and the potential payoffs that research drives.
Experts we met with said that government sponsorship is crucial, especially for long-term fundamental research, because private companies may not have a sufficiently long-term perspective to engage in research that will result in products multiple decades into the future. According to one expert we interviewed, the return on investment is too far in the future to make such research worthwhile for private companies. NASA officials said that private industry generally focuses only on what NASA deems "next-generation conventional tube-and-wing technologies," which are usually projected no more than 20 years into the future. Furthermore, raising fuel prices or placing a price on emissions through a tax or cap-and-trade program is likely to encourage greater research by both the public and private sectors into low-emissions technologies because it increases the payoff associated with developing such technologies.

Various U.S. federal agencies, including NASA and FAA, have long been involved in research on low-emissions technologies. For example, NASA's subsonic fixed-wing research program is devoted to the development of technologies that increase aircraft performance and reduce both noise levels and fuel burn. Through this program, NASA is researching a number of different technologies to achieve those goals, including propulsion, lightweight materials, and drag reduction. The subsonic fixed-wing program seeks to develop three generations of aircraft with increasing degrees of technology development and fuel burn improvement: the next-generation conventional tube-and-wing aircraft, the unconventional hybrid wing-body aircraft, and advanced aircraft concepts. NASA follows goals set by the National Plan for Aeronautics Research and Development and Related Infrastructure for fuel efficiency improvements for each of these generations (see table 6). However, budget issues may affect NASA's research schedule. As we have reported, NASA's budget for aeronautics research was cut by about half in the decade leading up to fiscal year 2007, when the budget was $717 million. Furthermore, NASA's proposed fiscal year 2010 budget calls for significant cuts in aeronautics research, with a budget of $569 million. As NASA's aeronautics budget has declined, the agency has focused more on fundamental research and less on demonstration work. However, as we have reported, NASA and other officials and experts agree that federal research and development efforts are an effective means of achieving emissions reductions in the longer term. According to NASA officials, the research budget for its subsonic fixed-wing research program, much of which is devoted to technologies to reduce emissions and improve fuel efficiency, will be about $69 million in 2009.

FAA has proposed creating a new research consortium to focus on emissions and other issues. Specifically, FAA has proposed the Consortium for Lower Energy, Emissions, and Noise, which would fund, on a 50-50 cost-share basis with private partners, research and advanced development of low-emissions and low-noise technologies, including alternative fuels, over 5 years. FAA intends for the consortium to mature technologies to levels that facilitate uptake by the aviation industry. The consortium contributes to the goal set by the National Plan for Aeronautics Research and Development and Related Infrastructure to reduce fuel burn by 33 percent compared with current technologies. The House FAA Reauthorization Bill (H.R. 915, 111th Cong.
(2009)) would provide up to $108 million in funding for the consortium for fiscal years 2010 through 2012. Lastly, the EU has two major efforts dedicated to reducing aviation emissions. The Advisory Council for Aeronautics Research in Europe (ACARE) is a collaborative group of governments and manufacturers committed to conducting strategic aeronautics research in Europe. According to officials with the European Commission Directorate General of Research, about €150 million to €200 million per year is devoted to basic research through ACARE. Another research effort in Europe is the Clean Sky Joint Technology Initiative, which will provide €1.6 billion over 7 years to fund various demonstration technologies.

We provided a draft copy of this report to the Department of Defense, the Department of State, the Department of Transportation, the National Aeronautics and Space Administration, and the Environmental Protection Agency for their review. The Department of Defense had no comments. The Department of State provided comments via email; these comments were technical in nature, and we incorporated them as appropriate. The Department of Transportation provided comments via email. Most of these comments were technical in nature, and we incorporated them as appropriate. In addition, DOT stated that our statements indicating that future technological and operational improvements may not be enough to offset expected emissions growth are not accurate, given the potential adoption of alternative fuels. We agree that alternative fuels have potential to reduce aircraft emissions in the future; to the extent that a low-emissions (on a life-cycle basis) alternative fuel is available in substantial quantities for the aviation industry, emissions from the industry are likely to be less than they otherwise would be. However, we maintain that given concerns over the potential environmental impacts of alternative fuels, including their life-cycle emissions, as well as the extent to which such fuels will be available in adequate supplies at a competitive price, there may be somewhat limited potential for alternative fuel use to reduce emissions from commercial aircraft, especially in the short term. DOT also suggested that we clarify the sources for our discussion of policy options that can be used to address aviation emissions. As much of that discussion is based on economic research and experience with market-based instruments and other policies, we clarified our sources where appropriate. NASA provided a written response (see app. V) in which it stated that our draft provided an accurate and balanced view of issues relating to aviation and climate change. NASA also provided technical comments that were incorporated as appropriate. EPA provided technical comments via email that were incorporated as appropriate and also provided a written response (see app. VI). EPA was concerned that characterizing aircraft emissions standards as economically inefficient, especially compared with market-based measures, might lead readers to believe that emissions standards cannot be designed in a manner that fosters technological innovation and economic efficiency. EPA officials explained that, based on their experience, standards can be designed to optimize technical responses and provide regulated entities with flexibility for compliance, and that studies show that EPA regulations have generated benefits in excess of costs.
We agree that allowing regulated sources more flexibility in how they meet emissions standards can reduce the costs associated with achieving the emissions reductions. However, economic research indicates that for addressing greenhouse gas emissions, market-based measures such as emissions taxes or cap-and-trade programs would be economically efficient (that is, would maximize net benefits) compared with other approaches, in part because market-based measures give firms and consumers more flexibility to decide when and how to reduce their emissions. Emissions standards, for example, generally give regulated sources fewer incentives to reduce emissions beyond what is required for compliance. The ultimate choice of what specific policy option, or combination of options, governments might use, and how it should be designed, is a complex decision and beyond the scope of our discussion. Finally, EPA was concerned that our draft report did not adequately discuss the increases in fuel consumption and emissions that have resulted from high rates of market growth and expected continued growth. We believe that our report adequately discusses fuel efficiency, as well as fuel consumption and emissions output. In addition, our report discusses that aviation emissions are expected to grow in the long term, despite the potential availability of a number of technological and operational options that can help increase fuel efficiency. In response to this comment, we added information on forecasted fuel use by U.S.-based commercial airlines. We are sending copies of this report to the Secretaries of Defense, State, and Transportation and to the Administrators of the Environmental Protection Agency and the National Aeronautics and Space Administration. This report is also available at no charge on the GAO Web site at http://www.gao.gov.

The European Union's recent decision to include aviation, including U.S. carriers flying into and out of Europe, in its Emissions Trading Scheme (EU ETS) is a complex and controversial matter. Preparations by U.S. carriers are already under way for 2012, the first year aircraft operators will be included in the ETS. The inclusion of aviation in the current EU ETS implicates a number of international treaties and agreements and has raised concerns among stakeholders both within and outside the United States. Many stakeholders within the United States have posited that the inclusion of aviation in the ETS violates provisions of these international agreements and is contrary to international resolutions. Others, primarily in Europe, disagree and find aviation's inclusion in the current ETS to be well within the authority set forth in these agreements. In light of these disagreements, the EU may confront a number of hurdles in attempting to include U.S. carriers in the current EU ETS framework. In 2005, the EU implemented its ETS, a cap-and-trade program to control carbon dioxide emissions from various energy and industrial sectors. On December 20, 2006, the European Commission set forth a legislative proposal to amend the law, or directive, that established the ETS so as to include aviation. On July 8, 2008, the European Parliament adopted the legislative resolution of the European Council, and on October 24, 2008, the Council adopted the directive, signaling its final approval. The directive was published in the Official Journal on January 13, 2009, and became effective on February 2, 2009.
Under the amended ETS Directive, beginning on January 1, 2012, a cap will be placed on total carbon dioxide emissions from all covered flights by aircraft operators flying into or out of an EU airport, with emissions calculated for the entire flight. For 2012, the cap for all carbon dioxide emissions from covered flights will be set at 97 percent of historical aviation emissions. For the 2013-2020 trading period and subsequent trading periods, the cap will be set to reflect annual emissions equal to 95 percent of historical aviation emissions. The cap represents the total quantity of emissions allowances available for distribution to aircraft operators. In 2012 and each subsequent trading period, 15 percent of allowances must be auctioned to aircraft operators; the remaining allowances will be distributed to operators free of charge, based on a benchmarking process. Individual member states, in accordance with the EU regulation, will conduct the auctions for aircraft operators assigned to them, and the auctions will be open for anyone to participate. The number of allowances each member state has to auction depends on its proportionate share of the total verified aviation emissions of all member states for a certain year. The member states will be able to use the revenues raised from the auctions in accordance with the amended directive. For each trading period, aircraft operators can apply to their assigned member state to receive free allowances, and member states will allocate the free allowances in accordance with a process the European Commission establishes for each trading period.

After the conclusion of each calendar year, an aircraft operator must surrender to its assigned member state a number of allowances equal to its total emissions in that year. If an operator's emissions exceed the number of free allowances it receives, it will be required to purchase additional allowances at auction or on the trading market for EU ETS allowances. In addition, in 2012, aircraft operators will be able to submit certified emissions reductions (CER) and emission reduction units (ERU), from projects in other countries undertaken pursuant to the Kyoto Protocol's Clean Development Mechanism and Joint Implementation, to cover up to 15 percent of their emissions in lieu of ETS allowances. For subsequent trading periods, operators' use of CERs and ERUs depends in part on whether a new international agreement on climate change is adopted. However, regardless of whether such an agreement is reached, in the 2013 through 2020 trading period, each operator will be allowed to use CERs and ERUs to cover at least 1.5 percent of its emissions. If a country not participating in the EU ETS adopts measures for reducing the climate change impact of flights to participating countries, the European Commission, in consultation with that country, will consider options to provide for "optimal interaction" between the ETS and that country's regulatory scheme; for example, the Commission may consider excluding flights from that country to participating EU ETS countries. Although 2012 is the first year aircraft operators must comply with the ETS law, preparations in the EU and by U.S. carriers began soon after the law went into force.
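One operator's 2012 compliance arithmetic under the amended directive can be sketched as follows. The operator's verified emissions and free allocation are hypothetical; the 15 percent limit on CERs and ERUs comes from the directive as described above.

```python
# Illustrative 2012 compliance arithmetic for one aircraft operator.
# Emissions and free allocation are hypothetical; the 15 percent CER/ERU
# limit reflects the directive as described in the text.

verified_emissions_t = 1_000_000.0  # operator's 2012 CO2 (hypothetical)
free_allowances_t = 800_000.0       # benchmarked free allocation (hypothetical)

shortfall_t = max(0.0, verified_emissions_t - free_allowances_t)

# Kyoto credits may cover up to 15 percent of the operator's emissions.
credit_limit_t = 0.15 * verified_emissions_t
credits_used_t = min(shortfall_t, credit_limit_t)
to_buy_t = shortfall_t - credits_used_t

print(f"shortfall after free allocation: {shortfall_t:,.0f} t")
print(f"covered with CERs/ERUs (15% limit): {credits_used_t:,.0f} t")
print(f"allowances to buy at auction or on the market: {to_buy_t:,.0f} t")
```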
The inclusion of aviation in the newly amended EU ETS implicates a number of international agreements, policy statements, and a bilateral agreement specific to the United States, including the United Nations Framework Convention on Climate Change (UNFCCC), the Kyoto Protocol to the UNFCCC, the Convention on International Civil Aviation (the "Chicago Convention"), Resolutions of the International Civil Aviation Organization, and the U.S.-EU Air Transport Agreement (the "U.S.-EU Open Skies Agreement"). The UNFCCC, a multilateral treaty on global warming that was signed in 1992 and has been ratified by 192 countries, including the United States, seeks to "achieve stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system." Although the UNFCCC required signatory states to formulate a national response to climate change, its mitigation provisions did not require mandatory national emissions targets. In order to strengthen the commitments articulated in the UNFCCC, the Kyoto Protocol was developed within the UNFCCC's framework and adopted in 1997; the Protocol entered into force in February 2005. The Kyoto Protocol established binding greenhouse gas emissions targets for a number of industrialized nations and the European Economic Community (EEC). Notably, the agreement required these industrialized nations and the EEC to pursue "limitations or reduction of emissions of greenhouse gases … from aviation … working through the International Civil Aviation Organization." As of January 2009, 183 countries had ratified the Kyoto Protocol; the United States had not. Further, the Convention on International Civil Aviation, commonly known as the Chicago Convention, signed on December 7, 1944, sets forth rules on airspace, issues of sovereignty, aircraft licensing and registration, and general international standards and procedures, among other matters. Notably, the treaty sets forth sovereignty provisions, recognizing that a contracting state has exclusive sovereignty over the airspace above its own territory. Provisions potentially applicable to the recent amendment incorporating aviation into the ETS include Articles 11, 12, 15, and 24. Established by the Chicago Convention in 1944, the International Civil Aviation Organization (ICAO) is an agency of the United Nations and is tasked with fostering the planning and development of international aviation. ICAO has issued a number of Assembly Resolutions, which are statements of policy rather than law, including nonbinding ICAO Resolution A36-22, relating to environmental protection and aviation emissions. This resolution supersedes ICAO Resolution A35-5, which had endorsed the further development of an open emissions trading scheme for international aviation, and calls for mutual agreement between contracting states before an emissions trading scheme is implemented. Additionally, the Resolution formed a new Group on International Aviation and Climate Change (GIACC), tasked with developing and recommending to the ICAO Council a program of action to address international aviation and climate change; GIACC is due to report to the Council later this year. Finally, the U.S.-EU Air Transport Agreement, signed on April 25 and 30, 2007, and provisionally applied as of March 30, 2008, provided greater flexibility for flights between the United States and the EU, authorizing every U.S.
and every EU airline to operate without restriction on the number of flights, aircraft, and routes; to set fares according to market demand; and to enter into cooperative arrangements, including codesharing, franchising, and leasing. It includes enhanced opportunities for EU investment in carriers from almost 30 non-EU countries and enhanced regulatory cooperation with regard to competition law, government subsidies, the environment, consumer protection, and security. Among the provisions potentially applicable to the newly amended EU ETS are Article 12, relating to charges for use of airports and related facilities and services, and Article 3, which prohibits a party from unilaterally limiting service or aircraft type. Although a number of international agreements, policy statements, and bilateral agreements are in place currently, climate change policies are constantly changing. In December 2009, the Conference of the Parties to the UNFCCC will meet in Copenhagen to discuss and negotiate an "agreed outcome" in order to implement the UNFCCC "up to and beyond 2012."

A number of stakeholders have expressed concern as to the legal basis for aviation's inclusion in the EU ETS. In the United States, within the EU community, and in countries throughout the world, public and private entities, as well as legal scholars, have expressed opinions as to whether the inclusion of aviation in the ETS complies with international law. Stakeholders within the United States, such as the executive branch, members of Congress, and the Air Transport Association (ATA), have weighed in on the legality of the newly amended EU ETS, which requires compliance by U.S. carriers. In 2007 and 2008, the executive branch expressed the view that the imposition of the ETS was inconsistent with international law, specifically the Chicago Convention and the U.S.-EU Air Transport Agreement. While the executive branch has not articulated a position on this issue since mid-2008, it has expressed the importance of climate change and of developing a solution on a global level. ATA, a trade association representing the principal U.S. airlines, also has concluded that the EU ETS's inclusion of aviation violates international law, specifically the Chicago Convention. ATA argues that imposition of the ETS on U.S.-based carriers is contrary to Articles 1, 12, 11, and 15 and, potentially, in the alternative, Article 24. In summary, ATA argues that the ETS, as amended, violates the sovereignty and authority provisions of Articles 1 and 12. Article 1, which provides contracting states exclusive sovereignty over their airspace, is violated by the EU's extraterritorial reach, which covers emissions of non-EU airlines in other states' airspace. Further, Article 12, which requires contracting states to ensure that aircraft under their jurisdiction comply with rules and regulations relating to the flight and maneuver of aircraft, also is violated. ATA argues that Article 12 gives ICAO primary authority, under the Convention, to set rules for the "flight and maneuver of aircraft" over the "high seas," which precludes the application of rules by one state over the airlines of another state to the extent those rules are inconsistent with ICAO rules. Thus, because ICAO has stated that one state can apply emissions trading to the airlines of another state only through mutual consent, ATA contends that the EU's emissions trading coverage of non-agreeing, non-EU airlines over the high seas is inconsistent with ICAO's authority.
Additionally, with respect to Article 11, ATA argues that although Article 11 provides authority to states to establish certain rules for the admission and departure of aircraft, that authority is limited. States may only establish admission and departure rules consistent with the remainder of the Chicago Convention, which prevents the EU from arguing that Article 11 authorizes EU action. In any event, ATA contends that any such rules may only apply “upon entering or departing from or while within the territory of that State,” whereas the European scheme reaches outside European territory. Further, ATA finds that the ETS is contrary to Article 15 of the Chicago Convention because it imposes a de facto charge for the right to enter or exit an EU member state. In the alternative, ATA argues that there could be a violation of Article 24 of the Convention, which exempts fuel on board an aircraft from duties, fees, and charges. Because the law calculates emissions based on fuel consumption, the purchase of greenhouse gas permits may constitute a “similar … charge” on fuel on board, according to ATA. Additionally, Article 11 of the U.S.-EU Air Transport Agreement mirrors Article 24 of the Convention but extends the freedom from taxation and charges to fuel purchased in the EU. Thus, ATA argues, the prohibition against the EU levying a fuel tax applies to fuel already on board as well as to fuel purchased in the EU. ATA has publicly expressed harsh opposition to the ETS’s inclusion of aviation and has stated that there will be a number of legal challenges from around the globe, including from the United States. ATA has additionally expressed discontent with the newly amended ETS law as a matter of policy, arguing that it siphons money out of aviation that could otherwise be reinvested in improving technologies that reduce emissions. Finally, the Congress is considering the House FAA Reauthorization Bill, H.R. 915, 111th Cong. (2009), which includes an expression of the Sense of the Congress with respect to the newly amended EU ETS. The bill states that the EU’s imposition of the ETS, without working through ICAO, is inconsistent with the Chicago Convention and other relevant air service agreements, and “antithetical to building cooperation to address effectively the problem of greenhouse gas emissions by aircraft engaged in international civil aviation.” The bill recommends working through ICAO to address these issues. Stakeholders in the EU community and a not-for-profit business organization have expressed both legal and policy views on the newly amended ETS as well. An independent contractor for the European Commission’s Directorate-General of the Environment (DG Environment) and the International Emissions Trading Association (IETA) have both issued opinions in support of aviation’s inclusion in the ETS. IETA supports the inclusion of aviation in the EU ETS from a policy perspective but has not opined on the legality of its inclusion. From a policy standpoint, IETA supports applying the scheme to both EU and non-EU carriers so as to share the burden of combating climate change. However, the organization has expressed concerns over a number of issues, including access to project credits, the amount of allowances available for auctioning, and the allocation calculation. The contractor’s legal opinion for DG Environment concludes, among other findings, that the ETS does not impose a charge for the right to enter or exit a member state and that, consequently, Article 15 is inapplicable.
Finally, the opinion concludes that Article 24 of the Convention does not apply to the Emissions Trading System because trading allowances are “fundamentally different from customs duties.” Additionally, the opinion finds policy support for these legal conclusions in ICAO Resolution A35-5 and bilateral air transport agreements. Countries outside the European Community have joined the United States in expressing concerns regarding the imposition of the ETS on non-EU carriers. In an April 2007 letter to the German Ambassador to the European Union, the United States, Australia, China, Japan, South Korea, and Canada conveyed a “deep concern and strong dissatisfaction” with the then-proposal to include international civil aviation within the scope of the EU ETS. The letter asks that the EU ETS not cover non-EU aircraft unless done by mutual consent. Although supportive of the reduction of greenhouse gas emissions, the letter’s signatories argue that the “unilateral” imposition of the ETS on non-EU carriers would potentially violate the Chicago Convention and bilateral aviation agreements with the parties to the letter. Moreover, they write, the proposal runs counter to the international consensus that ICAO should handle matters of international aviation, which was articulated with the ICAO Assembly and the ICAO Council in 2004 and 2006, respectively. The letter closes with a reservation of the right to take appropriate measures under international law if the ETS is imposed. Given the controversial nature and complexity of aviation’s inclusion in the EU ETS, a number of scholars in the legal community, both within the United States and the EU, have provided explanatory articles or position papers on the consistency of the EU’s plans with its international legal obligations. One U.S. law review article by Daniel B. Reagan argues that international aviation emission reductions should be pursued through ICAO given the “political, technical, and legal implications raised by the regulation.” The article sets forth that, politically, ICAO is the appropriate body because it can work towards uniformity in a complex regulatory arena, incidentally resulting in increased participation from a variety of stakeholders, reduced resentment, and a reduced likelihood of noncompliance and legal challenges. Further, ICAO has the expertise necessary to technically design aviation’s emission reduction regime and is in a position to consider the “economic, political, and technical circumstances of its member states … .” Finally, Reagan argues that pursuing an emissions reduction regime through ICAO could avoid the legal challenges likely under the current ETS, as ICAO could facilitate a common understanding of contentious provisions. In conclusion, he proposes that the EU channel the energy behind implementation of the current regime into holding ICAO accountable for fulfilling its environmental duties. In contrast, a law review article published in the European Environmental Law Review in 2007 by Gisbert Schwarze argues that bringing aviation into the EU ETS falls clearly within existing law and is, in fact, mandated. The article presents the case that neither existing traffic rights in member states, bilateral air transport agreements, nor the Chicago Convention poses any legal obstacle. He argues, in fact, that the EU has a mandate under the UNFCCC and the Kyoto Protocol to implement climate change policies that include aviation.
First, the article sets forth that the inclusion of aviation does not restrict existing traffic rights or allow or disallow certain aircraft operations in different member states, but rather merely brings the amount of emissions into the decision-making process. Further, Schwarze explains that imposing the ETS on carriers flying in and out of the EU is well within the Chicago Convention. Article 1 of the Convention provides contracting states exclusive sovereignty over their airspace, which gives the EU the authority to impose obligations relating to arrivals and departures, so long as there is no discrimination on the basis of nationality, as required by Article 11. Additionally, the article sets forth that Article 12, regarding the flight and maneuver of aircraft, is not applicable because, as argued above, the ETS does not regulate particular aircraft operations. Article 15, which covers charges, is similarly inapplicable because emissions allowances obtained on the free market or through the auctioning process do not constitute a charge. Finally, Article 24 is inapposite as well because the emissions trading system does not constitute a customs duty. Additionally, Schwarze argues that the bilateral air transport agreements with various nations, such as the Open Skies Agreement with the United States, do not pose any legal barriers to the inclusion of aviation in the ETS. These agreements contain a prohibition of discrimination similar to Article 11 of the Chicago Convention and a fair competition clause, which requires fair competition among signatories in international aviation and prohibits a party from unilaterally limiting traffic. The article argues that, so long as the ETS operates without discrimination, it is in conformity with the principle of a sound and economic operation of air services and therefore satisfies the fairness clause. And because the ETS provides only an incentive to reduce emissions, it does not regulate the amount of air traffic. Finally, Schwarze argues that not only is the inclusion of aviation in the EU ETS legally sound, but the UNFCCC and the Kyoto Protocol mandate its inclusion. The UNFCCC requires all parties to the treaty to adopt national policies and take corresponding measures on the mitigation of climate change consistent with the objective of the convention, recognizing that this can be done “jointly with other parties.” Additionally, the Kyoto Protocol, which sought to strengthen the UNFCCC, required Annex 1 parties to pursue “limitations or reduction of emissions of greenhouse gases … from aviation … working through the International Civil Aviation Organization.” And finally, although not legally binding, ICAO Resolution A35-5 endorses the development of an open emissions trading system for international aviation. As noted above, several parties have reserved the right to take appropriate actions under international law if the ETS is imposed. If challenges are brought, they could be brought under the Chicago Convention or air service agreements (e.g., the U.S.-EU Air Transport Agreement), or potentially in individual member state courts. Each of these avenues has its own dispute resolution procedure. If a challenge is brought under the Chicago Convention after failed negotiations, Article 84 of the Convention (Settlement of Disputes) is invoked. Article 84 provides that a disagreement between two or more contracting states that cannot be settled by negotiation will be decided by the Council.
A decision by the Council can be appealed to an agreed-upon ad hoc tribunal or to the Permanent Court of International Justice (now the International Court of Justice), whose decision will be binding. Air service agreements additionally have dispute resolution procedures, and the U.S.-EU Air Transport Agreement is no exception. Article 19 of the U.S.-EU Air Transport Agreement provides that parties to a dispute may submit to binding arbitration through an ad hoc tribunal if negotiations fail. If there is noncompliance with the tribunal’s decision and a subsequent agreement between the parties is not reached within 40 days, the other party may suspend the application of comparable benefits arising under the agreement.

The survey tool used to assess options for reducing commercial aircraft emissions is below, complete with detailed results. We do not include the responses for open-ended questions.

Instructions for Completing This Tool: You can answer most of the questions by checking boxes or filling in blanks. A few questions request short narrative answers. Please note that these blanks will expand to fit your answer. Please use your mouse to navigate the document, clicking on the field or checking the box you wish to fill in. Do not use the “Tab” or “Enter” keys, because doing so may cause formatting problems throughout the document. To select a box, click on it once; to deselect a box, double click on it. If you prefer, you may print the tool, complete it by hand, and return it by fax. Please save the completed document to your desktop or hard drive and e-mail it as an attachment to [email protected] by January 9, 2009. If you complete this tool by hand, please fax the completed tool to Matthew Rosenberg at GAO at 312-220-7726. If you have any questions, please contact Matthew Rosenberg, Senior Analyst, at 312-220-7645 or [email protected], or Cathy Colwell, Assistant Director.

1. How would you rate your overall knowledge of technological options to reduce aircraft carbon dioxide (CO2) emissions, such as aircraft engines and aircraft design technologies, and the costs of those technologies? (The figure before each response is the number of experts selecting it: 0 None, skip to question 9; 6 Minimal, skip to question 9; 4 Basic, continue to question 2; 1 Proficient, continue to question 2; 7 Advanced, continue to question 2.)

2. In your expert opinion, what is the potential for future fuel savings and CO2 emissions reductions for the following options? (The technological options rated in questions 2 through 8 included, among others, open rotor engines, geared turbofan engines, and airframe composites.)

3. In your expert opinion, what would be the potential R&D costs to develop the following options for commercial use?

4. Given your answer to question two, what would be the potential costs to the air transport industry to procure, operate, and maintain the following options to achieve those fuel savings and CO2 emissions reductions? (Low costs / Medium costs / High costs / Don’t know)

5. In your expert opinion, what is the level of public acceptance for the following conceptual options?

6. In your expert opinion, given our best knowledge about future market conditions, and absent government intervention, how long would it take for the private sector to adopt these technologies? (Short timeframe, less than 5 years / Medium timeframe, 5-15 years / Long timeframe, more than 15 years / Never)

9. How would you rate your overall knowledge of operational options to reduce aircraft fuel usage and CO2 emissions? (None / Minimal / Basic / Proficient / Advanced)

11. In your expert opinion, what would be the potential R&D costs to develop the following options for commercial use?

12. Given your answer to question ten, what would be the potential costs to the air transport industry to adopt the following options to achieve those fuel savings and CO2 emissions reductions?

13. In your expert opinion, what is the level of public acceptance for the following options? (The operational options rated in questions 10 through 16 included, among others, reduction of on-board weight, limited use of paint on airframes, use of the auxiliary power unit (APU) on the ground at the gate, Automatic Dependent Surveillance - Broadcast (ADS-B), Required Navigation Performance (RNP), and Continuous Descent Arrivals (CDA).)

14. In your expert opinion, given our best knowledge about future market conditions, and absent government intervention, how long would it take for the private sector to adopt these technologies? (Short timeframe, less than 5 years / Medium timeframe, 5-15 years / Long timeframe, more than 15 years / Never)

17. How would you rate your overall knowledge of alternative fuel options to reduce aircraft CO2 emissions, such as biofuels? (1 None, skip to question 25; 7 Minimal, skip to question 25; 2 Basic, continue to question 18; 3 Proficient, continue to question 18; 5 Advanced, continue to question 18.)

18. In your expert opinion, compared to jet fuel currently in use, what is the potential for future reduction of CO2 (on a life-cycle basis) for the following options? (The alternative fuel options rated in questions 18 through 23 included, among others, coal-to-liquid fuels and hydrotreated palm and soy oils.)

19. In your expert opinion, what would be the potential R&D costs to develop the following options for commercial use?

20. In your expert opinion, what is the level of public acceptance for the following options?

21. In your expert opinion, given our best knowledge about future market conditions, and absent government intervention, how long would it take for the private sector to adopt these technologies? (Short timeframe, less than 10 years / Medium timeframe, 10-20 years / Long timeframe, more than 20 years)

24. What other government actions, if any, should be undertaken to address greenhouse gas emissions from commercial aircraft?

25. Do you have any other comments about anything covered in this rating tool? If so, please comment here.

To address our objectives, we interviewed selected officials knowledgeable about the aviation industry, the industry’s impact on the production of greenhouse gas and other emissions that have an impact on the climate, and options for reducing these emissions. We interviewed federal officials from the Environmental Protection Agency (EPA), FAA, the National Aeronautics and Space Administration (NASA), and the Departments of Defense and State. We also met with representatives of ICAO—a United Nations agency. We interviewed representatives of industry groups, environmental groups, airlines, aircraft manufacturers, aircraft engine manufacturers, alternative fuels manufacturers, economists, and academics. We interviewed officials based in the United States and abroad. We interviewed representatives of the EU and associations about the EU ETS. We completed a literature search and reviewed relevant documentation, studies, and articles related to our objectives.
To specifically address commercial aviation’s contribution to emissions, we asked our interviewees to identify the primary studies that estimate current and future emissions. As a result, we reviewed and summarized the findings of the Intergovernmental Panel on Climate Change’s (IPCC) 1999 special report on aviation and its 2007 Fourth Assessment Report, which were most frequently named as the most authoritative sources on global aviation emissions. To specifically address technological and operational options to reduce commercial aviation’s contribution to greenhouse gases and other emissions that can have an impact on the climate, we contracted with the National Academy of Sciences to identify and recruit experts in aviation and environmental issues. We interviewed 18 experts identified by the Academy, including those with expertise in aeronautics, air traffic management, atmospheric science, chemistry, climate change modeling, economics, environmental science, and transportation policy. In conducting these interviews, we used a standardized interview guide to obtain consistent answers from our experts and had the interviews recorded and transcribed. Based on these interviews, we assembled a list of options for reducing aviation emissions, and we asked our experts to assess these options on several dimensions. We provided each of our experts with a standardized assessment tool that instructed the experts to assess the potential of each technological and operational option on the following dimensions: potential fuel savings and emissions reductions, potential research and development costs, potential cost to the airline industry, potential for public acceptance, and time frames for adoption. For each dimension, we asked the experts to assess each option on a three-point scale. For example, we asked the experts to rate each option as having “low potential,” “medium potential,” or “high potential” for fuel savings and carbon dioxide emissions reductions. We directed the experts not to answer questions about areas in which they did not have specific knowledge or expertise. As a result, throughout our report, the number of expert responses discussed for each emissions reduction option is smaller than 18, the number of experts we interviewed. Besides asking the experts to assess the potential of technological options, such as new aircraft and engine designs, we asked them to assess the potential of alternative fuels to reduce carbon dioxide emissions. Furthermore, the operational options we asked the experts to assess included options that the federal government must implement, such as air traffic management improvements, as well as options that the airlines can exercise to reduce fuel burn. We analyzed and summarized the experts’ responses in order to identify those technological and operational options that the experts collectively identified as holding the most promise for reducing emissions. To analyze the results, for each option and dimension, we counted the numbers of experts that selected the “low,” “medium,” and “high” responses. We then determined an overall, or group, answer for each question based on the response the experts most commonly selected for each option and dimension. However, if approximately the same number of experts selected a second response, then we chose both responses as the group answer. For example, rather than reporting that the experts rated a particular option as having “high” potential, we instead reported that they rated it as having “medium-high” potential if approximately the same number of experts selected the “high” response as selected the “medium” response. Finally, if approximately the same number of experts selected all responses, then we determined that there was no consensus on that question and reported the result as such.
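The group-answer rule just described is, in effect, a small algorithm, and a short sketch may help make it concrete. The code below is our illustration, not GAO's actual analysis procedure; in particular, the tie tolerance of one response is an assumed stand-in for the report's "approximately the same number" judgment.

```python
from collections import Counter

def group_answer(responses, tie_tolerance=1):
    """Summarize expert ratings for one option/dimension pair.

    responses: "low"/"medium"/"high" ratings; experts without relevant
    expertise simply do not appear in the list.
    tie_tolerance: assumed stand-in for "approximately the same number."
    """
    ranked = Counter(responses).most_common()  # e.g., [("high", 8), ("medium", 7), ("low", 2)]
    # All three responses drawn roughly evenly: no consensus.
    if len(ranked) == 3 and ranked[0][1] - ranked[2][1] <= tie_tolerance:
        return "no consensus"
    # Top two responses roughly tied: report a combined answer.
    if len(ranked) >= 2 and ranked[0][1] - ranked[1][1] <= tie_tolerance:
        pair = sorted((ranked[0][0], ranked[1][0]), key=["low", "medium", "high"].index)
        return "-".join(pair)
    # Otherwise the most common response stands alone.
    return ranked[0][0]

# Eight "high," seven "medium," and two "low" ratings yield "medium-high."
print(group_answer(["high"] * 8 + ["medium"] * 7 + ["low"] * 2))
```

With a clear plurality, the function returns the modal response alone, which matches how the report says group answers were chosen.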
In order to determine government options for reducing aviation emissions, we interviewed relevant experts, including those 18 recruited by the National Academy of Sciences, about the potential use and the costs and benefits of these options. We asked our interviewees to provide opinions and information on a variety of governmental options, including carbon taxes, cap-and-trade programs, aircraft and engine standards, government-sponsored research, and governmental subsidies. We looked at governmental actions that have been taken in the past and at those that have been proposed. We reviewed economic research on the economic impact of policy options for addressing greenhouse gas emissions. Our review focused on whether policy options could achieve emissions reductions from global sources in an economically efficient manner (for example, maximize net benefits). We interviewed EU officials to understand how the EU ETS will work and to determine issues related to this scheme, which is slated to include certain flights into and out of EU airports starting in 2012. Additionally, we reviewed and summarized the EU ETS and the legal implications of the scheme (see app. I). In addition to the contact above, Cathy Colwell and Faye Morrison (Assistant Directors), Lauren Calhoun, Kate Cardamone, Brad Dubbs, Elizabeth Eisenstadt, Tim Guinane, Michael Hix, Sara Ann Moessbauer, Josh Ormond, Tim Persons (Chief Scientist), Matthew Rosenberg, and Amy Rosewarne made key contributions to this report. | Aircraft emit greenhouse gases and other emissions, contributing to increasing concentrations of such gases in the atmosphere. Many scientists and the Intergovernmental Panel on Climate Change (IPCC)--a United Nations organization that assesses scientific, technical, and economic information on climate change--believe these gases may negatively affect the earth's climate. Given forecasts of growth in aviation emissions, some governments are taking steps to reduce emissions. In response to a congressional request, GAO reviewed (1) estimates of aviation's current and future contribution to greenhouse gas and other emissions that may affect climate change; (2) existing and potential technological and operational improvements that can reduce aircraft emissions; and (3) policy options for governments to help address commercial aircraft emissions. GAO conducted a literature review; interviewed representatives of government agencies, industry and environmental organizations, airlines, and manufacturers; and interviewed and surveyed 18 experts in economics and aviation on improvements for reducing emissions from aircraft. GAO is not making recommendations. Relevant agencies provided technical comments, which we incorporated as appropriate, and EPA said emissions standards can have a positive benefit-to-cost ratio and be an important part of policy options to control emissions.
According to IPCC, aviation currently accounts for about 2 percent of human-generated global carbon dioxide emissions, the most significant greenhouse gas--and about 3 percent of the potential warming effect of global emissions that can affect the earth's climate, including carbon dioxide. IPCC's medium-range estimate forecasts that by 2050 the global aviation industry, including aircraft emissions, will emit about 3 percent of global carbon dioxide emissions and about 5 percent of the potential warming effect of all global human-generated emissions. Gross domestic product growth is the primary driver in IPCC's forecasts. IPCC also made other assumptions about future aircraft fuel efficiency, improvements in air traffic management, and airport and runway capacity. IPCC's 2050 forecasts for aviation's contribution to global emissions assumed that emissions from other sectors will continue to grow. If other sectors make progress in reducing emissions and aviation emissions continue to grow, aviation's relative contribution may be greater than IPCC estimated; on the other hand, if other sectors do not make progress, aviation's relative contribution may be smaller than estimated. While airlines currently rely on a range of improvements, such as fuel-efficient engines, to reduce emissions, some of which may have limited potential to generate future reductions, experts we surveyed expect a number of additional technological, operational, and alternative fuel improvements to help reduce aircraft emissions in the future. However, according to experts we interviewed, some technologies, such as advanced airframes, have potential, but may be years away from being available, and developing and adopting them is likely to be costly. In addition, according to some experts we interviewed, incentives for industry to research and adopt low-emissions technologies will be dependent to some extent on the level and stability of fuel prices. Finally, given expected growth of commercial aviation as forecasted by IPCC, even if many of these improvements are adopted, it appears unlikely they would greatly reduce emissions by 2050. A number of policy options to address aircraft emissions are available to governments and can be part of broader policies to address emissions from many sources including aircraft. Market-based measures can establish a price for emissions and provide incentives to airlines and consumers to reduce emissions. These measures can be preferable to other options because they would generally be more economically efficient. Such measures include a cap-and-trade program, in which government places a limit on emissions from regulated sources, provides them with allowances for emissions, and establishes a market for them to trade emissions allowances with one another, and a tax on emissions. Governments can establish emissions standards for aircraft or engines. In addition, government could increase government research and development to encourage development of low-emissions improvements. |
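The cap-and-trade mechanics described in the summary above reduce, for a single airline, to comparing verified emissions against allowances held, with emissions derived from fuel burned. The sketch below illustrates that arithmetic; the factor of roughly 3.15 tonnes of CO2 per tonne of jet fuel and the traffic figures are our assumptions for the example, not figures from the report.

```python
def allowances_to_buy(fuel_burn_tonnes, free_allocation_tonnes):
    """Tonnes of CO2 allowances an airline must buy on the market (or
    could sell, if negative) under a cap-and-trade scheme."""
    EMISSIONS_FACTOR = 3.15  # assumed tonnes of CO2 per tonne of jet fuel
    emissions = fuel_burn_tonnes * EMISSIONS_FACTOR
    return emissions - free_allocation_tonnes

# Assumed figures: 200,000 t of fuel burned, 500,000 t of CO2 allocated free.
shortfall = allowances_to_buy(200_000, 500_000)
print(f"allowances to purchase: {shortfall:,.0f} t CO2")  # 130,000 t
```

Because emissions are computed from fuel consumption, any price on allowances acts economically like a price on fuel, which is the basis for ATA's Article 24 argument discussed earlier.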
You are an expert at summarizing long articles. Proceed to summarize the following text:
FNS’ quality control system measures the states’ performance in accurately determining food stamp eligibility and calculating benefits. Under this system, the states calculate their payment errors by annually drawing a statistically valid sample of active cases (a minimum of 300 to 1,200 cases, depending on the state’s average monthly caseload), by reviewing the case information, and by making home visits to determine whether households were eligible for benefits and received the correct benefit payment. FNS regional offices validate the results by reviewing a subset of each state’s sample to determine its accuracy, making adjustments to the state’s overpayment and underpayment errors as necessary. To determine each state’s combined payment error rate, FNS adds overpayments and underpayments, then divides the sum by total food stamp benefit payments (a computation illustrated in the sketch below). As shown in figure 1, the national combined payment error rate for the Food Stamp Program was consistently above 9 percent from fiscal year 1993 through fiscal year 1999. About 70 percent of the food stamp payment errors resulted in overpayments to recipients, while about 30 percent resulted in underpayments. FNS’ payment error statistics do not account for the states’ efforts to recover overpayments; in fiscal year 1999, the states collected $213 million in overpayments. (See app. II for information about states’ error rates and collections of overpayments.) Errors in food stamp payments occur for a variety of reasons. For example, food stamp caseworkers may miscalculate a household’s eligibility and benefits because of the program’s complex rules for determining who are members of the household, whether the value of a household’s assets (mainly vehicles and bank accounts) is less than the maximum allowable, and the amount of a household’s earned and unearned income and deductible expenses. Concerning the latter, food stamp rules require caseworkers to determine a household’s gross monthly income and then calculate a net monthly income by determining the applicability of six allowable deductions: a standard deduction, an earned income deduction, a dependent care deduction, a medical deduction, a child support deduction, and an excess shelter cost deduction. (See app. III for the factors that state caseworkers consider in calculating a household’s excess shelter cost deduction.) The net income, along with other factors such as family size, becomes the basis for determining benefits. Other payment errors occur after benefits have been determined, primarily because households do not always report changes in income that can affect their benefits and the states do not always act on reported changes, as required by food stamp law. To reduce the likelihood of payment errors, FNS regulations require that states certify household eligibility at least annually and establish requirements for households to report changes that occur after certification. In certifying households, states are required to conduct face-to-face interviews, typically with the head of the household, and obtain pertinent documentation at least annually. In establishing reporting requirements, the states have the option of requiring households to use either (1) monthly reporting, in which households with earned income file a report on their income and other relevant information each month; or (2) change reporting, in which all households report certain changes, including income fluctuations of $25 or more, within 10 days of the change.
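A short sketch of the combined error rate computation follows. The dollar figures are hypothetical, chosen to mirror the roughly 70/30 split between overpayments and underpayments noted above; they are not FNS data.

```python
def combined_payment_error_rate(overpayments, underpayments, total_benefits):
    """FNS adds overpayments and underpayments (errors in both directions
    count against a state, not netted) and divides by total benefits paid."""
    return (overpayments + underpayments) / total_benefits

# Hypothetical state: $70M overpaid, $30M underpaid, $1B in total benefits.
rate = combined_payment_error_rate(70e6, 30e6, 1e9)
print(f"combined payment error rate: {rate:.1%}")  # -> 10.0%
```

Because the two error types are added rather than netted, a state that overpays some households and underpays others accumulates error from both directions.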
According to FNS, many states have shifted from monthly reporting to change reporting because of the high costs associated with administering a monthly reporting system. However, change reporting is error-prone because households do not always report changes and the states do not always act on them in a timely fashion, if at all. Each of the 28 states we contacted has taken many actions to reduce payment error rates. Further, 80 percent of the states took each of five actions: (1) case file reviews by supervisors or special teams to verify the accuracy of food stamp benefit payments, (2) special training for caseworkers, (3) analyses of quality control data to identify causes of payment errors, (4) electronic database matching to identify ineligible participants and verify income and assets, and (5) use of computer software programs to assist caseworkers in determining benefits. It is difficult to link a specific state action to its effect on error rates because other factors also affect error rates. However, almost all state food stamp officials cited case file reviews by supervisors and others as being one of their most effective tools for reducing error rates. Additionally, state officials most often cited the competing pressure of implementing welfare reform as the primary challenge to reducing food stamp payment errors in recent years. The following subsection summarizes our findings on state actions to reduce payment errors. Case file reviews to verify payment accuracy: In 26 of the 28 states we contacted, supervisors or special teams reviewed case files to verify the accuracy of benefit calculations and to correct any mistakes before the state’s quality control system identified them as errors. Supervisory reviews, used by 22 states, typically require that supervisors examine a minimum number of files compiled by each caseworker. For example, Alaska requires monthly supervisory review of five cases for each experienced caseworker and all cases for each new caseworker. Furthermore, 20 states, including many of the states using supervisory review, use special teams to conduct more extensive reviews designed to identify problems in specific offices, counties, or regions. Reviewers correct mistakes before they are detected as quality control errors, where possible; identify the reasons for the mistakes; and prescribe corrective actions to prevent future errors. For example, in Genesee County, Michigan, the teams read about 2,800 case files, corrected errors in nearly 1,800, and provided countywide training in such problem areas as shelter expenses and earned income. In Massachusetts, caseworkers reviewed all case files in fiscal year 2000 because of concerns that the state’s error rate would exceed the national average and that FNS would impose financial sanctions. Massachusetts corrected errors in about 13 percent of the case files reviewed; these would have been defined as payment errors had they been identified in a quality control review. Special training for caseworkers: In addition to the training provided to new caseworkers, 27 states provided a range of training for new and experienced caseworkers aimed at reducing payment errors. For example, these states conducted training specifically targeted to calculating benefits for certain categories of food stamp households, such as those with earned income or those with legal noncitizens, for which rules are more likely to be misapplied. 
Many states also conducted training to update caseworkers and supervisors on food stamp policy changes that affect how benefits are calculated; new policies often introduce new calculation errors because caseworkers are unfamiliar with the revised rules for calculating benefits, according to several state officials. Analysis of quality control data: Twenty-five states conducted special analyses of their quality control databases to identify common types of errors made in counties or local offices for use in targeting corrective actions. For example, California created a quality control database for the 19 largest of its 54 counties and generated monthly reports for each of the 19 counties to use. Georgia assigned a staff member to review each identified quality control error and work with the appropriate supervisor or worker to determine why the error occurred and how it could be prevented in the future. With this process, officials said, counties are much more aware of their error cases and now perceive quality control as a tool for reducing errors. In Michigan, an analysis of quality control data revealed that caseworkers were misinterpreting a policy that specified when to include adults living with a parent in the same household, and changes were made to clarify the policy. Electronic database matching: All 28 states matched their food stamp rolls against other state and federal computer databases to identify ineligible participants and to verify participants’ income and asset information. For example, all states are required to match their food stamp rolls with state and local prisoner rolls. In addition, most states routinely match their food stamp participants with one or more of the following: (1) their department of revenue’s “new hires” database (a listing of recently employed individuals in the state) to verify income, (2) the food stamp rolls of neighboring states to identify possible fraud, and (3) their department of motor vehicle records to verify assets. Officials in four states said the “new hires” match reduced payment errors by allowing caseworkers to independently identify a change in employment status that a household had not reported and that would likely affect its benefits. Mississippi food stamp officials said the vehicle match helped reduce payment errors because caseworkers verified the value of applicants’ vehicles as part of determining eligibility. Computer assistance in calculating benefits: Twenty-three states had developed computer software for caseworkers to use in determining an applicant’s eligibility and/or in calculating food stamp benefit amounts. Twenty-two of the states have software that determines eligibility and calculates benefits based on information caseworkers enter; the remaining state’s software is limited to calculating benefits after the caseworker has determined eligibility. These programs may also cross-check information to correct data entry errors; provide automated alerts that, for example, a household member is employed; and generate reminders for households, for example, to schedule an office visit. The most advanced software programs had online interview capabilities, which simplified the application process. Some states had automated case management systems that integrated Food Stamp Program records with their Medicaid and other assistance programs, which facilitated the administration of these programs. (A simplified sketch of the benefit arithmetic such software encodes follows.)
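The sketch below is illustrative only: it computes a household's net income using two of the six allowable deductions described earlier (the standard deduction and the excess shelter deduction, which generally applies to shelter costs exceeding 50 percent of income after other deductions) and derives an illustrative benefit. Every dollar figure, the shelter cap, and the 30 percent benefit reduction rate are assumptions for the example rather than actual program parameters, and a real system would also handle the earned income, dependent care, medical, and child support deductions.

```python
def monthly_benefit(gross_income, shelter_costs, household_size):
    """Simplified food stamp benefit sketch; all parameters are assumed."""
    STANDARD_DEDUCTION = 134          # assumed flat deduction
    SHELTER_CAP = 275                 # assumed ceiling on the shelter deduction
    MAX_ALLOTMENT = {1: 130, 2: 238, 3: 341, 4: 434}  # assumed, by household size

    income_after_other_deductions = max(gross_income - STANDARD_DEDUCTION, 0)
    # Only shelter costs above half of remaining income are deductible.
    excess_shelter = max(shelter_costs - 0.5 * income_after_other_deductions, 0)
    excess_shelter = min(excess_shelter, SHELTER_CAP)
    net_income = max(income_after_other_deductions - excess_shelter, 0)
    # Benefit falls as net income rises (30 percent rate assumed here).
    return max(MAX_ALLOTMENT[household_size] - 0.3 * net_income, 0)

# A three-person household with $1,000 gross income and $600 in shelter costs.
print(f"${monthly_benefit(1000, 600, 3):.2f}")  # -> $131.30
```

Even this toy version shows why errors cluster around the shelter deduction: the result is sensitive to verified shelter costs and to income, both of which the caseworker must estimate and document.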
Some states took other actions to reduce their payment errors. For example, even though FNS regulations only require that food stamp households be recertified annually, 16 states increased the frequency with which certain types of food stamp households must provide pertinent documentation for recertifying their eligibility for food stamp benefits. In particular, 12 of the 16 states now require households with earned income to be recertified quarterly because their incomes tend to fluctuate, increasing the likelihood of payment errors. More frequent certification enables caseworkers to verify the accuracy of household income and other information, allowing caseworkers to make appropriate adjustments to the household’s benefits and possibly avoid a payment error. However, more frequent certification can also inhibit program participation because it creates additional reporting burdens for food stamp recipients. In addition to more frequent certification, five states reported that they access credit reports and public records to determine eligibility and benefits. Seven states have formed change reporting units in food stamp offices serving certain metropolitan areas, so that participants notify these centralized units, instead of caseworkers, about starting a new job or other reportable changes. Food stamp officials in 20 of the 28 states told us that they have primarily relied on case file reviews by supervisors and others to verify payment accuracy and reduce payment errors. For example, Georgia officials noted one county’s percentage of payment errors dropped by more than half as a result of the state’s requirement that management staff in 10 urban counties re-examine files after a supervisor’s review. In each of the past 3 years, Ohio food stamp administrators have reviewed up to 100 cases per county per year and have awarded additional state funding to counties with low error rates. In fiscal year 1999, the counties used $2.5 million in state funds primarily for payment accuracy initiatives. There was less consensus about the relative usefulness of other initiatives in reducing payment errors. Specifically, food stamp officials in 13 states told us that special training for caseworkers was one of their primary initiatives; officials in 8 states cited recertifying households more frequently; officials in 6 states identified the use of computer software to determine eligibility and/or benefits; officials in 5 states identified computer database matches; and officials in 4 states cited analyses of quality control data. Food stamp officials in 22 of the states we contacted cited their states’ implementation of welfare reform as a challenge to reducing error rates in recent years. In particular, implementing welfare reform programs and policy took precedence over administering the Food Stamp Program in many states—these programs competed for management attention and resources. In Connecticut, for example, caseworkers were directed to help participants find employment; therefore, the accuracy of food stamp payments was deemphasized, according to state officials. Similarly, Hawaii officials said agency leadership emphasized helping recipients to find employment and instituted various programs to accomplish this, which resulted in less attention to payment accuracy. Furthermore, officials from 14 states said welfare reform led to an increase in the number of working poor. This increased the possibility of errors because the income of these households is more likely to fluctuate than income of other food stamp households.
State food stamp officials cited three other impediments to their efforts to reduce payment errors, although far less frequently. First, officials in 12 states cited a lack of resources, such as a shortage of caseworkers to manage food stamp caseloads, as a challenge to reducing error rates. Georgia, Mississippi, and Texas officials said caseworker turnover was high, and New Hampshire officials said they currently have a freeze on hiring new caseworkers. Second, officials in 10 states cited problems associated with either using, or making the transition from, outdated automated systems as challenges to reducing payment errors. For example, New Hampshire officials found that their error rate increased from 10.2 percent in fiscal year 1998 to 12.9 percent in fiscal year 1999 after they began to use a new computer system. In addition, Connecticut and Maryland officials noted that incorporating rules changes into automated systems is difficult and often results in error-prone manual workarounds until the changes are incorporated. Finally, officials in nine states told us that food stamp eligibility revisions in recent years, particularly for legal noncitizens, have increased the likelihood of errors. To encourage the states to reduce error rates, FNS has employed financial sanctions and incentives, approved waivers of reporting requirements for certain households, and promoted initiatives to improve payment accuracy through the exchange of information among the states. However, state food stamp officials told us the single most useful change for reducing error rates would be for FNS to propose legislation to simplify requirements for determining Food Stamp Program eligibility and benefits. Simplifying food stamp rules would not necessarily alter the total amount of food stamp benefits given to participants, but it may reduce the program’s administrative costs (the states spent $4.1 billion to provide $15.8 billion in food stamp benefits in fiscal year 1999). FNS officials and others expressed concern, however, that some simplification options may reduce FNS’ ability to precisely target benefits to each individual household’s needs. The three principal methods FNS has used to reduce payment errors in the states are discussed in the following subsections. As required by law, FNS imposes financial sanctions on states whose error rates exceed the national average. These states are required to either pay the sanction or provide additional state funds—beyond their normal share of administrative costs—to be reinvested in error-reduction efforts, such as additional training in calculating benefits for certain households. FNS imposed $30.6 million in sanctions on 16 states with payment error rates above the national average in fiscal year 1999 and $78.2 million in sanctions on 22 states in fiscal year 1998—all of which were reinvested in error- reduction efforts. (See app. IV.) Food stamp officials in 22 states reported that their agencies had increased their commitment to reducing payment errors in recent years; officials in 14 states stated that financial sanctions, or the threat of sanctions, was the primary reason for their increased commitment. For example, when the Texas Department of Human Services requested money to cover sanctions prior to 1995, the Texas legislature required the department to report quarterly on its progress in reducing its payment error rate. 
Officials in Texas, which has received enhanced funding for the past 2 fiscal years, cited the department’s commitment and accountability to the Texas legislature as primary reasons for reducing the error rate over the years and for maintaining their focus on payment accuracy. FNS also rewards states primarily on the basis of their combined payment error rate being less than or equal to 5.9 percent—well below the national average. FNS awarded $39.2 million in enhanced funding to six states in fiscal year 1999 and $27.4 million to six states in fiscal year 1998. In the past 5 years, 16 states have received enhanced funding at least once. Officials in one state told us that the enhanced funding remained in the state’s general fund, while officials in four states said the enhanced funding supplemented the state’s appropriation for use by the Food Stamp Program and other assistance programs. For example, in Arkansas, the food stamp agency used its enhanced funding for training, systems development, and equipment. Arkansas officials told us that enhanced funding was a major motivator for their agency, and they have seen an increase in efforts to reduce payment errors as a direct result. In July 1999, FNS announced that it would expand the availability of waivers of certain reporting requirements placed on food stamp households. FNS was concerned that the increase in employment among food stamp households would result in larger and more frequent income fluctuations, which would increase the risk of payment errors. FNS also was concerned that the states’ reporting requirements were particularly burdensome for the working poor and may, in effect, act as an obstacle to their participation in the program. This is because eligible households may not view food stamp benefits as worth the time and effort it takes to obtain them. As of November 2000, FNS had granted reporting waivers to 43 states, primarily for households with earned income. (See app. V.) The three principal types of waivers are explained below: The threshold reporting waiver raises the earned income changes that households must report to more than $100 per month. (Households still must report if a member gains or loses a job.) Without this waiver, households would be required to report any wage or salary change of $25 or more per month. Ohio uses this type of waiver (with a smaller $80-per-month threshold) specifically for self-employed households. Ohio credits the use of this and other types of reporting waivers to the decrease in its error rate from 11.2 percent in 1997 to 8.4 percent in 1999. The status reporting waiver limits the changes that households must report to three key events: (1) gaining or losing a job, (2) moving from part-time to full-time employment or vice versa, and (3) experiencing a change in wage rate or salary. This waiver eliminates the need for households to report fluctuations in the number of hours worked, except if a member moves from part-time to full-time employment. Texas officials cited the implementation of the status reporting waiver in 1994 as a primary reason that their error rate dropped by nearly 3 percentage points (from over 12 percent) in 1995. Texas’ error rate reached a low of about 4.6 percent in 1999. The quarterly reporting waiver eliminates the need for households with earned income to report any changes during a 3-month period, provided the household provides required documentation at the end of the period. 
The waiver reduces payment errors because any changes that occurred during a quarter were not considered to be errors and households more readily understood requirements for reporting changes. Food stamp officials in Arkansas, which implemented a quarterly reporting waiver in 1995, believe that their quarterly reporting waiver is a primary reason for their subsequent stable error rate. FNS expects that reporting waivers will reduce the number of payment errors made because households are more likely to report changes and, with fewer reports to process, the states will be able to process changes accurately and within required time frames. However, the lower payment error rates that result from these waivers are primarily caused by a redefinition of a payment error, without reducing the Food Stamp Program’s benefit costs. For example, a pay increase of $110 per month that is not reported until the end of the 3-month period is not a payment error under Arkansas’ quarterly reporting waiver, while it would be an error if there were no waiver. As a result, the quarterly reporting waiver may reduce FNS’ estimate of overpayments and underpayments. FNS estimated, in July 1999, that the quarterly waiver would increase food stamp benefit costs by $80 million per year, assuming that 90 percent of the states applied for the waiver. Of the 10 states that do not have a reporting waiver, 7 require monthly reporting for households with earned income. The advantage of monthly reporting is that benefits are issued on the basis of what has already occurred and been documented. In addition, regular contact with food stamp households allows caseworkers to quickly detect changes in the household’s situation. However, monthly reporting is more costly to administer and potentially can exacerbate a state’s error rate, particularly if it cannot keep up with the volume of work. A Hawaii food stamp official told us that monthly reporting contributed to recent increases in Hawaii’s error rate because caseworkers have not processed earned income changes on time, while Connecticut officials said their food stamp workers were making mistakes by rushing to meet deadlines. As part of the food stamp quality control program, FNS’ seven regional offices have assembled teams of federal and state food stamp officials to identify the causes of payment errors and ways to improve payment accuracy. Each region also has held periodic conferences in which states from other regions were invited to highlight their successes and to respond to questions about implementing their initiatives. Examples of topics at recent conferences in FNS’ northeastern region included best payment accuracy practices and targeting agency-caused errors. FNS’ regional offices also have made funds available for states to send representatives to other states to learn first-hand about initiatives to reduce payment errors. Since 1996, FNS has compiled catalogs of states’ payment accuracy practices that provide information designed to help other states develop and implement similar initiatives. Food stamp officials in all 28 states we contacted called for simplifying complex Food Stamp Program rules, and most of these states would like to see FNS involved in advocating simplification. In supporting simplification, the state officials generally cited caseworkers’ difficulty in correctly applying food stamp rules to determine eligibility and calculate benefits. 
For example, Maryland’s online manual for determining a household’s food stamp benefits is more than 300 pages long. Specifically, the state officials cited the need to simplify requirements for (1) determining a household’s deduction for excess shelter costs and (2) calculating a household’s earned and unearned income. Food stamp officials in 20 of the 28 states we contacted said simplifying the rules for determining a household’s allowable shelter deduction would be one of the best ways to reduce payment errors. The Food Stamp Program generally provides for a shelter deduction when a household’s monthly shelter costs exceed 50 percent of income after other deductions have been allowed. Allowable deductions include rent or mortgage payments, property taxes, homeowner’s insurance, and utility expenses. Several state officials told us that determining a household’s shelter deduction is prone to errors because, for example, caseworkers often need to (1) determine whether to pro-rate the shelter deduction if members of a food stamp household share expenses with others, (2) determine whether to use a standard utility allowance rather than actual expenses, and (3) verify shelter expenses, even though landlords may refuse to provide required documentation. Food stamp officials in 18 states told us that simplifying the rules for earned income would be one of the best options for reducing payment errors because earned income is both the most common and the costliest source of payment errors. Generally, determining earned income is prone to errors because caseworkers must use current earnings as a predictor of future earnings and the working poor do not have consistent employment and earnings. Similarly, officials in six states told us that simplifying the rules for unearned income would help reduce payment errors. In particular, state officials cited the difficulty caseworkers have in estimating child support payments that will be received during the certification period because payments are often intermittent and unpredictable. Because households are responsible for reporting changes in unearned income of $25 or more, differences between estimated and actual child support payments often result in a payment error. FNS officials and advocates for food stamp participants, however, have expressed concern about some possible options for simplifying the rules for determining eligibility and calculating benefits. For example, in determining a household’s allowable shelter deduction, if a single standard deduction were used for an entire state, households in rural areas would likely receive greater benefits than they would have using actual expenses, while households in urban areas would likely receive smaller benefits. In this case, simplification may reduce FNS’ ability to precisely target benefits to each individual household’s needs. FNS officials also pointed out that likely reductions in states’ payment error rates would reflect changes to the rules for calculating food stamp benefits rather than improved performance by the states. FNS has begun to examine alternatives for improving the Food Stamp Program, including options for simplifying requirements for determining benefits, as part of its preparations for the program’s upcoming reauthorization. 
More specifically, FNS hosted a series of public forums, known as the National Food Stamp Conversation 2000, in seven cities attended by program participants, caseworkers, elected officials, antihunger advocates, emergency food providers, health and nutrition specialists, food retailers, law enforcement officials, and researchers. Simplification of the Food Stamp Program was one of the issues discussed at these sessions as part of a broad-based dialogue among stakeholders about aspects of the program that have contributed to its success and features that should be strengthened to better achieve program goals. FNS is currently developing a variety of background materials that will integrate the issues and options raised in these forums. FNS has not yet begun to develop proposed legislation for congressional consideration in reauthorizing the Food Stamp Program. FNS and the states have taken actions aimed at reducing food stamp payment errors, which currently stand at about 10 percent of the program’s total benefits. Financial sanctions and enhanced funding have been at least partially successful in focusing states’ attention on minimizing errors. However, this “carrot and stick” approach can only accomplish so much, because food stamp regulations for determining eligibility and benefits are extremely complex and their application is inherently error-prone and costly to administer. Furthermore, this approach, carried to extremes, can create incentives for states to take actions that may inhibit achievement of one of the agency’s basic missions—providing food assistance to those who are in need. For example, increasing the frequency with which recipients must report income changes could decrease errors, but it could also have the unintended effect of discouraging participation by the eligible working poor. This would run counter not only to FNS’ basic mission but also to an overall objective of welfare reform—helping people move successfully from public assistance into the workforce. Simplifying the Food Stamp Program’s rules and regulations offers an opportunity to, among other things, reduce payment error rates and promote program participation by eligible recipients. FNS has taken initial steps in examining options for simplification through its forums with stakeholders. However, it is unclear to what extent FNS will build on these ideas to (1) systematically develop and analyze the advantages and disadvantages of various simplification options and (2) if warranted, submit the legislative changes needed to implement simplification proposals. To help ease program administration and potentially reduce payment errors, we recommend that the Secretary of Agriculture direct the Administrator of the Food and Nutrition Service to (1) develop and analyze options for simplifying requirements for determining program eligibility and benefits; (2) discuss the strengths and weaknesses of these options with representatives of the congressional authorizing committees; and (3) if warranted, submit legislative proposals to simplify the program. The analysis of these options should include, among other things, estimating expected program costs, effects on program participation, and the extent to which the distribution of benefits among recipients could change. We provided the U.S. Department of Agriculture with a draft of this report for review and comment. We met with Agriculture officials, including the Director of the Program Development Division within the Food and Nutrition Service’s Food Stamp Program.
Department officials generally agreed with the information presented in the report and provided technical clarifications, which we incorporated as appropriate. Department officials also agreed with the thrust of our recommendations. However, they expressed reservations about the mechanics of implementing our recommendation that they discuss simplification options with representatives of the congressional authorizing committees. In particular, they noted the importance of integrating consultation on policy options with the process for developing the President’s annual budget request. In addition, they urged a broader emphasis on consideration of policy options that meet the full range of program objectives, including, for example, ending hunger, improving nutrition, and supporting work. We agree that simplification options should be discussed in the larger context of achieving program objectives. However, we believe that an early dialogue about the advantages and disadvantages of simplification options will facilitate the congressional debate on one of the most important and controversial issues for reauthorizing the Food Stamp Program. Copies of this report will be sent to the congressional committees and subcommittees responsible for the Food Stamp Program; the Honorable Jacob Lew, Director, Office of Management and Budget; and other interested parties. We will also make copies available upon request. Please contact me at (202) 512-5138 if you or your staff have any questions about this report. Key contributors to this report are listed in appendix VI. To examine states’ efforts to minimize food stamp payment errors, we analyzed information obtained through structured telephone interviews with state food stamp officials in 28 states. We selected the 28 states to include states with the lowest payment error rates, states with the highest error rates, and the 10 states with the most food stamp participants in fiscal year 1999. Overall, the states we interviewed included 14 states with payment error rates below the national average and 14 states with error rates above the national average. They delivered about 74 percent of all food stamp benefits in fiscal year 1999. We supplemented the structured interviews with information obtained from visits to Maryland, Massachusetts, Michigan, and Texas. To examine what the Department of Agriculture’s Food and Nutrition Service (FNS) has done and could do to help states reduce food stamp payment errors, we relied in part on information obtained from our telephone interviews, as well as on discussions with officials at FNS’ headquarters and each of its seven regional offices. We also analyzed FNS documents and data from its quality control system. exceeding 130 percent of the monthly poverty income guideline for its household size. To qualify for this option, a state must have a certification period of 6 months or more. The threshold reporting waiver raises the earned income changes that households must report to more than $100 per month. (Households still must report if a member gains or loses a job.) Without this waiver, households would be required to report any wage or salary change of $25 or more per month. The status reporting waiver limits the income changes that households must report to three key events: (1) gaining or losing a job, (2) moving from part-time to full-time employment or vice versa, and (3) a change in the wage rate or salary. 
The quarterly reporting waiver eliminates the need for households with earned income to report any changes during a 3-month period, provided the household provides required documentation at the end of the period. The 5-hour reporting waiver limits changes that households must report to three key events: (1) gaining or losing a job; (2) a change in wage rate or salary; and (3) a change in hours worked of more than 5 hours per week, if this change is expected to continue for more than a month. In addition to those named above, Christine Frye, Debra Prescott, and Michelle Zapata made key contributions to this report. | In fiscal year 2000, the Department of Agriculture's Food Stamp Program, administered jointly by the Food and Nutrition Service (FNS) and the states, provided $15 billion in benefits to an average of 17.2 million low-income persons each month. FNS, which pays the full cost of food stamp benefits and half of the states' administrative costs, promulgates program regulations and oversees program implementation. The states run the program, determining whether households meet eligibility requirements, calculating monthly benefits the households should receive, and issuing benefits to participants. FNS assesses the accuracy of states' efforts to determine eligibility and benefits levels. Because of concerns about the integrity of Food Stamp Program payments, GAO examined the states' efforts to minimize food stamp payment errors and what FNS has done and could do to encourage and assist the states reduce such errors. GAO found that all 28 states it examined had taken steps to reduce payment errors. These steps included verifying the accuracy of benefit payments calculated through supervisory and other types of casefile reviews, providing specialized training for food stamp workers, analyzing quality control data to determine causes of errors and developing corrective actions, matching food stamp rolls with other federal and state computer databases to identify ineligible participants, and using computer software to assist caseworkers in determining benefits. To reduce payment errors, FNS has imposed financial sanctions on states with high error rates and has waived some reporting requirements.
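The benefit-calculation and change-reporting rules described in the text above reduce to simple arithmetic, and stating them as code makes clear both why caseworkers err and what the waivers change. A minimal sketch in Python: the function names are ours, and the caps and special-population rules of the actual regulations are omitted.

```python
def excess_shelter_deduction(shelter_costs, income_after_other_deductions):
    # The deduction covers only the portion of monthly shelter costs
    # (rent or mortgage, property taxes, homeowner's insurance, utilities)
    # above 50 percent of income after other deductions have been allowed.
    return max(0.0, shelter_costs - 0.5 * income_after_other_deductions)

def must_report_earnings_change(change_per_month, waiver=None):
    # Base rule: report any wage or salary change of $25 or more per month.
    # Under the threshold reporting waiver, only changes of more than $100
    # per month must be reported. (Job gains and losses must be reported
    # in either case and are not modeled here.)
    if waiver == "threshold":
        return abs(change_per_month) > 100
    return abs(change_per_month) >= 25

print(excess_shelter_deduction(600, 800))            # 200.0
print(must_report_earnings_change(60))               # True under the base rule
print(must_report_earnings_change(60, "threshold"))  # False under the waiver
```

The same $60 monthly change that must be reported under the base rule goes unreported under the threshold waiver, which is exactly the trade-off the report describes: fewer reportable events mean fewer payment errors, but also less precise targeting of benefits.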
You are an expert at summarizing long articles. Proceed to summarize the following text:
On October 25, 1995, Americans were reminded of the dangers that drivers and passengers often face when they travel over railroad crossings in the United States. On that day, in Fox River Grove, Illinois, seven high school students were killed when a commuter train hit a school bus. The potential for tragedies like the one at Fox River Grove is significant—the United States has over 168,000 public highway-railroad intersections. The types of warnings for motorists at these crossings range from no visible devices to active devices, such as lights and gates. About 60 percent of all public crossings in the United States have only passive warning devices—typically, highway signs known as crossbucks. In 1994, this exposure resulted in motor vehicle accidents at crossings that killed 501 people and injured 1,764 others. Many of these deaths could have been avoided, since nearly one-half occurred at crossings where flashing lights and descended gates had warned motorists of the approaching danger. In August 1995, we issued a comprehensive report on safety at railroad crossings. We reported that the federal investment in improving railroad crossing safety had noticeably reduced the number of deaths and injuries. Since the Rail-Highway Crossing Program—also known as the section 130 program—was established in 1974, the federal government has distributed about $5.5 billion (in 1996 constant dollars) to the states for railroad crossing improvements. This two-decade investment, combined with a reduction in the total number of crossings since 1974, has significantly lowered the accident and fatality rates—by 61 percent and 34 percent, respectively. However, most of this progress occurred during the first decade, and since 1985, the number of deaths has fluctuated between 466 and 682 each year (see app. 1). Since 1977, the federal funding for railroad crossing improvements has also declined in real terms. Consequently, the question for future railroad crossing safety initiatives will be how best to target available resources to the most cost-effective approaches. Our report discussed several strategies for targeting limited resources to address railroad crossing safety problems. The first strategy is to review DOT’s current method of apportioning section 130 funds to the states. Our analysis of the 1995 section 130 apportionments found anomalies among the states in terms of how much funding they received in proportion to three key risk factors: accidents, fatalities, and total crossings. For example, California received 6.9 percent of the section 130 funds in 1995, but it had only 4.8 percent of the nation’s railroad crossings, 5.3 percent of the fatalities, and 3.9 percent of the accidents. Senators Lugar and Coats have proposed legislation to change the formula for allocating section 130 funds by linking the amounts of funding directly to the numbers of railroad crossings, fatalities, and accidents. Currently, section 130 funds are apportioned to each state as a 10-percent set-aside of its Surface Transportation Program funds. The second means of targeting railroad crossing safety resources is to focus the available dollars on the strategies that have proved most effective in preventing accidents. These strategies include closing more crossings, using innovative technologies at dangerous crossings, and emphasizing education and enforcement. Clearly, the most effective way to improve railroad crossing safety is to close more crossings. 
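To see the apportionment anomaly concretely, consider a formula of the kind the Lugar-Coats proposal contemplates, in which a state's share of section 130 funds tracks its shares of crossings, fatalities, and accidents. This is a sketch only: the equal weights are our assumption for illustration, not the proposal's actual weighting.

```python
def risk_based_share(crossings, fatalities, accidents, weights=(1/3, 1/3, 1/3)):
    # Blend a state's national shares of the three risk factors into a
    # single funding share.
    return sum(w * s for w, s in zip(weights, (crossings, fatalities, accidents)))

# California, 1995: 4.8 percent of crossings, 5.3 percent of fatalities, and
# 3.9 percent of accidents, but 6.9 percent of section 130 funds.
print(f"{risk_based_share(0.048, 0.053, 0.039):.1%} risk-based vs. 6.9% received")
# -> 4.7% risk-based vs. 6.9% received
```

Under any similar blending of the three risk factors, California's 1995 apportionment exceeds its risk-based share by roughly 2 percentage points.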
The Secretary of Transportation has restated FRA’s goal of closing 25 percent of the nation’s railroad crossings, since many are unnecessary or redundant. For example, in 1994, the American Association of State Highway and Transportation Officials found that the nation had two railroad crossings for every mile of track and that in heavily congested areas, the average approached 10 crossings for every mile. However, local opposition and localities’ unwillingness to provide a required 10-percent match in funds have made it difficult for the states to close as many crossings as they would like. When closing is not possible, the next alternative is to install traditional lights and gates. However, lights and gates provide only a warning, not positive protection at a crossing. Hence, new technologies such as four-quadrant gates with vehicle detectors, although costing about $1 million per crossing, may be justified when accidents persist at signalled crossings. The Congress has funded research to develop innovative technologies for improving railroad crossing safety. Although installing lights and gates can help to prevent accidents and fatalities, it will not preclude motorists from disregarding warning signals and driving around descended gates. Many states, particularly those with many railroad crossings, face a dilemma. While 35 percent of the railroad crossings in the United States have active warning devices, 50 percent of all crossing fatalities occurred at these locations. To modify drivers’ behavior, DOT and the states are developing education and enforcement strategies. For example, Ohio—a state with an active education and enforcement program—cut the number of accidents at crossings with active warning devices from 377 in 1978 to 93 in 1993—a 75-percent reduction. Ohio has used mock train crashes as educational tools and has aggressively issued tickets to motorists going around descended crossing gates. In addition, DOT has inaugurated a safety campaign entitled “Always Expect a Train,” while Operation Lifesaver, Inc., provides support and referral services for state safety programs. DOT’s educational initiatives are part of a larger plan to improve railroad crossing safety. In June 1994, DOT issued a Grade Crossing Action Plan, and in October 1995, it established a Grade Crossing Safety Task Force. The action plan set a national goal of reducing the number of accidents and fatalities by 50 percent from 1994 to 2004. As we noted in our report, whether DOT attains the plan’s goal will depend, in large part, on how well it coordinates the efforts of the states and railroads, whose contributions to implementing many of the proposals are critical. DOT does not have the authority to direct the states to implement many of the plan’s proposals, regardless of how important they are to achieving DOT’s goal. Therefore, DOT must rely on either persuading the states that implementation is in their best interests or providing them with incentives for implementation. In addition, the success of five of the plan’s proposals depends on whether DOT can obtain the required congressional approval to use existing funds in ways that are not allowable under current law. 
The five proposals would (1) change the method used to apportion section 130 funds to the states, (2) use Surface Transportation Program funds to pay local governments a bonus to close crossings, (3) eliminate the requirement for localities to match a portion of the costs associated with closing crossings, (4) establish a $15 million program to encourage the states to improve rail corridors, and (5) use Surface Transportation Program funds to increase federal funding for Operation Lifesaver. Finally, the action plan’s proposals will cost more money. Secretary Pena has announced a long-term goal of eliminating 2,250 crossings where the National Highway System intersects Principal Rail Lines. Both systems are vital to the nation’s interstate commerce, and closing these crossings is generally not feasible. The alternative is to construct a grade separation—an overpass or underpass. This initiative alone could cost between $4.5 billion and $11.3 billion—a major infrastructure investment. DOT established the Grade Crossing Safety Task Force in the aftermath of the Fox River Grove accident, intending to conduct a comprehensive national review of highway-railroad crossing design and construction measures. On March 1, 1996, the task force reported to the Secretary that “improved highway-rail grade crossing safety depends upon better cooperation, communication, and education among responsible parties if accidents and fatalities are to be reduced significantly.” The report provided 24 proposals for five problem areas it reviewed: (1) highway traffic signals that are supposed to be triggered by oncoming trains; (2) roadways where insufficient space is allotted for vehicles to stop between a road intersection and nearby railroad tracks; (3) junctions where railroad tracks are elevated above the surface of the roadway, exposing vehicles to the risk of getting hung on the tracks; (4) light rail transit crossings without standards for their design, warning devices, or traffic control measures; and (5) intersections where slowly moving vehicles, such as farm equipment, frequently cross the tracks. Under the Federal Railroad Safety Act of 1970, as amended, FRA is responsible for regulating all aspects of railroad safety. FRA’s safety mission includes (1) establishing federal rail safety rules and standards; (2) inspecting railroads’ track, signals, equipment, and operating practices; and (3) enforcing federal safety rules and standards. The railroads are primarily responsible for inspecting their own equipment and facilities to ensure compliance with federal safety regulations, while FRA monitors the railroads’ actions. We have issued many reports identifying weaknesses in FRA’s railroad safety inspection and enforcement programs. For example, in July 1990, we reported on FRA’s progress in meeting the requirements, set forth in the Federal Railroad Safety Authorization Act of 1980, that FRA submit to the Congress a system safety plan to carry out railroad safety laws. The act directed FRA to (1) develop an inspection methodology that considered carriers’ safety records, the location of population centers, and the volume and type of traffic using the track and (2) give priority to inspections of track and equipment used to transport passengers and hazardous materials. 
The House report accompanying the 1980 act stated that FRA should target safety inspections to high-risk track—track with a high incidence of accidents and injuries, located in populous urban areas, carrying passengers, or transporting hazardous materials. In our 1990 report, we found that the inspection plan that FRA had developed did not include data on passenger and hazardous materials routes—two important risk factors. In an earlier report, issued in April 1989, we noted problems with another risk factor—accidents and injuries. We found that the railroads had substantially underreported and inaccurately reported the number of accidents and injuries and their associated costs. As a result, FRA could not integrate inspection, accident, and injury data in its inspection plan to target high-risk locations. In our 1994 report on FRA’s track safety inspection program, we found that FRA had improved its track inspection program and that its strategy for correcting the weaknesses we had previously identified was sound. However, we pointed out that FRA still faced challenges stemming from these weaknesses. First, it had not obtained and incorporated into its inspection plan site-specific data on two critical risk factors—the volume of passenger and hazardous materials traffic. Second, it had not improved the reliability of another critical risk factor—the rail carriers’ reporting of accidents and injuries nationwide. FRA published a notice of proposed rulemaking in August 1994 on methods to improve rail carriers’ reporting. In February 1996, FRA reported that it intended to issue a final rule in June 1996. To overcome these problems, we recommended that FRA focus on improving and gathering reliable data to establish rail safety goals. We specifically recommended that FRA establish a pilot program in one FRA region to gather data on the volume of passenger and hazardous materials traffic and correct the deficiencies in its accident/injury database. We recommended a pilot program in one FRA region, rather than a nationwide program, because FRA had expressed concern that a nationwide program would be too expensive. The House and Senate Appropriations Conference Committee echoed our concerns in its fiscal year 1995 report and directed the agency to report to the Committees by March 1995 on how it intended to implement our recommendations. In its August 1995 response to the Committees, FRA indicated that the pilot program was not necessary, but it was taking actions to correct the deficiencies in the railroad accident/injury database. For example, FRA had allowed the railroads to update the database using magnetic media and audited the reporting procedures of all the large railroads. We also identified in our 1994 report an emerging traffic safety problem—the industry’s excessive labeling of track as exempt from federal safety standards. Since 1982, federal track safety standards have not applied to about 12,000 miles of track designated by the industry as “excepted;” travel on such track is limited to 10 miles per hour, no passenger service is allowed, and no train may carry more than five cars containing hazardous materials. We found in our 1994 report that the number of accidents on excepted track had increased from 22 in 1988 to 65 in 1992—a 195-percent increase. Similarly, the number of track defects cited in FRA inspections increased from 3,229 in 1988 to 6,057 in 1992. However, with few exceptions, FRA cannot compel railroads to correct these defects. 
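The excepted-track limits just described are concrete enough to state as a rule check. A minimal sketch, with function and message wording of our own devising rather than FRA's:

```python
def excepted_track_violations(speed_mph, carries_passengers, hazmat_cars):
    # Excepted track: 10 mph maximum, no passenger service, and no more
    # than five cars containing hazardous materials per train.
    violations = []
    if speed_mph > 10:
        violations.append(f"speed of {speed_mph} mph exceeds the 10 mph limit")
    if carries_passengers:
        violations.append("passenger service is not allowed")
    if hazmat_cars > 5:
        violations.append(f"{hazmat_cars} hazmat cars exceeds the 5-car limit")
    return violations

print(excepted_track_violations(12, False, 6))
# ['speed of 12 mph exceeds the 10 mph limit',
#  '6 hazmat cars exceeds the 5-car limit']
```

The catch the report identifies is enforcement, not definition: because federal track safety standards do not apply to excepted track, FRA inspectors can cite defects but, with few exceptions, cannot compel their correction.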
According to FRA, the railroads have applied the excepted track provision far more extensively than envisioned. For example, railroads have transported hazardous materials through residential areas on excepted track or intentionally designated track as excepted to avoid having to comply with minimum safety regulations. In November 1992, FRA announced a review of the excepted track provision with the intent of making changes. FRA viewed the regulations as inadequate because its inspectors could not write violations for excepted track and railroads were not required to correct defects on excepted track. FRA stated that changes to the excepted track provision would occur as part of its rulemaking revising all track safety standards. In February 1996, FRA reported that the task of revising track safety regulations would be taken up by FRA’s Railroad Safety Advisory Committee. FRA noted that this committee would begin its work in April 1996 but did not specify a date for completing the final rulemaking. The Congress had originally directed FRA to complete its rulemaking revising track safety standards by September 1994. In September 1993, we issued a report examining whether Amtrak had effective procedures for inspecting, repairing, and maintaining its passenger cars to ensure their safe operation and whether FRA had provided adequate oversight to ensure the safety of passenger cars. We found that Amtrak had not consistently implemented its inspection and preventive maintenance programs and did not have clear criteria for determining when a passenger car should be removed from service for safety reasons. In addition, we found that Amtrak had disregarded some standards when parts were not available or there was insufficient time for repairs. For example, we observed that cars were routinely released for service without emergency equipment, such as fire extinguishers. As we recommended, Amtrak established a safety standard that identified a minimum threshold below which a passenger car may not be operated, and it implemented procedures to ensure that a car will not be operated unless it meets this safety standard. In reviewing FRA’s oversight of passenger car safety (for both Amtrak and commuter rail), we found that FRA had established few applicable regulations. As a result, its inspectors provided little oversight in this important safety area. For more than 20 years, the National Transportation Safety Board has recommended on numerous occasions that FRA expand its regulations for passenger cars, but FRA has not done so. As far back as 1984, FRA told the Congress that it planned to study the need for standards governing the condition of safety-critical passenger car components. Between 1990 and 1994, train accidents on passenger rail lines ranged between 127 and 179 accidents each year (see app. 2). In our 1993 report, we maintained that FRA’s approach to overseeing passenger car safety was not adequate to ensure the safety of the over 330 million passengers who ride commuter railroads annually. We recommended that the Secretary of Transportation direct the FRA Administrator to study the need for establishing minimum criteria for the condition of safety-critical components on passenger cars. We noted that the Secretary should direct the FRA Administrator to establish any regulations for passenger car components that the study shows to be advisable, taking into account any internal safety standards developed by Amtrak or others that pertain to passenger car components. 
However, FRA officials told us at the time that the agency could not initiate the study because of limited resources. Subsequently, the Swift Rail Development Act of 1994 required FRA to issue initial passenger safety standards within 3 years of the act’s enactment and complete standards within 5 years. In 1995, FRA referred the issue to its Passenger Equipment Safety Working Group consisting of representatives from passenger railroads, operating employee organizations, mechanical employee organizations, and rail passengers. The working group held its first meeting in June 1995. An advance notice of proposed rulemaking is expected in early 1996, and final regulations are to be issued in November 1999. Given the recent rail accidents, FRA could consider developing standards for such safety-critical components as emergency windows and doors and safety belts as well as the overall crashworthiness of passenger cars. In conclusion, safety at highway-railroad crossings, the adequacy of track safety inspections and enforcement, and the safety of passenger cars operated by commuter railroads and Amtrak will remain important issues for Congress, FRA, the states, and the industry to address as the nation continues its efforts to prevent rail-related accidents and fatalities. Note 1: Analysis includes data from Amtrak, Long Island Rail Road, Metra (Chicago), Metro-North (New York), Metrolink (Los Angeles), New Jersey Transit, Northern Indiana, Port Authority Trans-Hudson (New York), Southeastern Pennsylvania Transportation Authority, and Tri-Rail (Florida). Note 2: Data for Amtrak include statistics from several commuter railroads, including Caltrain (California), Conn DOT, Maryland Area Rail Commuter (excluding those operated by CSX), Massachusetts Bay Transportation Authority, and Virginia Railway Express. Railroad Safety: FRA Needs to Correct Deficiencies in Reporting Injuries and Accidents (GAO/RCED-89-109, Apr. 5, 1989). Railroad Safety: DOT Should Better Manage Its Hazardous Materials Inspection Program (GAO/RCED-90-43, Nov. 17, 1989). Railroad Safety: More FRA Oversight Needed to Ensure Rail Safety in Region 2 (GAO/RCED-90-140, Apr. 27, 1990). Railroad Safety: New Approach Needed for Effective FRA Safety Inspection Program (GAO/RCED-90-194, July 31, 1990). Financial Management: Internal Control Weaknesses in FRA’s Civil Penalty Program (GAO/RCED-91-47, Dec. 26, 1990). Railroad Safety: Weaknesses Exist in FRA’s Enforcement Program (GAO/RCED-91-72, Mar. 22, 1991). Railroad Safety: Weaknesses in FRA’s Safety Program (GAO/T-RCED-91-32, Apr. 11, 1991). Hazardous Materials: Chemical Spill in the Sacramento River (GAO/T-RCED-91-87, July 31, 1991). Railroad Competitiveness: Federal Laws and Policies Affect Railroad Competitiveness (GAO/RCED-92-16, Nov. 5, 1991). Railroad Safety: Accident Trends and FRA Safety Programs (GAO/T-RCED-92-23, Jan. 13, 1992). Railroad Safety: Engineer Work Shift Length and Schedule Variability (GAO/RCED-92-133, Apr. 20, 1992). Amtrak Training: Improvements Needed for Employees Who Inspect and Maintain Rail Equipment (GAO/RCED-93-68, Dec. 8, 1992). Amtrak Safety: Amtrak Should Implement Minimum Safety Standards for Passenger Cars (GAO/RCED-93-196, Sep. 22, 1993). Railroad Safety: Continued Emphasis Needed for an Effective Track Safety Inspection Program (GAO/RCED-94-56, Apr. 22, 1994). Amtrak’s Northeast Corridor: Information on the Status and Cost of Needed Improvements (GAO/RCED-95-151BR, Apr. 13, 1995). 
Railroad Safety: Status of Efforts to Improve Railroad Crossing Safety (GAO/RCED-95-191, Aug. 3, 1995). | GAO provided information on the safety of highway railroad crossings, commuter passenger rails and adequacy of track safety inspections. GAO found that: (1) the leading cause of death associated with the railroad industry involved railroad crossing accidents; (2) about half of rail-related deaths occur because of collisions between trains and vehicles at public railroad crossings; (3) in 1994, 501 people were killed and 1,764 injured in railroad crossing accidents; (4) to improve the safety of railroad crossings, the Department of Transportation (DOT) must better target funds to high-risk areas, close more railroad crossings, install new technologies, and develop educational programs to increase the public's awareness of railroad crossings; (5) DOT plans are costly and will require congressional approval; (6) the Federal Railroad Administration (FRA) is unable to adequately inspect and enforce track safety standards or direct transportation officials to the routes with the highest accident potential because its database contains inaccurate information; and (7) Congress has directed FRA to establish sufficient passenger car safety standards by 1999.
You are an expert at summarizing long articles. Proceed to summarize the following text:
From 1996 through 2000, NASA was one of the few agencies to be judged by its independent auditor at that time, Arthur Andersen, as meeting all of the federal financial reporting requirements. That is, NASA was one of the few agencies to receive an unqualified, or “clean,” opinion on its financial statements, with no material internal control weaknesses noted, and no indications that its financial management systems were not in substantial compliance with the requirements of FFMIA. FFMIA reflects the need for agencies to have systems that produce reliable, timely, and accurate financial information needed for day-to-day decision making by requiring agencies to implement and maintain financial management systems that substantially comply with (1) federal financial management systems requirements, (2) the U.S. Government Standard General Ledger (SGL) at the transaction level, and (3) applicable federal accounting standards. Thus, the auditor’s report implied that NASA could not only generate reliable information once a year for external financial reporting purposes but also could provide the kind of information needed for day-to- day management decision making. However, as we and others have reported, the independent auditor’s reports did not provide an accurate picture of NASA’s financial management systems and, instead, failed to disclose pervasive financial management problems that existed at NASA. For example, we have identified NASA’s contract management function as an area of high risk since 1990 because of NASA’s inability to (1) oversee its contractors and their financial and program performance, and (2) implement a modern, integrated financial management system, which is integral to producing accurate and reliable financial information needed to support contract management. Also, in February 2002, NASA’s new independent auditor, PricewaterhouseCoopers, further confirmed NASA’s financial management difficulties and disclaimed an opinion on the agency’s fiscal year 2001 financial statements. The audit report also identified a number of material internal control weaknesses—primarily regarding PP&E and materials— and stated that, contrary to previous financial audit reports, NASA’s financial management systems did not substantially comply with FFMIA. While NASA received an unqualified opinion for its fiscal year 2002 financial statements, these results were achieved only through heroic efforts on the part of NASA and its auditor and again, the audit report identified a number of material internal control weaknesses and stated that NASA’s financial management systems did not substantially comply with FFMIA. To its credit, in April 2000, NASA began an effort known as IFMP. The schedule for implementing IFMP was originally planned for fiscal year 2008, but after NASA’s new Administrator came on board in fiscal year 2002, the timeline was accelerated to fiscal year 2006, with the core financial module to be completed in fiscal year 2003. NASA’s IFMP includes nine module projects supporting a range of financial, administrative, and functional areas. According to NASA officials, of the nine module projects, five are in operation, one is currently in implementation, and three are future modules. 
The five modules in operation are resume management, position description management, travel management, executive financial management information (called Erasmus), and core financial; the one project in implementation is budget formulation; and the three future module projects are human resources, asset management, and contract administration. The core financial module, which utilizes the SAP R/3 system, is considered the backbone of IFMP and has become NASA’s standard, integrated accounting system used agencywide. The other IFMP module projects will be integrated or interfaced with the core financial module, where applicable. The Joint Financial Management Improvement Program (JFMIP) defines a core financial system (or module) as the backbone of an agency’s integrated financial management system: It should provide common processing routines, support common data for critical financial management functions affecting the entire agency, and maintain the required financial data integrity control over financial transactions, resource balances, and other financial systems. A core financial system should support an agency’s general ledger, funds management, payment, receivable, and cost management functions. Also, the system should receive data from other financial-related systems, such as inventory and property systems, and from direct user input, and it should provide data for financial statement preparation and for financial performance measurement and analysis. The scope of NASA’s core financial module includes the general ledger, budget execution, purchasing, accounts receivable, accounts payable, and cost management. NASA completed implementation of the core financial module at all 10 NASA centers in June 2003. The pilot for the core financial module—conducted at Marshall Space Flight Center—was implemented in October 2002. NASA then deployed the core financial module at the other nine NASA centers in three “waves,” the last of which was completed in June 2003. In April 2003, we issued our first report on IFMP in response to your request. At that time, we reported that NASA was not following key best practices for acquiring and implementing the system, which may affect the agency’s ability to fully benefit from the new system’s capabilities. Specifically, we reported that NASA (1) did not analyze the relationships among selected and proposed IFMP components, (2) had deferred addressing the needs of key system stakeholders, including program managers and cost estimators, and (3) did not properly manage and test its system requirements prior to implementation of the core financial module. As a result, we reported that (1) NASA has increased its risks of implementing a system that will not optimize mission performance, and will cost more and take longer to implement than necessary; (2) the core financial module is not being designed to integrate the cost and schedule data that program managers need to oversee the work of NASA’s contractors; and (3) costly rework will likely be required to fix requirement defects not identified prior to implementation. Although NASA has met the core financial management module’s implementation schedule, the system as implemented in June 2003 has limited external financial reporting capabilities. 
When NASA announced in June 2003 that the core financial management module was complete, NASA officials acknowledged that additional work remained, including the need to develop and configure a cost-allocation structure within the system so that it would accumulate the full cost of NASA’s programs and projects for external financial reporting purposes. However, we also found that, to meet its implementation schedule, NASA (1) deferred requirements that require significant business process reengineering or extensive software configuration and (2) continues to rely on manual procedures for many transactions that should be automated in the new system. Consequently, only about one-third of the transaction types that NASA uses in its business processes are currently implemented and fully automated in the core financial module. As part of its implementation strategy, NASA delayed conversion to full-cost accounting until the core financial module was implemented at all centers. After completing implementation of the module in June 2003, NASA began designing the agency’s new cost-allocation structure and expected that full-cost accounting capabilities needed to provide the full cost of its programs and projects for external financial reporting purposes would be available through the core financial module by October 1, 2003. Properly designing, configuring, and testing the cost-allocation structure is key to capturing the full costs of all direct and indirect resources and allocating them to NASA’s programs and activities. However, on May 30, 2003, NASA’s Inspector General reported that NASA had not yet determined how to allocate space shuttle program costs to programs that benefit from space shuttle services or how to allocate civil service personnel costs to benefiting programs and projects. Once these issues were resolved, NASA would then have to configure the core financial module software to accommodate the new allocation structure and properly test the new configuration. Consequently, NASA’s Inspector General expressed concerns about NASA’s ability to meet its October 1, 2003, target date. In early October, we inquired about the status of full-cost accounting within the core financial module and IFMP officials told us that this capability would be fully implemented on October 26, 2003. However, because of the timing of this report, we did not verify whether this implementation date was met. If NASA is successful in implementing full-cost accounting, the new system should link all of NASA’s direct and indirect costs to specific programs and projects, and for the first time shed light on the full cost of these programs for external financial reporting purposes. As explained later, managerial cost accounting goes beyond providing the full cost of programs and projects and producing external financial reports, and is also critical for producing the type of cost information needed to effectively manage and oversee NASA’s programs. NASA did not adequately test key requirements or configure the core financial module software to satisfy these requirements prior to implementing the module. Adequately testing and configuring a system prior to implementation helps assure the integrity and effectiveness of transactions that will be processed through the system, thereby reducing the likelihood of rejected transactions, labor-intensive manual workarounds, and inaccurate data. 
However, prior to implementation, NASA tested only 120, or 53 percent, of the 225 unique financial events or transaction types identified by NASA as critical for carrying out day-to-day operations and producing external financial reports. NASA deferred implementation of the remaining 105 transaction types until after June 23, 2003, when the system would be implemented at all centers. Ideally, all transactions should be thoroughly tested prior to implementing a system. However, to meet the agency’s implementation schedule, NASA identified and deferred implementation of transactions that it determined would not have a significant or immediate impact on operations. For example, 29 of the deferred transactions were related to year-end closing procedures that would not be needed until September 30, 2003. However, other deferred transactions do have a significant and immediate impact on NASA’s operations throughout the year. For example, 40 transaction types were related to upward and downward adjustments to prior year data, many of which affected NASA’s ability to properly capture adjustments to obligations. Because NASA deferred implementing this capability, the agency has continued to rely on ad hoc, manual processes and “workarounds.” As discussed later, these are the same cumbersome manual processes that resulted in a $644 million error in NASA’s fiscal year 1999 financial statements. NASA hoped to implement most of these deferred transactions by October 2003. In mid-October, NASA officials told us that 75 of the 105 deferred transaction types had been implemented, and the remaining 30 transaction types would be implemented later in fiscal year 2004. Until the remaining transaction types are implemented, however, NASA must continue to process them outside of the module using manual procedures. In addition to the 105 transaction types that NASA has deferred, NASA also uses manual accounting entries to record 43, or 36 percent, of the 120 unique transaction types NASA considers implemented. NASA considers these 43 transaction types implemented because NASA has no current plans to automate them in the core financial module. Although manual accounting entries are sometimes necessary to record unusual or infrequent events, many of NASA’s manual entries are made to record routine events that should be processed electronically. For example, NASA uses summary-level manual processes to record all transactions occurring throughout the year related to its reported $37 billion of property. Such a large proportion of manual procedures runs contrary to the purpose of an automated system and makes the agency more vulnerable to processing errors and delays. In fact, prior to implementation, NASA’s consultant responsible for performing an independent compliance review of the core financial module raised concerns about the excessive number of transactions processed with manual journal voucher entries. Despite these concerns, NASA did not alter its implementation plan for the module. The core financial module may provide some improvements to NASA’s current accounting system environment by reducing the extensive amount of time and resources currently required to consolidate NASA’s 10 different reporting entities and close the books each accounting period. However, NASA did not thoroughly test or implement key requirements prior to implementation and has not used the new system as an opportunity to drive needed changes in its management practices and business processes. 
Therefore, the core financial module, as implemented in June 2003, does not (1) properly capture, record, and account for PP&E and materials balances or (2) provide key system requirements needed to prepare the agency’s Statement of Budgetary Resources. The core financial module, as implemented in June 2003, does not appropriately capture and record PP&E and material in the module’s general ledger at the transaction level. According to SGL requirements and NASA’s own accounting policy, recording PP&E and material in the general ledger at the transaction or item level provides independent control over these assets. However, NASA currently updates the core financial module’s general ledger using periodic summary-level manual entries. Although NASA plans to implement an integrated asset management module in 2005, this alone will not ensure that transaction-level detail is used to update the core financial module. NASA’s PP&E and materials are physically located at many locations throughout the world, including NASA centers, contractor facilities, other private or government-run facilities, and in space. NASA’s most significant challenge, with respect to property accounting, stems from property located at contractor facilities, which accounts for almost $11 billion, or about one-third, of NASA’s reported $37 billion of PP&E and materials and consists primarily of equipment being constructed for NASA or items built or purchased for use in the construction process. However, NASA has not reengineered its processes for capturing contract costs associated with PP&E and materials and therefore does not record these property costs in the general ledger at the transaction level. Instead, according to NASA officials, the agency plans to continue to (1) record the cost of PP&E and materials as expenses when initially incurred, (2) periodically determine which of those costs should have been capitalized, and (3) manually correct these records at a summary level. To illustrate, NASA’s contractors provide NASA with monthly contractor cost reports, which contain accrued cost information for any work performed during the month. However, these reports do not contain enough information for NASA to determine what portion of the reported cost pertains to the construction or acquisition of property and, therefore, NASA initially records all costs reported by its contractors as an expense. Then, on a quarterly or annual basis, NASA receives a property report from its contractors that provides summary-level information on the amount of property constructed or purchased and currently in the contractor’s possession. Based on these reports, NASA records the cost of contractor-held assets in its general ledger and reverses the expense previously recorded from the contractor cost reports. The problem with NASA’s current process for capturing, recording, and accounting for property in the possession of contractors is that it provides no way for NASA to ensure that the money it spends on the construction of its property is actually recorded as discrete property items. Although NASA plans to implement an integrated asset management module in 2005, the new system will not change the way NASA captures, records, and accounts for property in the possession of contractors. 
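The expense-then-reclassify pattern just described can be traced in a toy ledger. This is a sketch with illustrative account names, not NASA's actual chart of accounts, and it deliberately posts only summary amounts, mirroring the practice at issue.

```python
balances = {}

def post(account, amount):
    # Summary-level posting: the ledger is updated without any
    # transaction- or item-level detail behind the numbers.
    balances[account] = balances.get(account, 0) + amount

# Monthly contractor cost reports give accrued cost with no split between
# expense and property, so NASA initially expenses everything.
post("program_expense", 10_000_000)

# At quarter-end, the contractor's property report says $4 million of that
# was equipment under construction, so NASA reclassifies at summary level.
post("contractor_held_property", 4_000_000)
post("program_expense", -4_000_000)

print(balances)
# {'program_expense': 6000000, 'contractor_held_property': 4000000}
```

The ending balances look right, but nothing in the ledger ties the $4 million to discrete property items, which is the control gap the report describes.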
As noted above, because this problem stems from NASA’s inability to link accrued costs reported by its contractors with specific equipment items being constructed, it will not be alleviated when physical custody of the equipment is ultimately transferred to NASA and recorded in NASA’s property records. The core financial module does not capture and report certain key budgetary information needed to prepare the agency’s Statement of Budgetary Resources. Although the software that NASA purchased for the core financial module was certified by JFMIP as meeting all mandatory system requirements, NASA may have relied too heavily on the JFMIP certification. JFMIP has made it clear that its certification, by itself, does not automatically ensure compliance with the goals of FFMIA. Other important factors that affect compliance with Federal Financial Management System Requirements (FFMSR) include how well the software has been configured to work in the agency’s environment and the quality of transaction data in the agency’s feeder systems. When NASA later tested specific requirements related to adjustments to prior year obligations, the core financial module failed the test. Consequently, NASA deferred implementation of those requirements and opted to rely on manual compilations, system queries, or other workarounds to compensate for the system’s inadequacies. These workarounds are known to have caused reporting problems in the past. According to FFMSR, an agency’s core financial module should automatically classify and record upward and downward adjustments of prior year obligations to the appropriate general ledger accounts. However, NASA’s core financial module, as implemented in June 2003, does not provide this capability. For example, if an upward adjustment is required because an invoice includes costs not previously included on the purchase order, such as shipping costs, the system erroneously posts the upward adjustment to a prior year obligation instead of a current year obligation. Because the system does not properly capture and report these adjustments, NASA must rely on manual compilations and system queries to extract the data needed to prepare the agency’s Statement of Budgetary Resources—just as it did using its legacy general ledger systems. As we reported in March 2001, this cumbersome, labor-intensive effort to gather the information needed at the end of each fiscal year was the underlying cause of a $644 million misstatement in NASA’s fiscal year 1999 Statement of Budgetary Resources. During its initial test of system requirements but prior to implementation at Marshall Space Flight Center and Glenn Research Center in October 2002, NASA became aware of the software’s limitations regarding upward and downward adjustments to prior year obligations. In order to meet its schedule, NASA IFMP officials deferred further system modifications to meet these requirements and opted to rely on a manual workaround to satisfy the federal requirement for upward and downward adjustments. NASA’s consultant responsible for performing an independent compliance review of the core financial module raised concerns about this approach. Despite these concerns, NASA went forward with its plans. At the time, NASA had hoped that a “patch” release or future software upgrade would remedy the problem and then NASA could incorporate the fix into the phased agency rollout of the core financial module. 
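The posting rule at issue can be expressed as a small classifier. This sketch shows the behavior the requirement calls for, using descriptive bucket names rather than actual SGL account numbers; it generalizes from the single shipping-cost example in the text, so treat the scope test as illustrative.

```python
def classify_adjustment(amount, obligation_fy, current_fy, new_requirement):
    # Costs that were never part of the original order (the shipping-cost
    # example above) obligate current-year funds; true changes to an
    # obligation incurred in a prior fiscal year post as upward or
    # downward adjustments to that prior year.
    if new_requirement or obligation_fy == current_fy:
        return "current-year obligation"
    return ("upward adjustment to prior-year obligation" if amount > 0
            else "downward adjustment to prior-year obligation")

# The failure mode described above: shipping costs not on the original
# purchase order belong to the current year.
print(classify_adjustment(500, obligation_fy=2002, current_fy=2003,
                          new_requirement=True))
# -> 'current-year obligation' (the module posted it to the prior year instead)
```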
However, the upgrades incorporated after the initial implementation at Marshall and Glenn did not resolve all of the issues related to upward and downward adjustments. As a result, NASA continued to face significant problems in this area. According to NASA officials, the agency continued to work with the software vendor to reconfigure the software as necessary to accommodate adjustments to prior year obligations. NASA expected a new software patch to resolve any remaining problems by October 1, 2003. However, in mid-October, NASA officials acknowledged that it might be some time before this issue would be resolved completely. Until then, NASA will continue to rely on manual workarounds. NASA’s implementation of the core financial module has also created new reporting issues. Specifically, the core financial module does not appropriately capture accrued costs and record the corresponding liabilities as accounts payable. In addition, the core financial module records obligations to the general ledger before the obligations are legally binding. Although NASA knew about these problems prior to implementation, the agency went forward with its implementation plans. The core financial module, as implemented in June 2003, does not appropriately capture and record accrued contract costs and accounts payable information in accordance with federal accounting standards and NASA’s own financial management manual. Specifically, the core financial module does not capture accrued costs or record accounts payable if cumulative costs are in excess of obligations for a given contract. As of June 30, 2003, NASA had neither processed approximately $245 million in costs that exceeded obligations nor recorded the corresponding accounts payable, even though this amount represented a legitimate liability for NASA. Instead, these transactions are held outside of the general ledger in suspense until additional funds can be obligated. Thus, the costs and liabilities shown in any report would likely be understated by the amount of costs held in suspense at the time of the report. Federal accounting standards and NASA’s own financial management manual require costs to be accrued in the period in which they are incurred and any corresponding liability recorded as an account payable, regardless of amounts obligated. Further, federal standards require that agencies disclose unfunded accrued costs—or costs in excess of obligations. However, NASA has designed the core financial module such that it will not post costs to the general ledger if they exceed the amount obligated. According to NASA officials, this is intended to be a “red flag” or internal control that alerts agency managers to potential cost overruns. While we agree that NASA could benefit from information that provides an early warning sign of possible cost or schedule problems, we disagree with NASA’s approach. Appropriately posting costs and accounts payable to the general ledger does not preclude NASA from monitoring unfunded accrued costs. Further, as we reported in April 2003, to adequately oversee NASA’s contracts, program managers need reliable contract cost data—both budgeted and actual—and the ability to integrate this data with contract schedule information to monitor progress on the contract. However, because program managers were not involved in defining system requirements or reengineering business processes, the core financial module is not being designed to integrate cost and schedule data needed by program managers. 
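The alternative the report describes (post the accrual, flag the overrun) is easy to see in miniature. A sketch with illustrative account names; the contract record and threshold logic are assumptions for illustration.

```python
balances = {}

def accrue_contract_cost(contract, cost):
    # Per the accrual rule above: recognize the cost and the payable when
    # incurred, regardless of the obligation balance.
    balances["program_cost"] = balances.get("program_cost", 0) + cost
    balances["accounts_payable"] = balances.get("accounts_payable", 0) + cost
    contract["cumulative_cost"] += cost
    unfunded = contract["cumulative_cost"] - contract["obligated"]
    if unfunded > 0:
        # The early-warning "red flag" NASA wanted, delivered as a
        # disclosure and an alert rather than by suppressing the posting.
        print(f"warning: ${unfunded:,} of accrued cost exceeds obligations")

contract = {"obligated": 1_000_000, "cumulative_cost": 900_000}
accrue_contract_cost(contract, 300_000)
# warning: $200,000 of accrued cost exceeds obligations
print(balances)  # {'program_cost': 300000, 'accounts_payable': 300000}
```

Posting and flagging preserves both the control objective and the completeness of the ledger; holding costs in suspense, as the module does, understates both costs and liabilities until funds are obligated.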
The core financial module was intended to streamline many of NASA’s processes and eliminate the need for many paper documents. However, in some areas, the new system has actually increased NASA’s workload. Specifically, because the core financial software allows obligations to be posted to the general ledger before a binding agreement exists, NASA must process purchase orders and contract documents outside the system until they are signed, or otherwise legally binding. At that point, NASA initiates the procurement action in the system and repeats the steps that were manually performed outside the system previously. Federal law requires that no amount be recorded as an obligation unless it is supported by documentary evidence of, among other things, a binding agreement. However, the processes that are embedded in the core financial module for processing purchase orders and contract documents do not accommodate this requirement. To illustrate, authorized users create electronic purchase requests in the system and release or forward the request to the appropriate approving official for electronic signature. Once signed, the purchase request is forwarded electronically to the purchasing department where purchasing staff create an electronic purchase order, secure a vendor, and place the order. According to federal appropriations law, a purchase order constitutes an obligation when the order is placed and when all relevant parties sign the purchase order. However, if a purchase order is entered into the system before it is finalized, the module automatically records the obligation. Similarly, if a contract or contract modification is entered into the module before it is signed and legally binding, the module automatically records the obligation. According to NASA officials, they are working with the software vendor to develop a solution and expect that the new software upgrade to be released on October 1, 2004, will alleviate this problem. In the meantime, they will manually process documents outside of the system and monitor any documents that have been recorded without signatures to ensure that obligations are not overstated at month-end. The system limitations discussed previously related to full-cost accounting, property accounting, budgetary accounting, accrued costs, and accounts payable—combined with the findings from our April 2003 report—indicate that NASA’s new core financial module and related systems, as implemented in June 2003, do not substantially comply with the requirements of FFMIA. This act provides agencies a blueprint for building fully integrated financial management systems that routinely provide decision makers with timely, reliable, and useful financial information. FFMIA requires agencies to implement and maintain financial management systems that substantially comply with (1) FFMSR, (2) the SGL at the transaction level, and (3) applicable federal accounting standards. Although NASA has made progress in addressing some of its financial management system weaknesses, the agency’s core financial module does not yet provide all the building blocks needed to achieve the ultimate goal of FFMIA. The core financial module, as implemented in June 2003, does not comply substantially with FFMSR. To ensure that automated federal financial management systems comply with this standard and provide the critical information needed for decision making, JFMIP issued specific functional requirements that core financial systems must meet in order to substantially comply with FFMIA. 
Compliance with this standard, at a minimum, means the core financial module must be configured to (1) ensure consistent and accurate processing, reporting, and tracking of program expenditures and budgetary resources, and (2) ensure that transactions are processed and recorded in accordance with laws and regulations, and federal accounting standards. However, the core financial module—although it uses software certified by JFMIP—does not perform all mandatory functions. Specifically, the module (1) does not capture and record upward and downward adjustments of obligations incurred in prior fiscal years, and (2) posts obligations to the general ledger prior to approval. Among other things, FFMSR requires federal financial management systems to produce accurate and reliable information for budgetary reports, including the Statement of Budgetary Resources and the Report on Budget Execution and Budgetary Resources (Standard Form 133). As previously discussed, the core financial module does not capture and record upward and downward adjustments of obligations incurred in prior fiscal years, which is essential for producing both the Statement of Budgetary Resources and Standard Form 133 reports. In addition, FFMSR requires federal financial management systems to process transactions in accordance with federal appropriations law, which states that no amount may be recorded as an obligation unless it has been approved and is supported by documentary evidence. As a result of system limitations we have discussed, the core financial module erroneously posts obligations to the general ledger prior to approval. The core financial module, as implemented in June 2003, does not substantially comply with the SGL at the transaction level. The SGL requirements ensure consistency in financial transaction processing and external reporting. Compliance with this standard, at a minimum, means that the core financial module must be configured such that (1) reports produced by the systems containing financial information can be traced directly to general ledger accounts, (2) transaction details supporting general ledger account balances are available and can be directly traced to specific general ledger accounts, and (3) the criteria (e.g., timing, processing rules/conditions) for recording financial events are consistent with accounting transaction definitions and processing rules defined in the SGL. As discussed previously, the core financial module does not accumulate transaction-based support for adjustments to prior year obligations, which is essential for producing the Statement of Budgetary Resources and Standard Form 133 reports. Instead, NASA must rely on estimates, manual compilations, and system queries to extract the data needed to prepare these required budgetary reports. As a result, key budgetary information reported on the Statement of Budgetary Resources and Standard Form 133 cannot be traced directly to NASA’s general ledger accounts. NASA also does not properly record PP&E and materials as assets when they are first acquired. Instead, NASA initially records these items as expenses and then later corrects these records using manual procedures. Although this manual process provides NASA a vehicle for reporting PP&E and material costs for financial statement reporting, it is not sufficient for compliance with the SGL. Finally, NASA does not maintain transaction-level detail for its contractor-held property.
Instead, it relies solely on its contractors to maintain such records and to periodically report summary-level information on these assets to NASA. This situation has resulted in material weaknesses over this property, as previously reported by NASA’s current independent auditor. The core financial module and related systems, as implemented in June 2003, do not substantially comply with federal accounting standards. Compliance with these standards is essential to providing useful and reliable financial information to external and internal users. Federal accounting standards are the authoritative requirements that guide agencies in developing financial management systems, as well as preparing financial statements. However, as discussed previously, the core financial module did not, as of June 2003, process and report financial information in accordance with federal accounting standards. The major reasons for the module’s noncompliance with federal accounting standards are as follows. The core financial module does not comply with SFFAS No. 1, Accounting for Selected Assets and Liabilities. This standard states that a liability should be recognized and recorded as an account payable when contractors construct facilities or equipment for the government. The liability should be based on an estimate of work completed. However, the core financial module does not capture accrued costs or record accounts payable when the cumulative costs for a given contract exceed obligations. Instead, these transactions are held outside the general ledger, in suspense, until additional funds are obligated, thus understating NASA’s reported program costs and liabilities. The core financial module does not yet provide full-cost accounting capabilities in accordance with SFFAS No. 4, Managerial Cost Accounting Standards. This standard requires agencies to report the full cost of their programs in their general-purpose financial reports. However, as previously discussed, NASA, as of June 2003, had not defined, configured, or tested the appropriate cost pools and cost allocation structure, which are critical to implementing full-cost accounting. The core financial module does not comply with the broader objective of SFFAS No. 4, Managerial Cost Accounting Standards. The concepts and standards included in SFFAS No. 4 are aimed at achieving three general objectives: (1) providing program managers with relevant and reliable information relating costs to program outputs, (2) providing relevant and reliable cost information to assist the Congress and executives in making decisions about allocating federal resources and evaluating program performance, and (3) ensuring consistency between costs reported in general purpose financial reports and costs reported to program managers. However, as we reported in April 2003, the core financial module does not provide program managers, cost estimators, or the Congress with managerially relevant cost information that they need to effectively manage and oversee NASA’s contracts and programs. As a result, NASA’s continuing inability to provide its managers with timely, relevant data on the cost, schedule, and performance of its programs is a key reason that GAO continues to report NASA’s contract management as an area of high risk. Because this information is not available through the core financial module, program managers will continue to rely on hard copy reports, electronic spreadsheets, or other means to monitor contractor performance. 
Consequently, NASA risks operating with two sets of books—one that is used to report information in the agency’s general-purpose financial reports and another that is used by program managers to run NASA’s projects and programs. Compliance with federal accounting standards goes far beyond receiving a “clean” opinion on financial statements. A key indicator that an agency’s financial management systems do not substantially comply with federal accounting standards is the existence of material weaknesses in the agency’s internal controls. As noted earlier, NASA has not addressed material weaknesses in its internal controls and processes over PP&E and materials, which make up nearly 85 percent, or $37 billion, of NASA’s assets. Instead, NASA plans to rely on existing legacy systems and processes—including the extensive use of manual accounting entries—that the agency’s independent auditor has found to be inadequate for property accounting. As a result, NASA faces serious challenges in complying with these standards. Although NASA plans to implement an integrated asset management module in 2005, most of NASA’s issues related to property accounting have little to do with the lack of an integrated system. Instead, NASA faces two key challenges with respect to property accounting: (1) reengineering its processes for capturing and recording transaction-level detail in the core financial module’s general ledger and (2) addressing material weaknesses in its internal controls over property previously identified by NASA’s independent auditors. To date, NASA has yet to define specific requirements for its asset management module or determine how it plans to overcome the previously identified material weaknesses in NASA’s internal controls over PP&E and material. If NASA continues on its current track, the core financial module and IFMP will fail to achieve the agency’s stated objective of providing reliable, timely financial information for both internal management decision-making and external reporting purposes. Thus far, NASA has focused on deploying the system on its established schedule, rather than ensuring that it satisfies the agency’s internal management and external reporting requirements. To meet its schedule, NASA has put off addressing user requirements that would necessitate significant business process reengineering or extensive software configuration. While NASA is meeting its implementation milestones, it is only able to do so because the agency has deferred critical system capabilities, such as the ability to properly capture, record, and account for its PP&E and material; process budgetary accounting entries; and provide managerially relevant cost information. Until, and unless, the agency deals with these issues, NASA risks making a substantial investment in a system that will fall far short of its stated goal of providing meaningful information for both internal management and external reporting purposes. Based on the findings from this review, in conjunction with our April 2003 report, we reiterate our April 2003 recommendation that NASA: engage stakeholders—including program managers, cost estimators, and the Congress—in developing a complete and correct set of user requirements; and reengineer its acquisition management processes, particularly with respect to the consistency and detail of budget and actual cost and schedule data provided by contractors. 
We also recommend that the NASA Administrator direct the Program Executive Officer for IFMP to implement a corrective action plan in coordination with NASA’s Chief Financial Officer that will produce financial management systems that comply substantially with the requirements of FFMIA, including capabilities to produce timely, reliable, and useful financial information related to: property, plant, equipment, and materials; budgetary information including adjustments to prior year obligations; accounts payable and accrued costs; and the full cost of programs for financial reporting purposes. This plan should include time frames and details on how any changes will be monitored, tested, and documented. In written comments, reprinted in appendix II, NASA disagreed with all of our conclusions and recommendations in part because we reviewed the status of the core financial module as of June 23, 2003, instead of September 30, 2003—the date used for FFMIA reporting. Although NASA takes issue with the date of our review, it is important to note that we selected June 2003 because NASA represented that the core financial module was fully operational at all of its centers at that time. In making that representation, NASA officials acknowledged that, as part of their implementation strategy, they had not yet converted the system to support full-cost accounting. However, they did not disclose any other deferred capabilities. Moreover, NASA’s comments assert that for PP&E and budgetary reporting, the manual processes or workarounds it has developed to produce year-end balances for the agency’s annual financial statements also satisfy the requirements of FFMIA. We disagree with this assertion. The development of significant manual workarounds in these areas masks the fact that NASA’s core financial module is not designed to, and cannot, produce timely and reliable PP&E and budgetary data with traceability to transaction-based support. The ability to produce reliable numbers once a year for financial reporting purposes does not by itself constitute FFMIA compliance. In its written comments, NASA indicated that it has made changes to the module since June and that the core financial module as implemented in October 2003 has many of the capabilities that were lacking in the June 2003 module. Although we requested status updates between June and October to track NASA’s progress, we did not reassess the module’s capabilities as of October 2003. However, with the possible exception of full-cost accounting, which was planned for October 1, 2003, the changes NASA has cited still involve manual workarounds for producing year-end numbers. FFMIA goes beyond producing auditable financial statements once a year and requires financial systems that ensure accountability on an ongoing basis throughout the year. In response to our April 2003 recommendation, which we have restated in this report, to reengineer its acquisition management processes, particularly with respect to the consistency and detail of budgeted and actual cost and schedule data provided by contractors, NASA indicated that it is in the process of addressing a number of our concerns. Specifically, NASA stated that it (1) has extended the data structure embedded in the core financial module to capture more detailed cost data, (2) is currently assessing its contractor reporting requirements, and (3) is evaluating the possibility of accommodating contract cost and schedule data in an integrated environment.
While it is too early to assess the significance or impact of NASA’s current effort, we are encouraged that NASA is considering the possibility of reengineering its acquisition management processes. This would be an important first step toward ensuring that NASA’s contractors provide the appropriate level and type of cost data needed for both internal management and external reporting purposes and that the core financial module is properly configured to support the agency’s information needs. However, we continue to believe it would have been more effective and efficient if NASA had conducted its assessment of contractor reporting requirements as part of a larger reengineering effort prior to configuration of the core financial module. Further, any effort that falls short of end-to-end business process reengineering will likely not result in a system that substantially improves the data available for contract oversight or ensures consistency between costs reported in general purpose financial reports and costs reported to program managers. In its written comments, NASA also emphasized that the core financial module alone cannot meet all of the functional requirements needed to manage a program or to prepare cost estimates and asserted that applications such as Erasmus, an executive-level program performance reporting tool, will enable NASA to meet the full depth and breadth of user requirements. We agree that the core financial module alone cannot meet all of NASA’s information needs and that an executive-level reporting tool such as Erasmus may provide NASA executives with greater visibility over program performance. However, Erasmus does little to help program managers oversee contractor performance, and like the core financial module, may contain cost data that are not consistent or reconcilable with cost data used by program managers to manage contracts. The underlying problem, as we reported in April 2003, is that NASA uses one set of contractor-reported cost data to update the core financial module while program managers use a separate set of contractor-reported cost data that resides outside the system to monitor contractor performance. Consequently, the cost data maintained in the core financial module and reported in NASA’s external financial reports are not consistent or reconcilable with cost data used by program managers to manage contracts. Finally, NASA stated that the asset management module, scheduled for implementation in 2005, will make a significant contribution to its program management and cost estimating activities. This module is primarily intended to maintain detailed property records for NASA-held property. Thus, we do not believe an asset management module would have any impact on the cost, schedule, and performance data needed for program management and cost estimating. NASA disagreed with our recommendation related to IFMP’s ability to produce timely, reliable, and useful information for PP&E and materials in accordance with FFMIA requirements. NASA represented that its current processes for capturing and recording property for financial statement reporting purposes also meet the requirements of FFMIA because it has begun requiring more frequent and detailed property reporting by its 55 largest contractors. We disagree with NASA’s assertion.
Because NASA’s current contractor cost-reporting processes do not provide the information needed to distinguish between capital and non-capital expenditures, NASA currently records as expenses all contractor costs as they are incurred and then manually adjusts previous entries to record assets based on periodic summary-level contractor property reports. While this process may satisfy NASA’s financial statement reporting needs, the development of significant manual workarounds in this area masks the fact that NASA’s core module is not designed to and cannot produce timely and reliable PP&E data with traceability to transaction-based support. The ability to produce reliable numbers once a year for financial reporting purposes does not equate to FFMIA compliance. In accordance with FFMSR, federal accounting standards, and the SGL, when an agency incurs costs for the purchase or construction of PP&E and material, those costs should be recorded in both the agency’s asset management system and its core financial management systems’ general ledger. The only difference for contractor-held property is that the asset management system belongs to the contractor. The asset management system, whether NASA’s or its contractors’, would maintain the agency’s detailed logistical property records for PP&E and materials—including information related to asset location, date of purchase, useful life, quantity, cost, and condition—and the core financial module’s general ledger would maintain a cumulative balance of all purchased or constructed property based on the cost incurred for individual items. The ability to reconcile detailed transactions in the asset management system with amounts recorded in the general ledger provides an efficient way to maintain independent general ledger control over these assets. As mentioned above, NASA first expenses all PP&E in the core financial module, and then later, makes adjustments to record the costs of PP&E as assets at a summary level. There is currently no traceability from the core financial module general ledger to the detailed logistical property records of PP&E and materials. NASA also stated that one of the objectives of the asset management module, now in formulation, is to significantly improve reporting for contractor-held property. While it is our understanding that NASA’s new asset management module, as planned, will maintain detailed property records for NASA-held property and be integrated with other IFMP modules, including the core financial module, we know of no plans to add contractor-held property to this system. In fact, the Federal Acquisition Regulation requires contractors to maintain the logistical property records for government property in their possession and prohibits government agencies from maintaining duplicate property records. Under these circumstances, as part of an overall effort to reengineer its acquisition management process, we believe that NASA must capture the cost and other information it needs from its contractors and develop traceability to contractor logistical records to ensure accountability over its contractor-held property on an ongoing basis. NASA disagreed with our recommendation regarding its ability to produce reliable, timely, and useful budgetary information, including adjustments to prior year obligations.
NASA stated that although it identified certain transactional reporting limitations in its initial deployment of the core financial module, it developed alternative or “workaround” procedures to ensure the accurate and timely reporting of the identified transactions. However, as stated previously, we do not believe that the manual processes or workarounds NASA uses to produce year-end balances for the agency’s annual financial statements satisfy the requirements of FFMIA. While NASA’s written comments indicate that many of these deferred capabilities were largely enabled by September 30, 2003, they also indicate that more time will be required before the module can process adjustments to prior year obligations. As a result, NASA must use manual workarounds to process these transactions related to fiscal year 2003 activity. We note that these are the same manual procedures used to compensate for deficiencies in NASA’s legacy systems that resulted in the $644 million error in NASA’s fiscal year 1999 Statement of Budgetary Resources. NASA disagreed with our conclusion that its overall financial management system does not properly capture and report all accrued costs and accounts payable. However, we did not report that the information was not contained within the system; rather, we reported that it was not posted to the general ledger. We recognize that NASA records costs that exceed current obligations in the IFMP business warehouse until additional funds are obligated and in order to highlight or detect potential program cost overruns. While we encourage NASA’s effort to monitor costs in excess of obligations, we do not believe its method for doing so is appropriate. We continue to believe that these costs should be properly recorded in the general ledger in the period in which they are incurred. The risk in NASA’s method is that when costs and liabilities are not properly recorded in the general ledger, these balances are likely to be understated in any financial reports produced during the year, as well as at year-end. It is also important to note that comparing costs with obligations will not necessarily detect a cost overrun. For example, this strategy would not have alerted NASA to its largest cost overrun in recent years—the $5 billion cost growth in the International Space Station program reported in 2001. This overrun was not the result of incurring more costs than the funds obligated. Instead, it was due to the cost growth projected to occur in the future—i.e., growth in the estimated costs to complete the program. This cost overrun went undetected for a long period of time because of NASA’s deeply-rooted culture of managing programs based on current year budgets rather than total costs. As we reported in 2002, for NASA to manage its program costs properly, it needs to focus on the total costs of a program rather than just annual budgets. Thus, NASA’s plan to hold costs in suspense when they exceed obligations will not make such cost overruns any easier to detect or manage. Instead, as we reported in April 2003, to adequately oversee NASA’s contracts, program managers need reliable contract cost data—both budgeted and actual—and the ability to integrate these data with contract schedule information to monitor progress on the contract. However, because program managers were not involved in defining system requirements or reengineering business processes, the core financial module was not designed to integrate cost and schedule data needed by program managers. 
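The limitation of the costs-versus-obligations check can be shown with a small illustrative calculation. The figures below are hypothetical, chosen only to echo the scale of the space station example; the point is that an overrun driven by growth in estimated costs to complete never trips a comparison of incurred costs against obligations, while a total-cost comparison of the estimate at completion against the program budget does.

# Hypothetical figures, in billions of dollars, for illustration only.
budget_total = 17.0          # total approved program cost
obligated_to_date = 6.0
costs_incurred = 5.5
estimate_to_complete = 16.5  # projected future costs have grown

# Suspense-style check: compares incurred costs with obligations.
print("costs exceed obligations:", costs_incurred > obligated_to_date)  # False, no alarm

# Total-cost check: estimate at completion against the program budget.
estimate_at_completion = costs_incurred + estimate_to_complete
print("projected overrun:", max(0.0, estimate_at_completion - budget_total))  # 5.0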
NASA also disagreed with our recommendation concerning its system’s ability to account for the full cost of its programs and asserted that it completed implementation of its full-cost accounting capability within IFMP as of October 1, 2003. However, IFMP management told us in early October that this capability would not become operational until October 26, 2003, after NASA completed its year-end closing procedures. Because of our reporting time frame, we did not conduct the detailed procedures that would have been necessary to determine whether or not this function had begun operating. As agreed with your offices, unless you announce its contents earlier, we will not distribute this report further until 30 days from its date. At that time, we will send copies to interested congressional committees, the NASA Administrator, and the Director of the Office of Management and Budget. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions concerning this report, please contact me at (202) 512-9505 or [email protected], Keith Rhodes at (202) 512-6412 or [email protected], or Diane Handley at (404) 679-1986 or [email protected]. Key contributors to this report are acknowledged in appendix III. The objective of this report was to assess whether the National Aeronautics and Space Administration (NASA) Integrated Financial Management Program’s (IFMP) core financial module, as implemented in June 2003, would satisfy NASA’s external reporting requirements, such as reliable and auditable financial statements, congressional information needs, and other reporting requirements. Specifically, we assessed whether the core financial module (1) accurately accounts for Property, Plant, and Equipment (PP&E) and materials and supplies, (2) properly accounts for the full cost of NASA’s projects and programs, (3) captures and reports certain key budgetary information, (4) accurately records accounts payable, and (5) complies substantially with the requirements of the Federal Financial Management Improvement Act (FFMIA) of 1996. We did not assess other aspects of the core financial module’s capabilities. We interviewed officials from NASA’s financial management division and the NASA Office of Inspector General to identify various reporting requirements and weaknesses in meeting these requirements, and to determine how the core financial module will provide the data needed to meet these requirements. We evaluated fiscal year 2002 internal control weaknesses reported by PricewaterhouseCoopers, NASA’s independent auditors, related to PP&E, material and supplies, and financial reporting. However, for the purposes of this report we did not review the auditors’ underlying work paper support. We also reviewed NASA’s process for preparing the Statement of Budgetary Resources and reporting accounts payable, and any related issues identified by auditors. We reviewed applicable Treasury, Office of Management and Budget, and NASA guidance, and related federal accounting standards as well as federal financial management system requirements promulgated by the Joint Financial Management Improvement Program. At two NASA centers, we observed how transactions are recorded in the general ledger within the core financial module and discussed these processes with users of the system.
We reviewed nonrepresentative selections of transactions for PP&E, materials, accounts payable, and budgetary transactions. We traced selected transactions to their source documents, and also traced selected source documents to the general ledger. We assessed whether transactions were recorded consistently with the Treasury Financial Manual. We also observed and discussed how information on contractor cost reports is recorded in the core financial module. We interviewed various officials from IFMP and its core financial project design and implementation teams, including the IFMP Deputy Program Director, the Core Financial Project Manager, and the Core Financial Deputy Project Manager to clarify our understanding of the core financial module’s functions and obtain the most recent information on the status of various implementation issues as of June 2003. We also reviewed relevant audit reports from the NASA IG and the results of an independent compliance review on the core financial module performed by NASA’s consultant. We performed our work primarily at NASA headquarters in Washington, D.C. and the two NASA centers—Marshall Space Flight Center in Huntsville, Alabama and Glenn Research Center in Cleveland, Ohio—where the core financial module was implemented first. Our work was performed from April 2003 through September 2003 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the NASA Administrator or his designee. Written comments from the NASA Deputy Administrator are presented and evaluated in the “Agency Comments and Our Evaluation” section of this report and are reprinted in appendix II. Staff members who made key contributions to this report were Shawkat Ahmed, Fannie Bivins, Kristi Karls, Chris Martin, and Maria Storts. | In April 2000, the National Aeronautics and Space Administration (NASA) began its Integrated Financial Management program (IFMP), its third attempt at modernizing its financial management processes and systems.
In April 2003, GAO reported that NASA's acquisition strategy has increased the risk that the agency will implement a system that will cost more and do less than planned. This report is one of a series of reviews of NASA's acquisition and implementation of IFMP, and focuses on the core financial module's ability to provide the information necessary for external financial reporting. The core financial module of IFMP provides NASA its first agencywide accounting system--a significant improvement over the 10 disparate systems previously used. However, to meet IFMP's aggressive implementation schedule, NASA deferred testing and implementation of many key requirements of the core financial module. Consequently, when NASA announced, in June 2003, that this module was fully operational at each of its 10 centers, about two-thirds of the financial events or transaction types needed to carry out day-to-day operations and produce external financial reports had not been implemented in the module. NASA officials acknowledged that, as part of their implementation strategy, they had not yet converted the module to support full-cost accounting. In addition, we found that NASA also deferred implementation of other key core financial module capabilities. Because NASA did not use disciplined processes for defining, managing, and testing key system requirements, or substantially reengineer its business processes prior to implementation, the core financial module, as implemented in June 2003, does not address several long-standing external reporting issues and has created some new problems. Long-standing external financial reporting issues have not been addressed. NASA has not used its implementation of the core financial module as an opportunity to drive needed changes in its management practices and business processes. Therefore, the system does little to improve NASA's ability to properly account for $37 billion of reported property or certain aspects of the agency's $15 billion annual budget. New financial reporting problems have emerged. NASA went forward with its aggressive implementation plans even though agency managers knew of problems with the module's ability to properly process and record certain transactions. As a result, the module does not appropriately capture critical information on the cost of NASA's operations, such as certain accrued costs, accounts payable, and obligation transactions. In April 2003, GAO reported that the core financial module did not address key internal management information requirements. Now, GAO has found that the module cannot reliably provide key financial data needed for external financial reporting. Although NASA intends to address many of these issues, its implementation approach raises concerns over its ability to do so. These deferred external reporting capabilities, combined with the findings from our April 2003 report, indicate that NASA's June 2003 core financial module and related systems do not substantially comply with the requirements of the Federal Financial Management Improvement Act (FFMIA). FFMIA addresses the need for agencies' financial systems to provide value to those who use financial data. NASA must address these issues if the core financial module and IFMP are to achieve the objective of providing reliable, timely financial information for both internal management decision-making and external reporting purposes.
You are an expert at summarizing long articles. Proceed to summarize the following text:
The FEHBP is the largest employer-sponsored health insurance program in the country. Through it, about 8 million federal employees, retirees, and their dependents received health coverage—including for prescription drugs—in 2008. Coverage is provided under competing plans offered by multiple private health insurers under contract with OPM, which administers the program, subject to applicable requirements. In 2009, 269 health plan options were offered by participating insurers, 10 of which were offered nationally while the remaining health plan options were offered in certain geographic regions. According to OPM, plans must cover all medically necessary prescription drugs approved by the Food and Drug Administration (FDA), but plans may maintain formularies that encourage the use of certain drugs over others. Enrollees may obtain prescriptions from retail pharmacies that contract with the plans or from mail-order pharmacies offered by the plans. In 2005, FEHBP prescription drug spending was an estimated $8.3 billion. Medicare—the federal health insurance program that serves about 45 million elderly and disabled individuals—offers an outpatient prescription drug benefit known as Medicare Part D. This benefit was established by the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) beginning January 1, 2006. As of February 2009, Part D provided federally subsidized prescription drug coverage for nearly 27 million beneficiaries. The Centers for Medicare & Medicaid Services (CMS), part of the Department of Health and Human Services (HHS), manages and oversees Part D. Medicare beneficiaries may choose a Part D plan from multiple competing plans offered nationally or in certain geographic areas by private sponsors, largely commercial insurers, under contract with CMS. Part D plan sponsors offer drug coverage either through stand-alone prescription drug plans for beneficiaries in traditional fee-for-service Medicare or through Medicare managed care plans, known as Medicare Advantage. In 2009, there were over 3,700 prescription drug plans offered. Under Medicare Part D, plans can design their own formularies, but each formulary must include drugs within each therapeutic category and class of covered Part D drugs. Enrollees may obtain prescriptions from retail pharmacies that contract with the plans or from mail-order pharmacies offered by the plans. Medicare Part D spending is estimated to be about $51 billion in 2009. The VA pharmacy benefit is provided to eligible veterans and certain others. As of 2006, about 8 million veterans were enrolled in the VA system. In general, medications must be prescribed by a VA provider, filled at a VA pharmacy, and listed on the VA national drug formulary, which comprises 570 categories of drugs. In addition to the VA national formulary, VA facilities can establish local formularies to cover additional drugs. VA may provide nonformulary drugs in cases of medical necessity. In 2006, VA spent an estimated $3.4 billion on prescription drugs. The DOD pharmacy benefit is provided to TRICARE beneficiaries, including active duty personnel, certain reservists, retired uniformed service members, and dependents. As of 2009, there were about 9.4 million eligible TRICARE beneficiaries. In addition to maintaining a formulary, DOD provides options for obtaining nonformulary drugs. 
Beneficiaries can obtain prescription drugs through a network of retail pharmacies, nonnetwork retail pharmacies, DOD military treatment facilities, and DOD’s TRICARE Mail-Order Pharmacy. In 2006, DOD spent $6.2 billion on prescription drugs. Medicaid, a joint federal-state program, finances medical services for certain low-income adults and children. In fiscal year 2008, approximately 63 million beneficiaries were enrolled in Medicaid. While some benefits are federally required, outpatient prescription drug coverage is an optional benefit that all states have elected to offer. Drug coverage depends on the manufacturer’s participation in the federal Medicaid drug rebate program, through which manufacturers pay rebates to state Medicaid programs for covered drugs used by Medicaid beneficiaries. Retail pharmacies distribute drugs to Medicaid beneficiaries and then receive reimbursements from states for the acquisition cost of the drug and a dispensing fee. Medicaid outpatient drug spending has decreased since 2006 because Medicare Part D replaced Medicaid as the primary source of drug coverage for low-income beneficiaries with coverage under both programs—referred to as dual eligible beneficiaries. In fiscal year 2008, Medicaid outpatient drug spending was $9.3 billion—including $5.5 billion as the federal share—which was calculated after adjusting for manufacturer rebates to states under the Medicaid drug rebate program. FEHBP uses competition among health plans as the primary measure to control prescription drug spending and other program costs. Under an annual “open season,” enrollees may remain enrolled in the same plan or select another competing plan based on benefits, services, premiums, and other such factors. Thus, plans have the incentive to try to retain or increase their market share by providing the benefits sought by enrollees along with competitive premiums. In turn, the larger a plan’s market share, the more leverage it has for obtaining favorable drug prices on behalf of its enrollees and controlling prescription drug spending. Similar to most private employer-sponsored or individually purchased health plans, most FEHBP plans contract with pharmacy benefit managers (PBMs) to help them administer the prescription drug benefit and control drug spending. In a 2003 report reviewing the use of PBMs by three plans representing about 55 percent of total FEHBP enrollment, we found that the PBMs used three key approaches to achieve savings for the health plans: negotiating rebates with drug manufacturers and passing some of the savings to the plans; obtaining drug price discounts from retail pharmacies and dispensing drugs at lower costs through mail-order pharmacies operated by the PBMs; and using other intervention techniques that reduce utilization of certain drugs or substitute other, less costly drugs. 
For example, under generic substitution PBMs substituted less expensive, chemically equivalent generic drugs for brand-name drugs; under therapeutic interchange PBMs encouraged the substitution of less expensive formulary brand-name drugs for more expensive nonformulary drugs within the same drug class; under prior authorization PBMs required enrollees to receive approval from the plan or PBM before dispensing certain drugs that are high cost or meet other criteria; and under drug utilization review PBMs examined prescriptions at the time of purchase or retrospectively to assess safety considerations and compliance with clinical guidelines, including appropriate quantity and dosage. The PBMs were compensated by retaining some of the negotiated savings. The PBMs also collected fees from the plans for administrative and clinical services, kept a portion of the payments from FEHBP plans for mail-order drugs in excess of the prices they paid manufacturers to acquire the drugs, and in some cases retained a share of the rebates that PBMs negotiated with drug manufacturers. While OPM does not play a role in negotiating prescription drug prices or discounts, it does attempt to limit prescription drug spending through its leverage with participating health plans in annual premium and benefit negotiations. Each year, OPM negotiates benefit and rate proposals with participating plans and announces key policy goals for the program, including those relating to spending control. For example, in preparation for benefit and rate negotiations for the 2007 plan year, OPM encouraged proposals from plans to continue to explore the appropriate substitution of lower cost therapeutic alternatives, such as generic drugs, for higher cost drugs, and the use of tiered formularies or prescription drug lists. OPM also sought proposals from plans to pursue the advantages of specialty pharmacy programs aimed at reducing the high costs of infused and intravenously administered drugs. In preparation for 2010 benefit and rate negotiations, OPM reiterated its desire for proposals from plans to substitute lower cost for higher cost therapeutically equivalent drugs, adding emphasis to using evidence-based health outcome measures. Medicare Part D uses a competitive model similar to FEHBP, while other federal programs use other methods, such as statutorily mandated prices or direct negotiations with drug suppliers. Medicare Part D follows a model similar to the FEHBP by relying on competing prescription drug plans to control prescription drug spending. As with the FEHBP, during an annual open season Part D enrollees may remain enrolled in the same plan or select from among other competing plans based on benefit design, premiums, and other plan features. To attract enrollees, plans have the incentive to offer benefits that will meet beneficiaries’ prescription drug needs at competitive premiums. The larger a plan’s market share, the more leverage it has for obtaining favorable drug prices on behalf of its enrollees and controlling prescription drug spending. As a result, Part D plans vary in their monthly premiums, the annual deductibles, and cost sharing for drugs. Plans also differ in the drugs they cover on their formulary and the pharmacies they use. Part D uses competing sponsors to generate prescription drug savings for beneficiaries, in part through their ability to negotiate prices with drug manufacturers and pharmacies.
To generate these savings, sponsors often contract with PBMs to negotiate rebates with drug manufacturers, discounts with retail pharmacies, and other price concessions on behalf of the sponsor. MMA specifically states that the Secretary of HHS may not interfere with negotiations between sponsors and drug manufacturers and pharmacies. Even though CMS is not involved in price negotiations, it attempts to determine whether beneficiaries are receiving the benefit of negotiated drug prices and price concessions when it calculates the final plan payments. Sponsors must report the price concession amounts to CMS and pass price concessions on to beneficiaries and the program through lower cost sharing, lower drug prices, or lower premiums. Similar to OPM, CMS also negotiates plan design with participating plans and announces key policy goals for the program, including those relating to spending control. For example, in preparation for 2010 benefit and rate negotiations, CMS noted that one of its goals is to establish a more transparent process so that beneficiaries will be able to better predict their out-of-pocket costs. Part D sponsors or their PBMs also use other methods to help contain drug spending similar to FEHBP plans. For example, most plans assign covered drugs to distinct tiers, each of which carries a different level of cost sharing. A plan may establish separate tiers for generic drugs and brand-name drugs—with the generic drug tier requiring a lower level of cost sharing than the brand-name drug tier. Plans may also require utilization management for certain drugs on their formulary. Common utilization management practices include requiring physicians to obtain authorization from the plan prior to prescribing a drug; step therapy, which requires beneficiaries to first try a less costly drug to treat their condition; and imposing quantity limits for dispensed drugs. Additionally, all Part D plans must meet requirements with respect to the extent of their pharmacy networks and the categories of drugs they must cover. Plan formularies generally must cover at least two Part D drugs in each therapeutic category and class, except when there is only one drug in the category or class or when CMS has allowed the plan to cover only one drug. CMS has also designated six categories of drugs of clinical concern for which plans must cover all or substantially all of the drugs. While FEHBP and Medicare Part D use competition between health plans to control prescription drug spending, VA and DOD rely on statutorily mandated prices and discounts and further negotiations with drug suppliers to obtain lower prices for drugs covered on their formularies. VA and DOD have access to a number of prices to consider when purchasing drugs, paying the lowest available. Federal Supply Schedule (FSS) prices. VA’s National Acquisition Center negotiates FSS prices with drug manufacturers, and these prices are available to all direct federal purchasers. FSS prices are intended to be no more than the prices manufacturers charge their most-favored nonfederal customers under comparable terms and conditions. Under federal law, drug manufacturers must list their brand-name drugs on the FSS to receive reimbursement for drugs covered by Medicaid. All FSS prices include a fee of 0.5 percent of the price to fund VA’s National Acquisition Center. Blanket purchase agreements and other national contracts.
Blanket purchase agreements and other national contracts with drug manufacturers allow VA and DOD—either separately or jointly—to negotiate prices below FSS prices. The lower prices may depend on the volume of specific drugs being purchased by particular facilities, such as VA or military hospitals, or on being assigned preferred status on VA’s and DOD’s respective national formularies. In a few cases, individual VA and DOD medical centers have obtained lower prices through local agreements with suppliers than they could through the national contracts, FSS prices, or federal ceiling prices. In addition, VA’s and DOD’s use of formularies, pharmacies, and prime vendors can further affect drug prices and help control drug spending. Both VA and DOD use their own national, standard formulary to obtain more competitive prices from manufacturers that have their drugs listed on the formulary. VA and DOD formularies also encourage the substitution of lower cost drugs determined to be as or more effective than higher cost drugs. VA and DOD use prime vendors, which are preferred drug distributors, to purchase drugs from manufacturers and deliver the drugs to VA or DOD facilities. VA and DOD receive discounts from their prime vendors that also reduce the prices that they pay for drugs. For DOD, the discounts vary among prime vendors and the areas they serve. As of June 2004, VA’s prime vendor discount was 5 percent, while DOD’s discounts averaged about 2.9 percent within the United States. Additionally, similar to FEHBP and Medicare Part D, DOD uses utilization management methods to limit drug spending including prior authorization, dispensing limitations, and higher cost sharing for nonformulary drugs and drugs dispensed at retail pharmacies. Unlike VA and DOD, Medicaid programs do not negotiate drug prices with manufacturers to control prescription drug spending, but reimburse retail pharmacies for drugs dispensed to beneficiaries at set prices. CMS sets aggregate payment limits—known as the federal upper limit (FUL)—for certain outpatient multiple-source prescription drugs. CMS also provides guidelines regarding drug payment. States are to pay pharmacies the lower of the state’s estimate of the drug’s acquisition cost to the pharmacy, plus a dispensing fee, or the pharmacy’s usual and customary charge to the general public; for certain drugs the FUL or the state maximum allowable costs may apply if lower. In addition to these retail pharmacy reimbursements, Medicaid programs also control prescription drug spending through the Medicaid drug rebate program. Under the drug rebate program, drug manufacturers are required to provide quarterly rebates for covered outpatient prescription drugs purchased by state Medicaid programs. Under the rebate program, states take advantage of the prices manufacturers receive for drugs in the commercial market that reflect the results of negotiations by private payers such as discounts and rebates. For brand-name drugs, the rebates are based on two price benchmarks per drug that manufacturers report to CMS: best price and average manufacturer price (AMP). The relationship between best price and AMP determines the unit rebate amount and thus the overall size of the rebate that states receive. The basic unit rebate amount is the greater of two values: the difference between best price and AMP or 15.1 percent of AMP.
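Both the "lower of" payment rule and the brand-name rebate benchmark described above reduce to simple arithmetic. The sketch below is illustrative only, with hypothetical figures and simplified inputs; actual determinations involve additional detail, and the generic-drug rebate rate and the treatment of additional rebates are described in the paragraph that follows.

def pharmacy_reimbursement(est_acquisition_cost, dispensing_fee,
                           usual_and_customary, upper_limit=None):
    # States pay the lower of the estimated acquisition cost plus a dispensing
    # fee, or the pharmacy's usual and customary charge; for certain drugs the
    # FUL or state maximum allowable cost applies if lower.
    payment = min(est_acquisition_cost + dispensing_fee, usual_and_customary)
    if upper_limit is not None:
        payment = min(payment, upper_limit)
    return payment

def basic_unit_rebate_brand(amp, best_price):
    # Brand-name drugs: the greater of (AMP - best price) or 15.1 percent of AMP.
    return max(amp - best_price, 0.151 * amp)

# Hypothetical figures:
print(pharmacy_reimbursement(10.00, 4.50, 16.00, upper_limit=12.00))  # 12.0
print(basic_unit_rebate_brand(amp=100.00, best_price=80.00))          # 20.0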
If the brand-name drug’s AMP rises faster than inflation as measured by the change in the consumer price index, the manufacturer is required to provide an additional rebate to the state Medicaid program. In addition to brand-name drugs, states also receive rebates for generic drugs. For generic drugs, the basic unit rebate amount is 11 percent of the AMP. A state’s rebate for a drug is the product of the unit rebate amount plus any applicable additional rebate amount and the number of units of the drug paid for by the state’s Medicaid program. In addition to the rebates mandated under the drug rebate program, states can also negotiate additional rebates with manufacturers. Like FEHBP and Medicare Part D participating plans, Medicaid programs also use other utilization management methods to control prescription drug spending including prior authorization and utilization review programs, dispensing limitations, and cost-sharing requirements. Mr. Chairman, this concludes my prepared remarks. I would be happy to answer any questions that you or other members of the Subcommittee may have. For future contacts regarding this testimony, please contact John E. Dicken at (202) 512-7114 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Randy DiRosa, Assistant Director; Rashmi Agarwal; William A. Crafton; Martha Kelly; and Timothy Walker made key contributions to this statement. Federal Employees Health Benefits Program: Enrollee Cost Sharing for Selected Specialty Prescription Drugs. GAO-09-517R. Washington, D.C.: April 30, 2009. Medicare Part D Prescription Drug Coverage: Federal Oversight of Reported Price Concessions Data. GAO-08-1074R. Washington, D.C.: September 30, 2008. DOD Pharmacy Program: Continued Efforts Needed to Reduce Growth in Spending at Retail Pharmacies. GAO-08-327. Washington, D.C.: April 4, 2008. DOD Pharmacy Benefits Program: Reduced Pharmacy Costs Resulting from the Uniform Formulary and Manufacturer Rebates. GAO-08-172R. Washington, D.C.: October 31, 2007. Military Health Care: TRICARE Cost-Sharing Proposals Would Help Offset Increasing Health Care Spending, but Projected Savings Are Likely Overestimated. GAO-07-647. Washington, D.C.: May 31, 2007. Federal Employees Health Benefits Program: Premiums Continue to Rise, but Rate of Growth Has Recently Slowed. GAO-07-873T. Washington, D.C.: May 18, 2007. Prescription Drugs: Oversight of Drug Pricing in Federal Programs. GAO-07-481T. Washington, D.C.: February 9, 2007. Prescription Drugs: An Overview of Approaches to Negotiate Drug Prices Used by Other Countries and U.S. Private Payers and Federal Programs. GAO-07-358T. Washington, D.C.: January 11, 2007. Medicaid Outpatient Prescription Drugs: Estimated 2007 Federal Upper Limits for Reimbursement Compared with Retail Pharmacy Acquisition Costs. GAO-07-239R. Washington, D.C.: December 22, 2006. Federal Employees Health Benefits Program: Premium Growth Has Recently Slowed, and Varies among Participating Plans. GAO-07-141. Washington, D.C.: December 22, 2006. Medicaid: States’ Payments for Outpatient Prescription Drugs. GAO-06-69R. Washington, D.C.: October 31, 2005. | Millions of individuals receive prescription drugs through federal programs. 
The increasing cost of prescription drugs has put pressure on federal programs such as the Federal Employees Health Benefits Program (FEHBP), Medicare Part D, the Department of Veterans Affairs (VA), the Department of Defense (DOD), and Medicaid to control drug spending. Prescription drug spending within the FEHBP in particular, which provides health and drug coverage to about 8 million federal employees, retirees, and their dependents, has been a significant contributor to FEHBP cost and premium growth. The Office of Personnel Management (OPM), which administers the FEHBP, predicted that prescription drugs would continue to be a primary driver of program costs in 2009. GAO was asked to describe approaches used by the FEHBP to control prescription drug spending and summarize approaches used by other federal programs. This testimony is based on prior GAO work, including Prescription Drugs: Oversight of Drug Pricing in Federal Programs (GAO-07-481T) and Prescription Drugs: An Overview of Approaches to Negotiate Drug Prices Used by Other Countries and U.S. Private Payers and Federal Programs (GAO-07-358T) and selected updates from relevant literature on drug spending controls prepared by other congressional and federal agencies. FEHBP uses competition among health plans to control prescription drug spending, giving plans an incentive to rein in costs and leverage their market share to obtain favorable drug prices. Most FEHBP plans contract with pharmacy benefit managers (PBMs) to help administer the prescription drug benefit. In a 2003 report, GAO found that the PBMs reduced drug spending by: negotiating rebates with drug manufacturers and passing some of the savings to the plans; obtaining drug price discounts from retail pharmacies and dispensing drugs at lower costs through mail-order pharmacies operated by the PBMs; and using other techniques that reduce utilization of certain drugs or substitute other, less costly drugs. While OPM does not negotiate drug prices or discounts for FEHBP, it attempts to limit spending through annual premium and benefit negotiations with plans, including the encouragement of spending controls such as generic substitution. Other federal programs use a range of approaches to control prescription drug spending. (1) Medicare--the federal health insurance program for the elderly and disabled--offers an outpatient prescription drug benefit known as Medicare Part D that uses competition between plan sponsors and their PBMs to limit drug spending, in part through the ability to negotiate prices and price concessions with drug manufacturers and pharmacies. Plans are required to report these negotiated price concessions to the Centers for Medicare & Medicaid Services (CMS), to help CMS determine the extent to which they are passed on to beneficiaries. (2) VA and DOD pharmacy benefit programs for veterans, active duty military personnel, and others may use statutorily mandated discounts as well as negotiations with drug suppliers to limit drug spending. VA and DOD have access to a number of prices to consider when purchasing drugs--including the Federal Supply Schedule prices that VA negotiates with drug manufacturers--paying the lowest of all available prices. (3) The Medicaid program for low-income adults and children is subject to aggregate payment limits and drug payment guidelines set by CMS. Medicaid does not negotiate drug prices with manufacturers, but reimburses retail pharmacies for drugs dispensed to beneficiaries at set prices.
An important element of controlling Medicaid drug spending is the Medicaid drug rebate program, under which drug manufacturers are required by law to provide rebates for certain drugs covered by Medicaid. Under the rebate program, states take advantage of prices manufacturers receive for drugs in the commercial market that reflect discounts and rebates negotiated by private payers. In addition, Part D, VA and DOD, and Medicaid use techniques similar to those used by FEHBP to limit drug spending, such as generic substitution, prior authorization, utilization review programs, and cost-sharing requirements. |
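The rebate arithmetic described in this testimony is straightforward to express. The following is a minimal sketch of the generic-drug calculation only: the unit rebate amount of 11 percent of AMP, plus any additional or state-negotiated per-unit rebate, multiplied by units. All dollar figures are hypothetical, not actual Medicaid data, and the brand-name formula (with its inflation-based additional rebate) is not modeled here.

```python
def medicaid_generic_rebate(amp: float, units: int,
                            additional_per_unit: float = 0.0) -> float:
    """Rebate owed to a state Medicaid program for a generic drug:
    (unit rebate amount + any additional per-unit rebate) * units paid for.
    For generic drugs, the basic unit rebate amount is 11 percent of AMP."""
    unit_rebate_amount = 0.11 * amp
    return (unit_rebate_amount + additional_per_unit) * units

# Hypothetical example: AMP of $2.00 per unit, 10,000 units paid by the state.
print(medicaid_generic_rebate(amp=2.00, units=10_000))  # 2200.0
```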
You are an expert at summarizing long articles. Proceed to summarize the following text:
Passenger screening is a process by which personnel authorized by the Transportation Security Administration (TSA) inspect individuals and property to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other dangerous item onboard an aircraft or into a sterile area. Passenger screening personnel must inspect individuals for prohibited items at designated screening locations. As shown in figure 1, the four passenger screening functions are X-ray screening of property, walk-through metal detector screening of individuals, hand-wand or pat-down screening of individuals, and physical search of property and trace detection for explosives. Typically, passengers are only subjected to X-ray screening of their carry-on items and screening by the walk-through metal detector. Passengers whose carry-on baggage alarms the X-ray machine, who alarm the walk-through metal detector, or who are designated as selectees—that is, passengers selected by the Computer-Assisted Passenger Prescreening System (CAPPS) or other TSA-approved processes to receive additional screening—are screened by hand-wand or pat-down and have their carry-on items screened for explosives traces or physically searched.

The passenger checkpoint screening system is composed of three elements: the people responsible for conducting the screening of airline passengers and their carry-on items, known as transportation security officers (TSOs); the technology used during the screening process; and the procedures TSOs are to follow to conduct screening. Collectively, these elements help to determine the effectiveness and efficiency of passenger checkpoint screening. TSOs screen all passengers and their carry-on baggage prior to allowing passengers access to their departure gates. There are several positions within TSA that perform and directly supervise passenger screening functions. Figure 2 provides a description of these positions.

In May 2005, we reported on TSA's efforts to train TSOs and to measure and enhance TSO performance. We found that TSA had initiated a number of actions designed to enhance passenger TSO, checked baggage TSO, and supervisory TSO training. However, at some airports TSOs encountered difficulty accessing and completing recurrent (refresher) training because of technological and staffing constraints. We also found that TSA lacked adequate internal controls to provide reasonable assurance that TSOs were receiving legislatively mandated basic and remedial training, and to monitor the status of its recurrent training program. Further, we reported that TSA had implemented and strengthened efforts to collect TSO performance data as part of its overall effort to enhance TSO performance. We recommended that TSA develop a plan for completing the deployment of high-speed Internet/intranet connectivity to all TSA airport training facilities, and establish appropriate responsibilities and other internal controls for monitoring and documenting TSO compliance with training requirements. The Department of Homeland Security (DHS) generally concurred with our recommendations and stated that TSA has taken steps to implement them.

There are typically four types of technology used to screen airline passengers and their carry-on baggage at the checkpoint: walk-through metal detectors, X-ray machines, hand-held metal detectors, and explosive trace detection (ETD) equipment. The President's fiscal year 2007 budget request noted that emerging checkpoint technology will enhance the detection of prohibited items, especially firearms and explosives, on passengers.
As of December 2006, TSA plans to conduct operational tests of three types of passenger screening technologies within the next year. TSA has conducted other tests in the past; for example, during fiscal year 2005, TSA operationally tested document scanners, which use explosive trace detection technology to detect explosives residue on passengers’ boarding passes or identification cards. TSA decided not to expand the use of the document scanner, in part because of the extent to which explosives traces had to be sampled manually. TSA also plans to begin operational tests of technology that would screen bottles for liquid explosives. We are currently evaluating the Department of Homeland Security’s and TSA’s progress in planning for, managing, and deploying research and development programs in support of airport checkpoint screening operations. We expect to report our results in August 2007. TSA has developed checkpoint screening standard operating procedures, which are the focus of this report, that establish the process and standards by which TSOs are to screen passengers and their carry-on items at screening checkpoints. Between April 2005 and December 2005, based on available documentation, TSA deliberated 189 proposed changes to passenger checkpoint screening SOPs, 92 of which were intended to modify the way in which passengers and their carry-on items are screened. TSA issued six versions of the passenger checkpoint screening SOPs during this period. TSA modified passenger checkpoint screening SOPs to enhance the traveling public’s perception of the screening process, improve the efficiency of the screening process, and enhance the detection of prohibited items and suspicious persons. As shown in table 1, 48 of the 92 proposed modifications to passenger checkpoint screening SOPs were implemented, and the types of modifications made or proposed generally fell into one of three categories—customer satisfaction, screening efficiency, and security. TSA used various processes between April 2005 and December 2005 to modify passenger checkpoint screening SOPs, and a variety of factors guided TSA’s decisions to modify SOPs. TSA’s processes for modifying SOPs generally involved TSA staff recommending proposed modifications, reviewing and commenting on proposed modifications, and TSA senior leadership making final decisions as to whether proposed modifications should be implemented. During our 9-month review period, TSA officials considered 92 proposed modifications to the way in which passengers and their carry-on items were screened, and 48 were implemented. TSA officials proposed SOP modifications based on risk factors (threat and vulnerability information), day-to-day experiences of airport staff, and concerns and complaints raised by passengers. TSA then made efforts to balance security, efficiency, and customer service when deciding which proposed SOP modifications to implement. Consistent with our prior work that has shown the importance of data collection and analyses to support agency decision making, TSA conducted data collection and analysis for certain proposed SOP modifications that were tested before they were implemented at all airports. Nevertheless, we found that TSA could improve its data collection and analysis to assist the agency in determining whether the proposed procedures would enhance detection or free up TSO resources, when intended. 
In addition, TSA did not maintain complete documentation of proposed SOP modifications; therefore, we could not fully assess the basis for proposed SOP modifications or the reasons why certain proposed modifications were not implemented. TSA officials acknowledged that it is beneficial to maintain documentation on the reasoning behind decisions to implement or reject SOP modifications deemed significant. Proposed SOP modifications were submitted and reviewed under two processes during our 9-month review period, and for each process, TSA senior leadership made the final decision as to whether the proposed modifications would be implemented. One of the processes TSA used to modify passenger checkpoint screening SOPs involved TSA field staff or headquarters officials, and, to a lesser extent, TSA senior leadership, suggesting ways in which passenger checkpoint screening SOPs could be modified. These suggestions were submitted through various mechanisms, including electronic mail and an SOP panel review conducted by TSA airport personnel. (These methods are described in more detail in app. II.) Eighty-two of the 92 proposed modifications were considered under this process. If TSA officials determined, based on their professional judgment, that the recommended SOP modifications—whether from headquarters or the field—merited further consideration, or if a specific modification was proposed by TSA senior leadership, the following chain of events occurred: First, the procedures branch of the Office of Security Operations drafted SOP language for each of the proposed modifications. Second, the draft language for each proposed modification was disseminated to representatives of various TSA divisions for review, and the language was revised as needed. Third, TSA officials tested proposed modifications in the airport operating environment if they found it necessary to: assess the security impact of the proposed modification, evaluate the impact of the modification on the amount of time taken for passengers to clear the checkpoint, measure the impact of the proposed modification on passengers and industry partners, or determine training needs created by the proposed modification. Fourth, the revised SOP language for proposed modifications was sent to the heads of several TSA divisions for comment. Fifth, considering the comments of the TSA division heads, the head of the Office of Security Operations or other TSA senior leadership made the final decision as to whether proposed modifications would be implemented. Another process for modifying passenger checkpoint screening SOPs during our 9-month review period was carried out by TSA’s Explosives Detection Improvement Task Force. The task force was established in October 2005 by the TSA Assistant Secretary to respond to the threat of improvised explosive devices (IED) being carried through the checkpoint. The goal of the task force was to apply a risk-based approach to screening passengers and their baggage in order to enhance TSA’s ability to detect IEDs. The task force developed 13 of the 92 proposed SOP modifications that were considered by TSA between April 2005 and December 2005. The task force solicited and incorporated feedback from representatives of various TSA divisions on these proposed modifications and presented them to TSA senior leadership for review and approval. 
TSA senior leadership decided that 8 of the 13 proposed modifications should be operationally tested—that is, temporarily implemented in the airport environment for the purposes of data collection and evaluation—to better inform decisions regarding whether the proposed modifications should be implemented. Following the testing of these proposed modifications in the airport environment, TSA senior leadership decided to implement 7 of the 8 operationally tested changes. (The task force’s approach to testing these procedures is discussed in more detail below.) Following our 9-month period of review, the changes that TSA made to its passenger checkpoint screening SOPs in response to the alleged August 2006 liquid explosives terror plot were decided upon by DHS and TSA senior leadership, with some input from TSA field staff, aviation industry representatives, and officials from other federal agencies. Based on available documentation, risk factors (i.e., threats to commercial aviation and vulnerability to those threats), day-to-day experiences of airport staff, and complaints and concerns raised by passengers were the basis for TSA staff and officials proposing modifications to passenger checkpoint screening SOPs. Fourteen of the 92 procedure modifications recommended by TSA staff and officials were based on reported or perceived threats to commercial aviation, and existing vulnerabilities to those threats. For example, the Explosives Detection Improvement Task Force proposed SOP modifications based on threat reports developed by TSA’s Intelligence and Analysis division. Specifically, in an August 2005 civil aviation threat assessment, the division reported that terrorists are likely to seek novel ways to evade U.S. airport security screening. Subsequently, the task force proposed that the pat-down procedure performed on passengers selected for additional screening be revised to include not only the torso area, which is what the previous pat-down procedure entailed, but additional areas of the body such as the legs. The August 2005 threat assessment also stated that terrorists may attempt to carry separate components of an IED through the checkpoint, then assemble the components while onboard the aircraft. To address this threat, the task force proposed a new procedure to enhance TSOs’ ability to search for components of improvised explosive devices. According to TSA officials, threat reports have also indicated that terrorists rely on the routine nature of security measures in order to plan their attacks. To address this threat, the task force proposed a procedure that incorporated unpredictability into the screening process by requiring designated TSOs to randomly select passengers to receive additional search procedures. Following our 9-month review period, TSA continued to use threat information as the basis for proposed modifications to passenger checkpoint screening SOPs. In August 2006, TSA proposed modifications to passenger checkpoint screening SOPs after receiving threat information regarding an alleged terrorist plot to detonate liquid explosives onboard multiple aircraft en route from the United Kingdom to the United States. Regarding vulnerabilities to reported threats, based on the results of TSA’s own covert tests (undercover, unannounced tests), TSA’s Office of Inspection recommended SOP modifications to enhance the detection of explosives at the passenger screening checkpoint. 
TSA officials also proposed modifications to passenger checkpoint screening SOPs based on their professional judgment regarding perceived threats to aviation security. For example, a Federal Security Director (FSD) recommended changes to the screening of funeral urns based on a perceived threat. In some cases, proposed SOP modifications appeared to reflect threat information analyzed by TSA officials. For example, TSOs are provided with Threat in the Spotlight, a weekly report that identifies new threats to commercial aviation, examples of innovative ways in which passengers may conceal prohibited items, and pictures of items that may not appear to be prohibited items but actually are. TSOs are also provided relevant threat information during briefings that take place before and after their shifts. In addition, FSDs are provided classified intelligence summaries on a daily and weekly basis, as well as monthly reports of suspicious incidents that occurred at airports nationwide.

TSA's consideration of threat and vulnerability—through analysis of current documentation and by exercising professional judgment—is consistent with a risk-based decision-making approach. As we have reported previously, and DHS and TSA have advocated, a risk-based approach, as applied in the homeland security context, can help to more effectively and efficiently prepare defenses against acts of terrorism and other threats.

TSA headquarters and field staff also based proposed SOP modifications—specifically, 36 of the 92 proposed modifications—on experience in the airport environment. For example, TSA headquarters officials conduct reviews at airports to identify best practices and deficiencies in the checkpoint screening process. During one of these reviews, headquarters officials observed that TSOs were not fully complying with the pat-down procedure. After discussions with TSOs, TSA headquarters officials determined that the way in which TSOs were conducting the procedure was more effective. In addition, TSA senior leadership, after learning that small airports had staffing challenges that precluded them from ensuring that passengers are patted down by TSOs of the same gender, proposed that opposite-gender pat-down screening be allowed at small airports.

Passenger complaints and concerns shared with TSA also served as a basis for proposed modifications during our 9-month review period. Specifically, of the 92 proposed SOP modifications considered during this period, TSA staff and officials recommended 29 modifications based on complaints and concerns raised by passengers. For example, TSA headquarters staff recommended allowing passengers to hold their hair while being screened by the Explosives Trace Portal, after receiving complaints from passengers about eye injuries from hair blowing in their eyes and hair being caught in the doors of the portal.

When deciding whether to implement proposed SOP modifications, TSA officials also made efforts to balance the impact of proposed modifications on security, efficiency, and customer service. TSA's consideration of these factors reflects the agency's mission to protect transportation systems while also ensuring the free movement of people and commerce. As previously discussed, TSA sought to improve the security of the commercial aviation system by modifying the SOP for conducting the pat-down search. (TSA identified the modified pat-down procedure as the "bulk-item" pat-down.)
When deciding whether to implement the proposed modification, TSA officials considered not only the impact that the bulk-item pat-down procedure would have on security, but also the impact that the procedure would have on screening efficiency and customer service. For example, TSA officials determined that the bulk-item pat-down procedure would not significantly affect efficiency because it would only add a few seconds to the screening process.

Following our 9-month review period, TSA continued to make efforts to balance security, efficiency, and customer service when deciding whether to implement proposed SOP modifications, as illustrated by TSA senior leadership's deliberation on proposed SOP modifications in response to the alleged August 2006 liquid explosives terrorist plot. TSA modified the passenger checkpoint screening SOP four times between August 2006 and November 2006 in an effort to defend against the threat of terrorists' use of liquid explosives onboard commercial aircraft. While the basis for these modifications was to mitigate risk, as shown in table 2, TSA senior leadership considered several other factors when deciding whether to implement the modifications. As TSA senior leadership obtained more information about the particular threat posed by the liquid explosives through tests conducted by DHS's Science and Technology Directorate and the FBI, TSA relaxed the restrictions to allow passengers to carry liquids, gels, and aerosols onboard aircraft in 3-fluid-ounce bottles—and as of November 2006, 3.4-fluid-ounce bottles—that would easily fit in a quart-sized, clear plastic, zip-top bag. TSA senior leadership identified both benefits and drawbacks to this SOP modification, but determined that the balance of security, efficiency, and customer service that would result from these SOP changes was appropriate.

As shown in table 2, TSA officials recognize that there are security drawbacks—or vulnerabilities—associated with allowing passengers to carry even small amounts of liquids and gels onboard aircraft. For example, two or more terrorists could combine small amounts of liquid explosives after they pass through the checkpoint to generate an amount large enough to possibly cause catastrophic damage to an aircraft. However, TSA officials stated that doing so would be logistically challenging given the physical harm that the specific explosives could cause to the person handling them, and that suspicion among travelers, law enforcement officials, and airport employees would likely be raised if an individual was seen combining the liquid contents of small containers stored in two or more quart-sized plastic bags. TSA officials stated that at the time of the modifications to the liquids, gels, and aerosols screening procedures, there was consensus among explosives detection experts, both domestically and abroad, regarding TSA's assumptions about how the explosives could be used and the damage they could cause to an aircraft. TSA officials also stated that after reviewing the intelligence information related to the alleged August 2006 London terror plot—particularly with regard to the capability and intent of the terrorists—TSA determined that allowing small amounts of liquids, gels, and aerosols onboard aircraft posed an acceptable level of risk to the commercial aviation system.
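The container rule that emerged from these deliberations can be stated compactly. The sketch below is an editorial illustration only, not TSA's screening logic: it uses total volume as a rough stand-in for whether the containers physically fit in a quart-sized bag, and it ignores the exempted items (such as baby formula and medication) discussed next.

```python
CONTAINER_LIMIT_FL_OZ = 3.4  # per-container limit as of November 2006
BAG_CAPACITY_FL_OZ = 32.0    # quart-sized bag; total volume is a rough proxy for fit

def liquids_allowed(container_sizes_fl_oz: list) -> bool:
    """Return True if every container is within the per-container limit and
    the set of containers plausibly fits one quart-sized, zip-top bag."""
    if any(size > CONTAINER_LIMIT_FL_OZ for size in container_sizes_fl_oz):
        return False
    return sum(container_sizes_fl_oz) <= BAG_CAPACITY_FL_OZ

print(liquids_allowed([3.4, 3.0, 2.5]))  # True
print(liquids_allowed([5.0]))            # False: single container over 3.4 fl oz
```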
Moreover, TSA officials acknowledged that there are vulnerabilities with allowing passengers to carry liquids that are exempted from the 3.4-fluid-ounce limit—such as baby formula and medication—onboard aircraft. TSA officials stated that the enhancements TSA is making to the various other layers of aviation security will help address the security vulnerabilities identified above. For example, TSA has increased explosives detection canine patrols, deployed Federal Air Marshals on additional international flights, increased random screening of passengers at boarding gates, and increased random screening of airport and TSA employees who pass through the checkpoint. TSA also plans to expand implementation of its Screening Passengers by Observation Technique (SPOT) to additional airports. SPOT involves specially trained TSOs observing the behavior of passengers, resolving any suspicious behavior through casual conversation with passengers, and referring suspicious passengers to selectee screening. TSA intends for SPOT to provide a flexible, adaptable, risk-based layer of security that can be deployed to detect potentially high-risk passengers based on certain behavioral cues.

While professional judgment regarding risk factors, experience in the operating environment, and customer feedback have guided many of the decisions TSA leadership made about which screening procedures to implement, TSA also sought to use empirical data as a basis for evaluating the impact some screening changes could have on security and TSO resources. The TSA Assistant Secretary stated in December 2005 that TSA sought to make decisions about screening changes based on data and metrics—a practice he said TSA would continue. The use of data and metrics to inform TSA's decision making regarding implementing proposed screening procedures is consistent with our prior work that has shown the importance of data collection and analyses to support agency decision making.

Between October 2005 and January 2006, TSA's Explosives Detection Improvement Task Force sought to collect data as part of an effort to test the impact of seven proposed procedures at selected airports, as noted earlier. These seven proposed procedures were selected because officials believed they would have a significant impact on how TSOs perform daily screening functions, TSO training, and customer acceptability. According to TSA's chief of security operations, the purpose of testing these procedures in the airport environment was to ensure that TSA was "on the right path" in implementing them. These particular procedures were considered by senior TSA officials as especially important for enhancing the detection of explosives and for deterring terrorists from attempting to carry out an attack. According to TSA, some of the proposed procedures could also free up TSOs so that they could spend more time on procedures for detecting explosives and less time on procedures associated with low security risks, such as identifying small scissors in carry-on bags. The seven proposed procedures tested by the task force reflect both new procedures and modifications to existing procedures, as shown in table 3.
Our analysis of TSA's data collection and data analysis for the seven procedures that were operationally tested identified several problems that affected TSA's ability to determine whether these procedures, as designed and implemented by TSA, would have the intended effect—to enhance the detection of explosives during the passenger screening process or to free up resources so that explosives detection procedures could be implemented. Although the deterrence of persons intending to do harm is also an intended effect of some proposed SOP modifications, TSA officials said that it is difficult to assess the extent to which implementation of proposed procedures would deter terrorists. The Office of Management and Budget has also acknowledged the difficulty in measuring deterrence, particularly for procedures intended to prevent acts of terrorism. While we agree that measuring deterrence is difficult, opportunities exist for TSA to strengthen its analyses to help provide information on whether the proposed procedures would enhance detection or free up TSO resources, when intended.

Screening Passengers by Observation Technique. TSA officials stated that SPOT is intended to both deter terrorists and identify suspicious persons who intend to cause harm while on an aircraft. While we recognize that it is difficult to assess the extent to which terrorists are deterred by the presence of designated TSOs conducting behavioral observations at the checkpoint, we believe that there is an opportunity to assess whether SPOT contributes to enhancing TSA's ability to detect suspicious persons who may intend to cause harm on an aircraft. One factor that may serve as an indicator that a person intends to do harm on an aircraft is whether that individual is carrying a prohibited item. TSA collected and assessed data at 14 airports for various time periods on the number of prohibited items found on passengers who were targeted under SPOT and referred to secondary screening or law enforcement officials. However, these data collection efforts alone did not enable TSA to determine whether the detection of prohibited items would be enhanced if SPOT were implemented because TSA had no means of comparing whether persons targeted by SPOT were more likely to carry prohibited items than persons not targeted by SPOT. To obtain this information, the task force would have had to collect data on the number of passengers not targeted by SPOT who had prohibited items on them. This information could be used to determine whether a greater percentage of passengers targeted under SPOT are found to have prohibited items than those passengers who are not targeted by SPOT, which could serve as one indicator of the extent to which SPOT would contribute to the detection of passengers intending to cause harm on an aircraft.

Although it has not yet done so, it may be possible for TSA to evaluate the impact of SPOT on identifying passengers carrying prohibited items. There is precedent in other federal agencies for evaluating the security benefits of similar procedures. For instance, U.S. Customs and Border Protection (CBP) within DHS developed the Compliance Examination (COMPEX) system to evaluate the effectiveness of its procedures for selecting international airline passengers for secondary screening. Specifically, COMPEX compares the percentage of targeted passengers on which prohibited items are found to the percentage of randomly selected passengers on which prohibited items are found.
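A COMPEX-style comparison reduces to a standard two-proportion hypothesis test. The sketch below illustrates the statistic with made-up counts; it is not CBP's actual implementation, and the numbers are not CBP or TSA data.

```python
import math

def two_proportion_z(hits_targeted: int, n_targeted: int,
                     hits_random: int, n_random: int):
    """One-sided two-proportion z-test: is the hit rate among targeted
    passengers significantly higher than among randomly selected ones?"""
    p1 = hits_targeted / n_targeted
    p2 = hits_random / n_random
    # Pooled proportion under the null hypothesis that the two rates are equal.
    pooled = (hits_targeted + hits_random) / (n_targeted + n_random)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_targeted + 1 / n_random))
    z = (p1 - p2) / se
    p_value = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail normal probability
    return p1, p2, z, p_value

# Made-up counts: 9% of targeted vs. 6% of randomly selected passengers.
p1, p2, z, p = two_proportion_z(90, 1000, 60, 1000)
print(f"targeted {p1:.1%}, random {p2:.1%}, z = {z:.2f}, p = {p:.4f}")
# Targeting would be judged effective here if p1 > p2 and p is below, say, 0.05.
```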
The premise is that targeting is considered to be effective if a greater percentage of targeted passengers are found to possess prohibited items than the percentage of randomly selected passengers, and the difference between the two percentages is statistically significant. CBP officials told us in May 2006 that they continue to use COMPEX to assess the effectiveness of their targeting of international airline passengers.

When asked about using a method such as COMPEX to assess SPOT, TSA officials stated that CBP and TSA are seeking to identify different types of threats through their targeting programs. CBP, through its targeting efforts, is attempting to identify passengers with contraband and unauthorized aliens, whereas TSA, through SPOT, is attempting to identify potential high-risk passengers. Additionally, in commenting on a draft of this report, DHS stated that, according to TSA, the possession of a prohibited item is not a good measure of SPOT effectiveness because an individual may not intend to use a prohibited item to cause harm or hijack an aircraft. While it may be possible for a terrorist to cause harm or hijack an aircraft without using a prohibited item, as in the case of the September 11 terrorist attacks, other terrorist incidents and threat information indicate that terrorists who carried out or planned to carry out an attack on a commercial aircraft intended to do so by using prohibited items, including explosives and weapons. Therefore, we continue to believe that comparing the percentage of individuals targeted and not targeted under SPOT on which prohibited items are found could be one of several potential indicators of the effectiveness of SPOT. Such a measure may be most useful with regard to the prohibited items that could be used to bring down or hijack an aircraft. TSA officials stated that the agency agrees in principle that measuring SPOT effectiveness, if possible, may provide valuable insights.

Unpredictable Screening Process, Bulk-Item Pat-Down Search, and IED Component Search. We found that the task force also could have strengthened its efforts to evaluate the security impact of other proposed procedures—specifically, the unpredictable screening process (USP), the bulk-item pat-down search, and the IED component search. For all three of these procedures, the task force did not collect any data during the operational testing that would help determine whether they would enhance detection capability. TSA officials told us that they did not collect these data because they had a limited amount of time to test the procedures; they had to make SOP modifications quickly as part of the agency's efforts to focus on higher threats, such as explosives, and to meet the TSA Assistant Secretary's goal of implementing the SOP modifications before the 2005 Thanksgiving holiday travel season. Nevertheless, TSA officials acknowledged the importance of evaluating whether proposed screening procedures, including USP and the bulk-item pat-down, would enhance detection capability. TSA officials stated that covert testing has been used to assess TSOs' ability to detect prohibited items, but covert testing was not implemented during operational testing of proposed procedures. Office of Inspection officials questioned whether covert testing could be used to test, exclusively, the security benefit of proposed procedures, because TSO proficiency and the capability of screening technology also factor into whether threat objects are detected during covert tests.
Four of the five aviation security experts we interviewed acknowledged this limitation but stated that covert testing is the best way to assess the effectiveness of passenger checkpoint screening. In commenting on a draft of this report, DHS stated that, according to TSA, USP is intended to disrupt terrorists’ planning of an attack by introducing unpredictability into the passenger checkpoint screening process, and tools such as covert testing could not be used to measure the effectiveness of USP to this end. While we agree that covert testing may not be a useful tool to assess the impact USP has on disrupting terrorists’ plans and deterring terrorists from attempting to carry out an attack, we continue to believe that covert testing could have been used to assess whether USP would have helped to enhance detection capability during the passenger screening process, which TSA officials stated was another intended result of USP. Although TSA did not collect data on the security impact of the USP and bulk-item pat-down procedures, the task force did collect data on the impact these procedures had on screening efficiency—the time required to perform procedures—and on the reaction of TSOs, FSDs, and passengers to the proposed procedures. These data indicated that the USP procedure took less time, on average, for TSOs to conduct than the procedure it replaced (the random continuous selectee screening process); the revised pat-down procedure took TSOs about 25 seconds to conduct; and that passengers generally did not complain about the way in which both procedures were conducted. With respect to operational testing of the IED component search procedure, TSA was unable to collect any data during the testing period because no IEDs were detected by TSOs at the airports where the testing took place. As with the USP and bulk-item pat-down procedures, TSA could have conducted covert tests during the operational testing period to gather simulated data for the IED search procedure, in the absence of actual data. Selectee Screening Changes and Threat Area Search. Recognizing that some of the proposed procedures intended to enhance detection would require additional TSO resources, TSA implemented several measures aimed collectively at freeing up TSOs’ time so that they could focus on conducting more procedures associated with higher threats— identifying explosives and suspicious persons. For example, TSA modified the selectee screening procedure and the procedure for searching carry-on items—the threat area search—in order to reduce screening time. During an informal pilot of these proposed procedures at 3 airports in November 2005, TSA determined that the proposed selectee screening procedure would reduce search time of each selectee passenger, on average, by about 1.17 minutes at these airports. TSA also determined through this study that the proposed threat area search, on average, took 1.83 minutes to conduct at the participating airports, as compared to the existing target object search that took, on average, 1.89 minutes, and the existing whole bag search that took, on average, 2.37 minutes. Prohibited Items List Changes. Another measure TSA implemented to free up TSO resources to focus on higher threats involved changes to the list of items prohibited onboard aircraft. 
According to TSA, TSOs were spending a disproportionate amount of TSA's limited screening resources searching for small scissors and small tools, even though, based on threat information and TSA officials' professional judgment, such items no longer posed a significant security risk given the multiple layers of aviation security. TSA officials surmised that by not having to spend time and resources physically searching passengers' bags for low-threat items, such as small scissors and tools, TSOs could focus their efforts on implementing more effective and robust screening procedures that can be targeted at screening for explosives.

To test its assumption that a disproportionate amount of TSO resources was being spent searching for small scissors and tools, TSA collected information from several sources. First, TSA reviewed data maintained in TSA's Performance Management Information System (PMIS), which showed that during the third and fourth quarters of fiscal year 2005 (a 6-month period), TSOs confiscated a total of about 1.8 million sharp objects other than knives or box cutters. These sharp objects constituted 19 percent of all prohibited items confiscated at the checkpoint. Second, based on information provided by FSDs, TSOs, and other screening experts, TSA determined that scissors constituted a large majority of the total number of sharp objects found at passenger screening checkpoints. Third, TSA headquarters officials searched through confiscated items bins at 4 airports and found that most of the scissors that were confiscated had blades less than 4 inches in length. Based on these collective efforts, TSA concluded that a significant number of items found at the checkpoint were low-threat, easily identified items, such as small scissors and tools, and that a disproportionate amount of time was spent searching for these items—time that could have been spent searching for high-threat items, such as explosives. TSA also concluded that because TSOs can generally easily identify scissors, if small scissors were no longer on the prohibited items list, TSOs could avoid conducting time-consuming physical bag searches to locate and remove these items.

While we commend TSA's efforts to supplement professional judgment with data and metrics in its decision to modify passenger checkpoint screening procedures, TSA did not conduct the necessary analysis of the data collected to determine the extent to which the removal of small scissors and tools from the prohibited items list could free up TSO resources. Specifically, TSA did not analyze the data on sharp objects confiscated at the checkpoint along with other relevant factors, such as the amount of time taken to search for scissors and the number of TSOs at the checkpoint conducting these searches, to determine the extent to which TSO resources could actually be freed up. Based on our analysis of TSA's data for the 6-month period, where we considered these other relevant factors, we determined that TSOs spent, on average, less than 1 percent of their time—about 1 minute per day over the 6-month period—searching for the approximately 1.8 million sharp objects, other than knives and box cutters, that were found at passenger screening checkpoints between April 2005 and September 2005.
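The kind of analysis GAO describes here combines confiscation counts with search time and staffing. A back-of-envelope sketch follows; the 1.8 million figure comes from the PMIS data above, while the per-item search time, staffing level, and shift length are hypothetical placeholders, not TSA figures.

```python
SHARP_OBJECTS_FOUND = 1_800_000  # PMIS count, April-September 2005 (from above)
DAYS = 183                       # roughly the 6-month period
SEC_PER_SEARCH = 60              # assumed time to locate and remove one item
CHECKPOINT_TSOS = 20_000         # assumed number of checkpoint TSOs
SHIFT_MIN = 8 * 60               # assumed 8-hour shift

total_search_min = SHARP_OBJECTS_FOUND * SEC_PER_SEARCH / 60
per_tso_min_per_day = total_search_min / DAYS / CHECKPOINT_TSOS
print(f"{per_tso_min_per_day:.2f} min per TSO per day "
      f"({per_tso_min_per_day / SHIFT_MIN:.2%} of a shift)")
# ~0.49 min per TSO per day under these assumptions -- consistent in spirit
# with GAO's "less than 1 percent of their time, about 1 minute per day."
```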
If the average amount of time TSOs spent searching for sharp objects per day over a 6-month period was less than 1 minute per TSO, and sharp objects constituted just 19 percent of all prohibited items confiscated at checkpoints over this period, then it may not be accurate to assume that no longer requiring TSOs to search for small scissors and tools would significantly contribute to TSA’s efforts to free up TSO resources that could be used to implement other security measures. To further support its assertion that significant TSO resources would be freed up as a result of removing small scissors and tools from the list of prohibited items, TSA officials cited the results of an informal study conducted in October 2005—which was intended to provide a general idea of the types of prohibited items TSOs were finding as a result of their searches and how long various types of searches were taking TSOs to conduct. Specifically, according to the study conducted at 9 airports over a 14-day period, TSA determined that 24 percent of items found during carry-on bag searches were scissors. However, based on data regarding the number of bags searched, removing scissors may not significantly contribute to TSA’s efforts to free up TSO resources. TSA conducted additional informal studies 30, 60, and 90 days after the prohibited items list change went into effect to determine whether the change had resulted in reductions in the percentage of carry-on bags that were searched and overall screening time. However, we identified limitations in TSA’s methodology for conducting these studies. In February 2007, a TSA official stated that some FSDs interviewed several TSOs after the prohibited items list change went into effect, and these TSOs reported that the change did save screening time. However, TSA could not identify how many TSOs were interviewed, at which airports the TSOs were located, and how the TSOs were selected for the interview; nor did TSA document the results of these interviews. TSA also did not use random selection or representative sampling when determining which TSOs should be interviewed. Therefore, the interview results cannot be generalized. TSA officials acknowledged that they could have made some improvements in the various analyses they conducted on the prohibited items list change. However, they stated that they had to make SOP modifications quickly as part of the agency’s efforts to focus on higher threats, such as explosives, and the TSA Assistant Secretary’s goal of implementing the SOP modifications before the 2005 Thanksgiving holiday travel season. Additionally, officials stated that they continue to view their decision to remove small scissors and tools from the prohibited items list as sound, particularly because they believe small scissors and tools do not pose a significant threat to aviation security. TSA officials also stated that they believe the prohibited items list change would free up resources based on various sources of information, including the professional judgment of TSA airport staff, and their analysis of PMIS data on prohibited items confiscated at checkpoints. The TSA Assistant Secretary told us that even if TSA determined that the proposed SOP modifications would not free up existing TSO resources to conduct explosives detection procedures, he would have implemented the modifications anyway considering the added security benefit of the explosives detection procedures. 
Additionally, a TSA headquarters official responsible for airport security operations stated that to help strengthen the agency’s analysis of future proposed SOP changes, the agency plans to provide the Explosives Detection Improvement Task Force with the necessary resources to help improve its data collection and analysis. An additional measure intended to free up TSO resources involved changes to CAPPS rules. TSA’s assumption is that these changes could allow TSOs who were normally assigned to selectee screening duties to be reassigned to new procedures, such as USP, which may require new screening positions. (Both USP and SPOT require TSO positions: USP requires one screening position for every two screening lanes, while SPOT typically uses more than one screening position per ticket checker at the checkpoint.) According to FSDs we interviewed, the changes made to the prohibited items list and the CAPPS rules had not freed up existing TSO resources, as intended. Specifically, as of August 2006, 13 of 19 FSDs we interviewed at airports that tested USP or SPOT said that TSO resources were not freed up as a result of these changes. In addition, 9 of the 19 FSDs said that in order to operationally test USP or SPOT, TSOs had to work overtime, switch from other functions (such as checked baggage screening), or a screening lane had to be closed. TSA’s Explosives Detection Improvement Task Force reported that nearly all of the FSDs at airports participating in operational testing of USP believed that the procedure had security value, though the task force also reported that 1 FSD dropped out of the operational testing program for USP due to insufficient staffing resources and another could only implement the procedure during off-peak travel periods. Additionally, most of the FSDs we interviewed stated that the changes to the prohibited items list and CAPPS rules did not free up TSOs, as intended, to better enable TSOs to take required explosives detection training. Specifically, as of August 2006, of the 19 FSDs we interviewed at airports that implemented USP and SPOT, 13 said that they did not experience more time to conduct explosives training as a result of changes to the prohibited items list and CAPPS rules. Three of the 13 FSDs said that they used overtime to enable TSOs to take the explosives training. As previously stated, the TSA Assistant Secretary stated that even if existing TSO resources are not freed up to conduct explosives detection procedures, longer lines and wait times at airport checkpoints are an acceptable consequence, considering their added security benefit. With regard to explosives training, he stated that it is acceptable for FSDs to use overtime or other methods to ensure that all TSOs participated in the required explosives detection training. He further noted that even if one screening change does not free up TSO resources, all of the changes intended to accomplish this—when taken together—should ultimately help to redirect TSO resources to where they are most needed. TSA’s efforts to add data and metrics to its tool kit for evaluating the impact of screener changes are a good way to supplement the use of professional judgment and input from other experts and sources in making decisions about modifying screening procedures. However, TSA’s methods for data collection and analysis could be improved. We recognize the challenges TSA faces in evaluating the effectiveness of proposed procedures, particularly when faced with time pressures to implement procedures. 
However, by attempting to evaluate the potential impact of screening changes on security and resource availability, TSA could help support its decision making on how best to allocate limited TSO resources and ensure that the ability to detect explosives and other high-threat objects during the passenger screening process is enhanced.

While we were able to assess TSA's reasoning behind certain proposed SOP modifications considered during our review period, our analysis was limited because TSA did not maintain complete documentation of proposed SOP modifications. Documentation of the reasoning behind decisions to implement or reject proposed modifications was maintained in various formats, including spreadsheets developed by TSA officials, internal electronic mail discussions among TSA officials, internal memorandums, briefing slides, and reports generated based on the results of operational testing. TSA did improve its documentation of the proposed SOP modifications that were considered during the latter part of our 9-month review period. Specifically, the documentation for the SOP modifications proposed under the Explosives Detection Improvement Task Force provided more details regarding the basis of the proposed modifications and the reasoning behind decisions to implement or reject the proposed modifications.

Of the 92 proposed SOP modifications considered during our 9-month review period, TSA documented the basis for 72. More specifically, TSA documented the basis—that is, the information, experience, or event that encouraged TSA officials to propose an SOP modification—for 35 of the 48 that were implemented and for 37 of the 44 that were not implemented. However, TSA only documented the reasoning behind TSA senior leadership's decisions to implement or not implement proposed SOP modifications for 43 of 92 proposed modifications. According to TSA officials, documentation that explains the basis for recommending proposed modifications can also be used to explain TSA's reasoning behind its decisions to implement proposed modifications. However, the basis on which an SOP modification was proposed cannot always be used to explain TSA senior leadership's decisions not to implement a proposed modification. In these cases, additional documentation would be needed to understand TSA's decision making. Yet TSA only documented the reasoning behind its decisions for about half (26 of 44) of the proposed modifications that were not implemented.

TSA officials told us that they did not intend to document all SOP modifications that were proposed during our review period. Officials stated that, in some cases, the reasoning behind TSA's decision to implement or not implement a proposed SOP modification is obvious and documentation is not needed. TSA officials acknowledged that it is beneficial to maintain documentation on the reasoning behind decisions to implement or reject proposed SOP modifications deemed significant, particularly given the organizational restructuring and staff turnover within TSA. However, TSA officials could not identify which of the 92 proposed SOP modifications they consider to be significant because they do not categorize proposed modifications in this way.

Our standards for governmental internal controls and associated guidance suggest that agencies should document key decisions in a way that is complete and accurate, and that allows decisions to be traced from initiation, through processing, to after completion.
These standards further state that documentation of key decisions should be readily available for review. Without documenting this type of information, TSA cannot always justify significant modifications to passenger checkpoint screening procedures to internal or external stakeholders, including Congress and the traveling public. In addition, considering the ongoing personnel changes, without sufficient documentation, future decision makers in TSA may not know on what basis the agency historically made decisions to develop new or revise existing screening procedures.

Following our 9-month review period, TSA continued to make efforts to improve documentation of agency decision making, as evidenced by decisions regarding the August 2006 and September 2006 SOP modifications related to the screening of liquids and gels. For example, TSA senior leadership evaluated the actions taken by the agency between August 7 and August 13, 2006, in response to the alleged liquid explosives terrorist plot, in order to identify lessons learned and improve the agency's reaction to future security incidents. As a result of this evaluation, as shown in table 4, TSA made several observations and recommendations for improving documentation of agency decision making when considering modifications to screening procedures. Documentation of TSA's decisions regarding the September 26, 2006, modifications to the liquid screening procedures showed that TSA had begun implementing the recommendations in table 4. TSA's documentation identified the various proposed liquid screening procedures that were considered by TSA, the benefits and drawbacks of each proposal, and the rationale behind TSA's final decision regarding which proposal to implement. The documentation also tracked the timing of TSA's deliberations of each of the proposed liquid screening procedures. However, the documentation of TSA's decisions was not always presented in a standard format, nor were the origin and use of supporting documentation always identified. TSA officials acknowledged that documentation of the September 2006 SOP modifications could have been improved and stated that efforts to improve documentation, through implementation of the recommendations in table 4, will continue to be a high priority.

TSA implemented a performance accountability system in part to strengthen its monitoring of TSO compliance with passenger checkpoint screening SOPs. Specifically, in April 2006, TSA implemented the Performance Accountability and Standards System (PASS) to assess the performance of all TSA employees, including TSOs. According to TSA officials, PASS was developed in response to our 2003 report that recommended that TSA establish a performance management system that makes meaningful distinctions in employee performance, and in response to input from TSA airport staff on how to improve passenger and checked baggage screening measures. With regard to TSOs, PASS is not intended solely to measure TSO compliance with SOPs. Rather, PASS will be used by TSA to assess agency personnel at all levels on various competencies, including training and development, readiness for duty, management skills, and technical proficiency.
There are three elements of the TSO technical proficiency component of PASS that are intended to measure TSO compliance with passenger checkpoint screening procedures: (1) quarterly observations conducted by FSD management staff of TSOs’ ability to perform particular screening functions in the operational environment, such as pat-down searches and use of the hand-held metal detector, to ensure they are complying with checkpoint screening SOPs; (2) quarterly quizzes given to TSOs to assess their knowledge of the SOPs; and (3) an annual, multipart knowledge and skills assessment. While the first two elements are newly developed, the third element—the knowledge and skills assessment—is part of the annual TSO recertification program that is required by the Aviation and Transportation Security Act (ATSA) and has been in place since October 2003. Collectively, these three elements of PASS are intended to provide a systematic method for monitoring whether TSOs are screening passengers and their carry-on items according to SOPs. TSA’s implementation of PASS is consistent with our internal control standards, which state that agencies should ensure that policies and procedures are applied properly. The first component of PASS (quarterly observations) is conducted by screening supervisors or screening managers, using a standard checklist developed by TSA headquarters, with input from TSA airport staff. There is one checklist used for each screening function, and TSOs are evaluated on one screening function per quarter. For example, the hand-held metal detector skills observation checklist includes 37 tasks to be observed, such as whether the TSO conducted a pat-down search to resolve any suspect areas. The second component of PASS (quarterly quizzes) consists of multiple-choice questions on the standard operating procedures. For example, one of the questions on the PASS quiz is “What is the correct place to start an HHMD outline on an individual: (a) top of the head, (b) top of the feet, or (c) top of the shoulder?” The third component of PASS is the annual knowledge and skills assessment, a component of the annual recertification program that evaluates the technical proficiency of TSOs. This assessment is composed of three modules: (1) knowledge of standard operating procedures, (2) recognition of threat objects on an X-ray image, and (3) demonstration of screening functions. According to TSA officials, while recertification testing is not a direct measure of operational compliance with passenger checkpoint screening SOPs, recertification testing, particularly module 1 and module 3, is an indicator of whether TSOs are capable of complying with SOPs. TSA officials stated that if a TSO does not have knowledge of SOPs and if the TSO cannot demonstrate basic screening functions as outlined in the SOPs, then the TSO will likely not be able to comply with SOPs when performing in the operating environment. Table 5 provides a summary of each of these modules. FSDs we interviewed reported that they have faced resource challenges in implementing PASS. Specifically, as of July 2006, 9 of 24 FSDs we interviewed said they experienced difficulties in implementing PASS due to lack of available staff to conduct the compliance-related evaluations. TSA officials stated that they have automated many of the data-entry functions of PASS to relieve the field of the burden of manually entering this information into the PASS online system. 
For example, all scores related to the quarterly quiz and skill observation components are automatically uploaded, and PASS is linked to TSA's online learning center database to eliminate the need to manually enter TSOs' learning history. In addition, the TSA Assistant Secretary said that FSDs were given the option of delaying implementation of PASS if they were experiencing resource challenges.

TSA also conducts local and national covert tests, which are used to evaluate, in part, the extent to which noncompliance with the SOPs affects TSOs' ability to detect simulated threat items hidden in accessible property or concealed on a person. TSA first issued guidance on its local covert testing program—known as Screener Training Exercises and Assessments (STEA)—in February 2004. STEA testing is conducted by FSD staff at airports, who determine the frequency at which STEA tests are conducted as well as which types of STEA tests are conducted. According to the STEA results reported by TSA between March 2004 and February 2006, TSOs' noncompliance with the SOP accounted for some of the STEA test failures. TSOs' lack of proficiency in skills or procedures, which may affect TSOs' ability to comply with procedures, was also cited as the reason for some of the STEA test failures. TSOs who fail STEA tests are required to take remedial training to help them address the reasons for their failure.

FSDs we interviewed reported that they have faced resource challenges in conducting STEA tests. Specifically, even though all 24 FSDs we interviewed as of July 2006 said that they have conducted STEA tests, 10 of these FSDs said that the lack of available staff made it difficult to conduct these tests. When asked how they planned to address FSDs' concerns regarding a lack of available staff to complete STEA tests, TSA headquarters officials told us that they are considering resource alternatives for implementing the STEA program, but could not provide us with the specific details of these plans. Until the resource limitations that have restricted TSA's use of its compliance monitoring tools have been fully addressed, TSA may not have assurance that TSOs are screening passengers according to the SOP.

As previously discussed, TSA's Office of Inspection initiated its national covert testing program in September 2002. National covert tests are conducted by TSA headquarters-based inspectors who carry simulated threat objects hidden in accessible property or concealed on their person through airport checkpoints, and in cases where TSOs fail to detect threat objects, the inspectors identify the reasons for failure. During September 2005, TSA implemented a revised covert testing program to focus more on catastrophic threats—threats that can bring down or destroy an aircraft. According to Office of Inspection officials, TSOs may fail to detect threat objects during covert testing for various reasons, including limitations in screening technology, lack of training, limitations in the procedures TSOs must follow to conduct passenger and bag searches, and TSOs' noncompliance with screening checkpoint SOPs. Office of Inspection officials also said that one test could be failed due to multiple factors, and that it is difficult to determine the extent to which any one factor contributed to the failure. TSOs who fail national covert tests, like those who fail STEA tests, are also required to take remedial training to help them address the reasons for failure.
The alleged August 2006 terrorist plot to detonate liquid explosives onboard multiple U.S.-bound aircraft highlighted the need for TSA to continuously reassess and revise, when deemed appropriate, existing passenger checkpoint screening procedures to address threats against the commercial aviation system. In doing so, TSA faces the challenge of securing the aviation system while facilitating the free movement of people. Passenger screening procedures are only one element that affects the effectiveness and efficiency of the passenger checkpoint screening system. Securing the passenger checkpoint screening system also involves the TSOs who are responsible for conducting the screening of airline passengers and their carry-on items, and the technology used to screen passengers and their carry-on items. We believe that TSA has implemented a reasonable approach to modifying passenger checkpoint screening procedures through its consideration of risk factors (threat and vulnerability information), the day-to-day experience of TSA airport staff, and complaints and concerns raised by passengers, and through its efforts to balance security, efficiency, and customer service. We are also encouraged by TSA’s efforts to conduct operational testing and use data and metrics to support its decisions to modify screening procedures. We acknowledge the difficulties in assessing the impact of proposed screening procedures, particularly with regard to the extent to which proposed procedures would deter terrorists from attempting to carry out an attack onboard a commercial aircraft. However, there are existing methods, such as covert testing and CBP’s COMPEX—a method that evaluates the effectiveness of CBP’s procedures for selecting international airline passengers for secondary screening—that could be used by TSA to assess whether proposed screening procedures enhance detection capability. It is also important for TSA to fully assess available data to determine the extent to which TSO resources would be freed up to perform higher-priority procedures, when this is the intended effect. Without collecting the necessary data or conducting the necessary analysis that would enable the agency to assess whether proposed SOP modifications would have the intended effect, it may be difficult for TSA to determine how best to improve TSOs’ ability to detect explosives and other high-threat items and to allocate limited TSO resources. With such data and analysis, TSA would be in a better position to justify its SOP modifications and to have a better understanding of how the changes affect TSO resources. Additionally, because TSA did not always document the basis on which SOP modifications were proposed or the reasoning behind decisions to implement or not implement proposed modifications, TSA may not be able to justify SOP modifications to Congress and the traveling public. While we are encouraged that TSA’s documentation of its decisions regarding the SOP modifications made in response to the alleged August 2006 liquid explosives terrorist plot was improved compared to earlier documentation, it is important for TSA to continue to work to strengthen its documentation efforts. Such improvements would enable TSA officials responsible for making SOP decisions in the future to understand how significant SOP decisions were made historically—a particular concern considering the restructuring and staff turnover experienced by TSA. 
As shown by TSA’s covert testing results, the effectiveness of passenger checkpoint screening relies, in part, on TSOs’ compliance with screening procedures. We are, therefore, encouraged by TSA’s efforts to strengthen its monitoring of TSO compliance with passenger screening procedures. We believe that TSA has implemented a reasonable process for monitoring TSO compliance and that this effort should assist TSA in providing reasonable assurance that TSOs are screening passengers and their carry-on items according to screening procedures. Given the resource challenges FSDs identified in implementing the various methods for monitoring TSO compliance, it will be important for TSA to take steps, such as automating PASS data entry functions, to address such challenges. To help strengthen TSA’s evaluation of proposed modifications to passenger checkpoint screening SOPs and TSA’s ability to justify its decisions to implement or not implement proposed SOP modifications, in the March 2007 report that contained sensitive security information, we recommended that the Secretary of Homeland Security direct the Assistant Secretary of Homeland Security for TSA to take the following two actions: (1) when operationally testing proposed SOP modifications, develop sound evaluation methods, when possible, that can be used to assist TSA in determining whether proposed procedures would achieve their intended result, such as enhancing TSA’s ability to detect prohibited items and suspicious persons and freeing up existing TSO resources that could be used to implement proposed procedures; and (2) for future proposed SOP modifications that TSA senior leadership determines are significant, generate and maintain documentation to include, at minimum, the source, intended purpose, and reasoning behind decisions to implement or not implement proposed modifications. On March 6, 2007, we received written comments on the draft report, which are reproduced in full in appendix III. DHS generally concurred with our recommendations and outlined actions TSA plans to take to implement the recommendations. DHS stated that it appreciates GAO’s conclusion that TSA has implemented a reasonable approach to modifying passenger checkpoint screening procedures through its assessment of risk factors, the expertise of TSA employees, and input from the traveling public and other stakeholders, as well as TSA’s efforts to balance security, operational efficiency, and customer service while evaluating proposed changes. With regard to our recommendation to develop sound evaluation methods, when possible, to help determine whether proposed SOP modifications would achieve their intended result, DHS stated that TSA plans to make better use of generally accepted research design principles and techniques when operationally testing proposed SOP modifications. For example, TSA will consider using random selection, representative sampling, and control groups in order to isolate the impact of proposed SOP modifications from the impact of other variables. DHS also stated that TSA’s Office of Security Operations is working with subject matter experts to ensure that operational tests are well designed and executed, and produce results that are scientifically valid and reliable. 
As discussed in this report, employing sound evaluation methods for operationally testing proposed SOP modifications will enable TSA to have better assurance that new passenger checkpoint screening procedures will achieve their intended purpose, which may include improved allocation of limited TSO resources and enhanced detection of explosives and other high-threat objects during the passenger screening process. However, DHS stated, and we agree, that the need to make immediate SOP modifications in response to imminent terrorist threats may preclude operational testing of some proposed modifications. Concerning our recommendation regarding improved documentation of proposed SOP modifications, DHS stated that TSA intends to document the source, intent, and reasoning behind decisions to implement or reject proposed SOP modifications that TSA senior leadership deems significant. Documenting this type of information will enable TSA to justify significant modifications to passenger checkpoint screening procedures to internal and external stakeholders, including Congress and the traveling public. In addition, considering the ongoing personnel changes TSA has experienced, such documentation should enable future decision makers in TSA to understand on what basis the agency historically made decisions to develop new or revise existing screening procedures. In addition to commenting on our recommendations, DHS provided comments on some of our findings, which we considered and incorporated in the report where appropriate. One of DHS’s comments pertained to TSA’s evaluation of the prohibited items list change. Specifically, while TSA agrees that the agency could have conducted a more methodologically sound evaluation of the impact of the prohibited items list change, TSA disagrees with our assessment that the prohibited items list change may not have significantly contributed to TSA’s efforts to free up TSO resources to focus on detection of high-threat items, such as explosives. As we identified in this report, based on interviews with FSDs, airport visits to determine the types of items confiscated at checkpoints, and a study to determine the amount of time taken to conduct bag searches and the number of sharp objects collected as a result of these searches, TSA concluded that the prohibited items list change would free up TSO resources. DHS also stated that interviews with TSOs following the prohibited items list change confirmed that the change had freed up TSO resources. However, based on our analysis of the data TSA collected both prior to and following the prohibited items list change, we continue to believe that TSA did not conduct the necessary analysis to determine the extent to which the removal of small scissors and tools from the prohibited items list would free up TSO resources. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 21 days from its date. At that time, we will send copies of the report to the Secretary of the Department of Homeland Security, the TSA Assistant Secretary, and interested congressional committees as appropriate. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact me at (202) 512-3404 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. 
GAO staff who made major contributions to this report are listed in appendix IV. To assess the Transportation Security Administration’s (TSA) process for modifying passenger checkpoint screening procedures and how TSA monitors compliance with these procedures, we addressed the following questions: (1) How and on what basis did TSA modify passenger screening procedures and what factors guided the decisions to do so? (2) How does TSA determine whether TSOs are complying with the standard procedures for screening passengers and their carry-on items? To address how TSA modified passenger screening procedures and what factors guided the decisions to do so, we obtained and analyzed documentation of proposed standard operating procedure (SOP) changes considered between April 2005 and September 2005, as well as threat assessments and operational studies that supported SOP modifications. The documentation included a list of proposed changes considered, as well as the source, the intended purpose, and in some cases the basis for recommending the SOP modification—that is, the information, experience, or event that encouraged TSA officials to propose the modification—and the reasoning behind decisions to implement or reject proposed SOP modifications. We also obtained documentation of the proposed SOP changes considered by TSA’s Explosives Detection Improvement Task Force, which was the deliberating body for proposed changes that were considered between October 2005 and December 2005. We also reviewed and analyzed similar documentation for proposed SOP modifications considered between August 2006 and November 2006 in response to the alleged terrorist plot to detonate liquid explosives onboard multiple aircraft en route from the United Kingdom to the United States. We included modifications to passenger checkpoint screening procedures related to this particular event because they provided the most recent information available on TSA’s approach to modifying screening procedures in response to an immediate perceived threat to civil aviation. The documentation included notes from internal meetings, slides for internal and external briefings on proposed SOP modifications, data on customer complaints and screening efficiency, and the results of liquid explosives testing conducted by the Department of Homeland Security (DHS) Science and Technology Directorate and the Federal Bureau of Investigation (FBI). We also obtained each revision of the passenger checkpoint screening SOP generated between April and December 2005 and between August and November 2006, as well as accompanying documentation that highlighted all of the changes made in each revision. In addition, we met with TSA headquarters officials who were involved in the process for determining whether proposed passenger checkpoint screening procedures should be implemented. We also met with officials in the DHS Science and Technology Directorate as well as the FBI to discuss the methodology and results of their liquid explosives tests, which were used to support TSA’s decisions to modify the SOP in September 2006. We also met with TSA Office of Inspection and DHS Office of Inspector General staff to discuss their covert testing at passenger checkpoints and the recommended changes to the passenger checkpoint screening SOP that were generated based on testing results. We also obtained and analyzed data and information collected by TSA on the proposed procedures that were evaluated in the operational environment. 
In addition, we met or conducted phone interviews with Federal Security Directors (FSD) and their management staff, including Assistant FSDs and Screening Managers, and Transportation Security Officers (TSO) with passenger screening responsibilities, at 25 commercial airports to gain their perspectives on TSA’s approach to revising the passenger checkpoint screening SOP. We also met with officials from four aviation associations—the American Association of Airport Executives, Airports Council International, the Air Transport Association, and the Regional Airline Association—to gain their perspectives on this objective. Finally, we met with five aviation security experts to obtain their views on methods for assessing the impact of proposed passenger checkpoint screening procedures. We selected these experts based on their depth of experience in the field of aviation security, employment history, and their recognition in the aviation security community. However, the views of these experts may not necessarily represent the general view of other experts in the field of aviation security. We compared TSA’s approach to revising its passenger checkpoint screening SOP with the Comptroller General’s standards for internal control in the federal government and risk management guidance. To address how TSA determines whether TSOs are complying with the standard procedures for screening passengers and their carry-on items, we obtained documentation of compliance-related initiatives, including guidance, checklists, and SOP quizzes used to assess TSO compliance under the Performance Accountability and Standards System (PASS), and guidance provided to FSDs for developing local compliance audit programs. We also obtained the fiscal year 2005 recertification and Screener Training Exercises and Assessments (STEA) test results, which were used, in part, to assess TSO compliance with and knowledge of the passenger checkpoint screening SOP. In addition, we reviewed the results of covert testing conducted by TSA’s Office of Inspection, which were also used, in part, to assess TSO compliance with the passenger checkpoint screening SOP. We assessed the reliability of the compliance-related data we received from TSA, and found the data to be sufficiently reliable for our purposes. In addition, we interviewed TSA headquarters officials who were responsible for overseeing efforts to monitor TSO compliance with standard operating procedures. This included officials in the Office of Security Operations, Office of Human Capital, and the Office of Operational Process and Technology. Our audit work also included visits to or phone conferences with 25 airports, where we interviewed FSDs, members of their management teams, and Transportation Security Officers with passenger screening responsibilities. However, the perspectives of these FSDs and their staff cannot be generalized across all airports. In July 2006, we submitted two sets of follow-up questions to FSD staff, related to their experiences with implementing PASS and STEA tests. We also obtained documentation of local compliance audit programs from the FSD staff at several of these airports. We compared TSA’s approach for monitoring TSO compliance with the Comptroller General’s standards for internal control in the federal government. 
As previously mentioned, we conducted site visits and/or phone interviews at 25 airports (8 category X airports, 7 category I airports, 4 category II airports, 4 category III airports, and 2 category IV airports) to discuss issues related to TSA’s approach to revising the passenger checkpoint screening SOP, and the agency’s approach to monitoring TSO compliance with the SOP. We visited 7 of these airports during the design phase of our study. These airports were selected based on variations in size and geographic location, and whether they were operationally testing any proposed passenger checkpoint screening procedures or passenger screening technology. We also selected 2 airports that participated in the Screening Partnership Program. After visiting the 7 airports during the design phase of our review, we selected an additional 15 airports to visit based on variations in size, geographic distribution, and performance on compliance-related assessments. Specifically, we obtained and analyzed fiscal year 2005 STEA results and fiscal year 2005 recertification testing results to identify airports across a range of STEA and recertification scores. We also visited 3 additional airports that operationally tested the proposed Unpredictable Screening Process (USP) and the Screening Passengers by Observation Technique (SPOT) procedure. In July 2006, we received answers to follow-up questions on experiences with pilot testing of SPOT or USP from 19 FSDs, including 14 who were not part of our initial rounds of interviews. Nine of these 14 FSDs were from airports that participated in SPOT pilots; the remaining 5 were from airports that participated in USP pilots. We conducted our work from March 2005 through March 2007 in accordance with generally accepted government auditing standards. Of the 92 proposed screening changes considered by TSA between April 2005 and December 2005, 63 were submitted by TSA field staff, including Federal Security Directors and Transportation Security Officers. Thirty proposed screening changes were submitted by TSA headquarters officials. Last, TSA senior leadership, such as the TSA Assistant Secretary, recommended 5 of the 92 proposed screening changes considered during this time period. One SOP modification was also proposed through a congressional inquiry. TSA’s solicitation of input from both field and headquarters officials regarding changes to the passenger checkpoint screening SOP was consistent with internal control standards, which suggest that there be mechanisms in place for employees to recommend improvements in operations. The FSDs with whom we met most frequently identified periodic conference calls with the Assistant Secretary, the SOP Question and Answer mailbox, or electronic mail to Security Operations officials as the mechanisms by which they recommended changes to the SOP. The TSOs with whom we met identified their chain of command and the SOP Question and Answer mailbox as the primary mechanisms by which they submitted suggestions for new or revised procedures. According to TSA officials, the SOP mailbox entails FSDs and their staff, including TSOs, submitting suggestions, questions, or comments to TSA’s Security Operations division via electronic mail, either directly or through their supervisors. 
Submissions are then compiled and reviewed by a single Security Operations official, who generates responses to the questions that have clear answers. However, for submissions for which the appropriate response is not obvious or for submissions that include a suggestion to revise the SOP, this official forwards the submissions to other Security Operations officials for further deliberation. SOP mailbox responses are provided to all TSA airport officials. If TSA headquarters revised a screening procedure based on a mailbox submission, the revision is noted in the mailbox response. Thirty of the screening changes considered by TSA between April 2005 and December 2005 were proposed by TSA headquarters officials, including Security Operations officials, who are responsible for overseeing implementation of checkpoint screening. According to Security Operations officials, they recommended changes to checkpoint screening procedures based on communications with TSA field officials and airport optimization reviews. Security Operations officials conduct optimization reviews to identify best practices and deficiencies in the checkpoint screening and checked baggage screening processes. As part of these reviews, Security Operations officials may also assess screening efficiency and whether TSOs are implementing screening procedures correctly. Other TSA headquarters divisions also suggested changes to passenger checkpoint screening procedures. For example, the Office of Law Enforcement recommended that there be an alternative screening procedure for law enforcement officials who are escorting prisoners or protectees. Previously, all armed law enforcement officers were required to sign a logbook at the screening checkpoint prior to entering the sterile area of the airport. Officials in the Office of Passengers with Disabilities also recommended changes to checkpoint screening procedures. For example, in the interest of disabled passengers, they suggested that TSOs be required to refasten all wheelchair straps and buckles undone during the screening process. Last, TSA senior leadership suggested 5 of the 92 procedural changes considered by TSA between April 2005 and December 2005. For example, TSA senior leadership proposed a procedure that would allow TSOs to conduct the pat-down procedure on passengers of the opposite gender at airports with a disproportionate ratio of male and female TSOs. In addition to the person named above, Maria Strudwick, Assistant Director; David Alexander; Christopher W. Backley; Amy Bernstein; Kristy Brown; Yvette Gutierrez-Thomas; Katherine N. Haeberle; Robert D. Herring; Richard Hung; Christopher Jones; Stanley Kostyla; and Laina Poon made key contributions to this report. 
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Transportation Security Administration's (TSA) most visible layer of commercial aviation security is the screening of airline passengers at airport checkpoints, where travelers and their carry-on items are screened for explosives and other dangerous items by transportation security officers (TSO). Several revisions made to checkpoint screening procedures have been scrutinized and questioned by the traveling public and Congress in recent years. For this review, GAO evaluated (1) TSA's decisions to modify passenger screening procedures between April 2005 and December 2005 and in response to the alleged August 2006 liquid explosives terrorist plot, and (2) how TSA monitored TSO compliance with passenger screening procedures. To conduct this work, GAO reviewed TSA documents, interviewed TSA officials and aviation security experts, and visited 25 airports of varying sizes and locations. Between April 2005 and December 2005, proposed modifications to passenger checkpoint screening standard operating procedures (SOP) were made for a variety of reasons, and while a majority of the proposed modifications--48 of 92--were ultimately implemented at airports, TSA's methods for evaluating and documenting them could be improved. SOP modifications were proposed based on the professional judgment of TSA senior-level officials and program-level staff. TSA considered the daily experiences of airport staff, complaints and concerns raised by the traveling public, and analysis of risks to the aviation system when proposing SOP modifications. 
TSA also made efforts to balance the impact on security, efficiency, and customer service when deciding which proposed modifications to implement, as in the case of the SOP changes made in response to the alleged August 2006 liquid explosives terrorist plot. In some cases, TSA tested proposed modifications at selected airports to help determine whether the changes would achieve their intended purpose. However, TSA's data collection and analyses could be improved to help TSA determine whether proposed procedures that are operationally tested would achieve their intended purpose. For example, TSA officials decided to allow passengers to carry small scissors and tools onto aircraft based on their review of threat information, which indicated that these items do not pose a high risk to the aviation system. However, TSA did not conduct the necessary analysis of data it collected to assess whether this screening change would free up TSOs to focus on screening for high-risk threats, as intended. TSA officials acknowledged the importance of evaluating whether proposed screening procedures would achieve their intended purpose, but cited difficulties in doing so, including time pressures to implement needed security measures quickly. Finally, TSA's documentation on proposed modifications to screening procedures was not complete. TSA documented the basis--that is, the information, experience, or event that encouraged TSA officials to propose the modifications--for 72 of the 92 proposed modifications. In addition, TSA documented the reasoning behind its decisions for half (26 of 44) of the proposed modifications that were not implemented. Without more complete documentation, TSA may not be able to justify key modifications to passenger screening procedures to Congress and the traveling public. TSA monitors TSO compliance with passenger checkpoint screening procedures through its performance accountability and standards system and through covert testing. Compliance assessments include quarterly observations of TSOs' ability to perform particular screening functions in the operating environment, quarterly quizzes to assess TSOs' knowledge of procedures, and an annual knowledge and skills assessment. TSA uses covert tests to evaluate, in part, the extent to which TSOs' noncompliance with procedures affects their ability to detect simulated threat items hidden in accessible property or concealed on a person. TSA airport officials have experienced resource challenges in implementing these compliance monitoring methods. TSA headquarters officials stated that they are taking steps to address these challenges. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
TVA is a multipurpose, independent, wholly owned federal corporation established by the Tennessee Valley Authority Act of 1933 (TVA Act). The TVA Act established TVA to improve the quality of life in the Tennessee River Valley by improving navigation, promoting regional agricultural and economic development, and controlling the floodwaters of the Tennessee River. To those ends, TVA erected dams and hydroelectric power facilities on the Tennessee River and its tributaries. To meet the need for more electric power during World War II, TVA expanded beyond hydropower, building coal-fired power plants. In the 1960s, TVA decided to add nuclear generating units to its power system. Today, TVA operates one of the nation’s largest power systems, having produced about 152 billion kilowatt-hours (kWh) of electricity in fiscal year 2000. The system consists primarily of 113 hydroelectric units, 59 coal-fired units, and 5 operating nuclear units. TVA sells power in seven states—Alabama, Georgia, Kentucky, Mississippi, North Carolina, Tennessee, and Virginia. TVA sells power at wholesale rates to 158 municipal and cooperative utilities that, in turn, distribute the power on a retail basis to nearly 8 million people in an 80,000 square mile region. TVA also sells power to a number of directly served large industrial customers and federal agencies. In 1959, the Congress amended the TVA Act to authorize TVA to use debt financing to pay for capital improvements for power programs. Under this legislation, the Congress required that TVA’s power program be “self- financing” through revenues from electricity sales. For capital needs in excess of internally generated funds, TVA was authorized to borrow by issuing bonds. TVA’s debt limit is set by the Congress and was initially established at $750 million in 1959. Since then, TVA’s debt limit has been increased four times by the Congress: to $1.75 billion in 1966, $5 billion in 1970, $15 billion in 1975, and $30 billion in 1979. As of September 30, 2000, TVA’s outstanding debt was $26.0 billion. TVA’s bonds are considered “government securities” for purposes of the Securities and Exchange Act of 1934 and are exempt from registration under the Securities Act of 1933. All of TVA’s bonds are publicly held, and several are traded on the bond market of the New York Stock Exchange. Since TVA’s first public issue in 1960, Moody’s Investors Service and Standard & Poor’s have assigned TVA’s bonds their highest credit rating—Aaa/AAA. To determine whether TVA’s bonds are explicitly or implicitly guaranteed by the federal government, we analyzed various documents, including Section 15d of the TVA Act, as amended, the Basic TVA Power Bond Resolution, TVA’s Information Statement, and the language included in TVA’s bond offering circulars. We also discussed this issue with bond analysts at two credit rating firms (Moody’s Investors Service and Standard & Poor’s) and TVA officials. To determine the opinion of bond analysts regarding the effect of an implicit or explicit guarantee on TVA’s bonds, we interviewed officials at two credit rating firms that rate TVA’s bonds to discuss their rating methodology for TVA and other electric utilities’ bonds. In addition, we reviewed recent reports issued by the credit rating agencies for any language about an implicit federal guarantee of TVA’s debt. As agreed with your offices, we did not attempt to determine what TVA’s bond rating would be without its ties to the federal government as a wholly owned government corporation. 
To determine the impact of TVA’s bond rating on its annual interest expense, we obtained information from TVA about its outstanding bonds as of September 30, 2000. We then obtained comparable information on the average bond ratings and bond yield rates applicable to public utilities for the various bond rating categories. Using the average bond yield rates for public utility debt in the various bond rating categories, we used two approaches to estimate the amount of TVA’s annual interest expense if its bonds outstanding at September 30, 2000, carried the lower ratings. Additional information on our scope and methodology is contained in appendix I. We conducted our review from July 2000 through April 2001 in accordance with generally accepted government auditing standards. We requested written comments from TVA on a draft of this report. TVA’s Chief Financial Officer provided us with oral comments, which we incorporated, as appropriate. The TVA Act states that the federal government does not guarantee the principal of, or interest on, TVA’s bonds. However, the perception of the bond analysts at the two credit rating firms we contacted is that since TVA is a wholly owned government corporation, the federal government would support debt service and would not allow a default to occur. Both of the credit rating firms stated that this perception of an implicit federal guarantee is one of the primary reasons that TVA’s bonds have received the highest credit rating. One of the firms cited two other factors—TVA’s legislative protections from competition and its strong operational performance—as additional reasons for assigning TVA’s bonds its highest rating. The TVA Act specifically states that the federal government does not guarantee TVA bonds. TVA includes similar “no federal guarantee” language in its Basic TVA Power Bond Resolution, Information Statement, and bond offering circulars. The relevant language is as follows: Section 15d of the TVA Act, as amended, 16 USC § 831n-4—“Bonds issued by the Corporation hereunder shall not be obligations of, nor shall payment of the principal thereof or interest thereon be guaranteed by, the United States.” Basic TVA Power Bond Resolution, Section 2.2 Authorization and Issuance of Bonds—“They shall be payable as to both principal and interest solely from Net Power Proceeds and shall not be obligations of or guaranteed by the United States of America.” Information Statement—“Evidences of Indebtedness are not obligations of the United States of America, and the United States of America does not guarantee the payment of the principal of or interest on any Evidences of Indebtedness.” TVA bond offering circulars—“The interest and principal on the Bonds are payable solely from Net Power Proceeds and are not obligations of, or guaranteed by, the United States of America.” Although TVA’s bonds expressly disclaim a federal guarantee, the two bond rating firms we contacted perceive TVA’s bonds to be implicitly backed by the federal government. This perception of an implied federal guarantee is one of the primary reasons that TVA’s bonds have received the highest credit rating. 
For example, Standard & Poor’s, in its January 2001 analysis of TVA’s global power bonds, stated that “the rating reflects the US government’s implicit support of TVA and Standard & Poor’s view that, without a binding legal obligation, the federal government will support principal and interest payments on certain debt issued by entities created by Congress.” Further, in its June 2000 opinion update on TVA, Moody’s Investors Service (Moody’s) reported that “the Aaa rating on Tennessee Valley Authority (TVA) power bonds derives from its strong operational performance and its status as a wholly owned corporate agency of the US Government.” In addition, Moody’s reported that although the federal government does not guarantee TVA’s bonds, the government would not allow a default on TVA’s debt because of the impact it would have on the cost of debt issued by government-sponsored enterprises, such as Fannie Mae and Freddie Mac. Government-sponsored enterprises are federally established, privately owned corporations designed to increase the flow of credit to specific economic sectors. As in the case of TVA, the government does not guarantee the debt of these enterprises. Also as with TVA, there is a perception in the investment community that the federal government would not allow these enterprises to default on their obligations. In its January 2001 analysis of TVA’s global power bonds, Standard & Poor’s acknowledged that its rating of these bonds did not reflect TVA’s underlying business or financial condition and that the rating of these bonds would have been lower without TVA’s ties to the federal government. In addition, a Moody’s official stated that financial statistics and ratios for other electric utilities are significantly stronger than those for TVA in each rating category and that government ownership was a fundamental underpinning of the Aaa rating it assigned to TVA’s debt. Moody’s and Standard & Poor’s generally use a complex methodology involving both quantitative and qualitative analyses when determining ratings for electric utilities. For example, Moody’s examines the volatility and reliability of cash flows, the contributions of the utility to the profits of its corporate parent (if any), and how the utility is positioning itself to operate in a competitive environment. Also included in Moody’s analysis is the utility’s ability to balance business and financial risk with performance. Similarly, Standard & Poor’s measures financial strength by a utility’s ability to generate consistent cash flow to service its debt, finance its operations, and fund its investments. In addition, Standard & Poor’s analyzes business risk by examining the utility’s operating characteristics such as regulatory environment, reliability, and management. Moody’s and Standard & Poor’s rate debt in broad categories, using A, B, and C, with Aaa/AAA being the highest rating. Triple, double, and single characters distinguish the gradations of credit/investment quality. For example, a rating of Aaa/AAA indicates exceptional financial security, Baa/BBB indicates adequate financial security, and Ba/BB or below indicates questionable to poor financial security. Debt issues rated in the four highest categories, Aaa/AAA, Aa/AA, A, and Baa/BBB, generally are recognized as investment-grade. Table 1 describes the investment-grade rating categories used by Moody’s and Standard & Poor’s. Debt rated Ba/BB or below generally is referred to as speculative grade. 
In addition, Moody’s applies numerical modifiers (1, 2, and 3), and Standard & Poor’s uses “plus” and “minus” signs, in each rating category from Aa/AA through Caa/CCC in their corporate bond rating systems. The modifier 1 and “plus” indicate that the issuer/obligation ranks in the higher end of a rating category; the modifier 3 and “minus” indicate a ranking in the lower end. According to a Moody’s official, the firm places less significance on financial factors in analyzing TVA debt than in analyzing the debt of other electric utilities. Because of TVA’s ties to the federal government, Moody’s considers other factors more important in its assessment of TVA. Specifically, Moody’s looks at how TVA will react to its changing operating environment and places “considerable value” on the legislative framework in which TVA operates. For example, in its June 2000 analysis of TVA, Moody’s reported that key provisions in the TVA Act and the Energy Policy Act of 1992 (EPAct) provide credit protection for bondholders. Under the TVA Act, TVA’s Board of Directors is required to set rates at levels sufficient to generate revenues to cover operating and financing costs. EPAct provides TVA with certain protections from competition. Under EPAct, TVA is exempt from having to allow other utilities to use its transmission lines to transmit power to customers within TVA’s service territory. Further, the Moody’s official stated, as long as TVA is able to set its own rates and to benefit from legislative and other competitive advantages over other utilities, Moody’s will continue to assign TVA’s bonds a Aaa rating. As shown in figure 1, of the 119 electric utilities rated by Moody’s as of October 2000, TVA was the only utility rated Aaa. The ratings of other electric utilities range from a high of Aa1 to a low of Ba2, with an average rating of A3. Figure 1 shows the number of utilities in each rating category compared to TVA. As noted previously, the TVA Act authorizes TVA to issue and sell bonds to assist in financing its power program. Investor-owned electric utilities also use debt financing, but unlike TVA, they can and do issue common and preferred stock to finance capital needs. Figure 2 shows the capital structure of electric utilities by rating category. It also shows that, in general, electric utilities that have obtained a greater portion of financing through debt have lower credit ratings. However, even though the capital structure of TVA consists entirely of debt, and, as illustrated in our February 2001 report, it has higher fixed financing costs and less financial flexibility than its likely competitors, TVA remains the only AAA-rated electric utility in the United States. As a result of TVA’s high bond ratings, the private lending market has provided TVA with access to billions of dollars of financing at low interest rates, an advantage that results in lower interest expense than TVA would incur with a lower rating. To determine the impact of TVA’s bond rating on its interest expense, we estimated what TVA’s annual interest expense on its bonds outstanding at September 30, 2000, would have been if the debt had been given lower investment-grade ratings. Using two different methodologies, we obtained similar results. In the first methodology, we compared the coupon rate of each of TVA’s bonds outstanding at September 30, 2000, to the average bond yield rates applicable to public utility bonds with similar terms at the time of issuance for each investment-grade rating category. 
For example, TVA’s Aaa-rated 2000 Series E Power Bonds that were outstanding at September 30, 2000, have a coupon rate of 7.75 percent. When these bonds were issued on February 16, 2000, bond yields for public utility debt averaged 8.16 percent. In total, using the first methodology, we found that the annual interest expense of TVA’s bonds outstanding at September 30, 2000, would have been between $137 million and $235 million (about 2 to 3 percent of fiscal year 2000 total expenses) higher if the debt had been given lower investment-grade bond ratings. In the second methodology, we categorized TVA’s bonds into long-term (at least 20 years to maturity at time of issuance) and intermediate-term (less than 20 years to maturity at time of issuance) debt issues. We then identified the difference between TVA’s average coupon interest rates, grouped as long-term and intermediate-term, on its bonds outstanding at September 30, 2000, and the average bond yield rates, grouped as long-term and intermediate-term, for public utilities for the various investment-grade rating categories. Specifically, we compared the average coupon interest rate on TVA’s long-term bonds to the 9-year (1992–2000) average bond yield rates for long-term public utility bonds. Similarly, we compared the average coupon interest rate on TVA’s intermediate-term bonds to the 5-year (1996–2000) average bond yield rates for intermediate-term public utility bonds. The maturities and issuance periods of the public utility debt used are, in general, comparable to those of TVA’s bonds outstanding at September 30, 2000. For example, the average coupon interest rate for TVA’s bonds outstanding at September 30, 2000, with at least 20 years to maturity at time of issuance was 6.96 percent. In comparison, bond yield rates for the period 1992–2000 for public utility debt with at least 20 years to maturity averaged 7.82 percent. Using this methodology, we estimated that the annual interest expense on TVA’s bonds outstanding at September 30, 2000, would have been about $141 million to $245 million (about 2 to 4 percent of fiscal year 2000 total expenses) higher if its bonds had been rated lower. Table 2 shows the impact of lower bond ratings on annual interest expense using both methodologies. It is important to note that our analyses assumed that TVA’s coupon rates on its bonds corresponded to the bond yield rates of other lower-rated public utilities at the time TVA issued its bonds. Assuming that were the case, we estimated that TVA’s interest expense would have been higher by the amounts shown in table 2. If TVA’s debt were no longer perceived to be implicitly guaranteed by the federal government, the resulting impact on TVA’s interest expense would relate to future bonds and refinancings rather than to its bonds outstanding at September 30, 2000. TVA’s high bond rating results in lower interest expense, enhancing TVA’s competitive prospects by providing it with more financial flexibility to respond to financial or competitive challenges. While the criteria used to rate the bonds of TVA and other electric utilities are the same, they are weighted differently and, as a result, the basis for TVA’s bond rating is more nonfinancial in nature than that for other electric utilities. 
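To make the arithmetic behind these estimates concrete, the sketch below (in Python) applies the logic of the second methodology to the long-term group of bonds. The 6.96 percent average coupon and 7.82 percent average public utility yield come from the example above; the principal amount is an illustrative assumption, since the report does not break out how much of TVA’s $26.0 billion of outstanding debt was long-term versus intermediate-term, and the utility yield is treated as belonging to a single, unspecified rating category.

```python
# Sketch of Methodology 2: estimate the additional annual interest expense
# TVA would have incurred if its long-term debt had carried the average
# public utility yield for a lower rating category instead of TVA's actual
# average coupon rate.

def additional_annual_interest(principal, tva_avg_coupon, utility_avg_yield):
    """Difference in annual interest at the two rates, in dollars."""
    return principal * (utility_avg_yield - tva_avg_coupon)

TVA_LONG_TERM_COUPON = 0.0696     # from the report: average long-term coupon
UTILITY_LONG_TERM_YIELD = 0.0782  # from the report: 1992-2000 average yield

# Illustrative assumption: the report does not give the long-term share of
# TVA's $26.0 billion of outstanding debt, so $20 billion is hypothetical.
LONG_TERM_PRINCIPAL = 20_000_000_000

delta = additional_annual_interest(
    LONG_TERM_PRINCIPAL, TVA_LONG_TERM_COUPON, UTILITY_LONG_TERM_YIELD
)
print(f"Estimated additional annual interest: ${delta / 1e6:,.0f} million")
```

With these assumed inputs, the 0.86-percentage-point spread implies roughly $172 million a year, which falls within the $141 million to $245 million range reported above; repeating the calculation for the intermediate-term group, and for each rating category's own average yield, completes the estimate.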
According to bond analysts, TVA’s high bond rating is largely based on the perception that its debt is federally backed because of its ties to the federal government as a wholly owned government corporation and its legislative protections from competition. If these conditions were to change, TVA’s bond rating would likely be lowered, which in turn would affect the cost of new debt. This would add to its already high interest expense and corresponding financial challenges in a competitive market. TVA’s Chief Financial Officer generally agreed with the report and provided oral technical and clarifying comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from its date. At that time, we will send copies of this report to appropriate House and Senate Committees; interested Members of Congress; TVA’s Board of Directors; The Honorable Spencer Abraham, Secretary of Energy; The Honorable Mitchell E. Daniels, Jr., Director, Office of Management and Budget; and other interested parties. The report will also be on GAO’s home page at http://www.gao.gov. We will make copies available to others upon request. Please call me at (202) 512-9508 if you or your staffs have any questions. Major contributors to this report are listed in appendix II. We were asked to answer specific questions regarding TVA’s financial condition. This report addresses the questions pertaining to TVA’s bond rating; specifically, (1) whether TVA’s bonds are explicitly or implicitly guaranteed by the federal government, including the opinion of bond analysts regarding the effect of any such guarantee, and (2) the impact of TVA’s bond rating on its annual interest expense. As agreed with your offices, we issued a separate report on February 28, 2001, on the three other issues regarding TVA’s (1) debt and deferred assets, (2) financial condition compared to its likely competitors, and (3) potential stranded costs. To determine whether TVA’s bonds are explicitly or implicitly guaranteed by the federal government, we reviewed prior GAO products discussing TVA’s bonds; reviewed and analyzed the section of the TVA Act pertaining to TVA’s bonds; reviewed and analyzed various TVA documents, including the Basic TVA Power Bond Resolution, TVA’s Information Statement, and the language included in TVA’s outstanding bond offerings at September 30, 2000; interviewed bond analysts at Moody’s and Standard & Poor’s; and interviewed TVA officials. To determine the opinion of bond analysts regarding the effect of any such guarantee, we interviewed officials at the credit rating firms that rate TVA’s bonds— Moody’s and Standard & Poor’s; and reviewed and analyzed documents issued by Moody’s and Standard & Poor’s on their methodology for rating TVA and other electric utilities. 
To determine the impact of TVA’s bond rating on its annual interest expense, we

- obtained information from TVA about its outstanding bonds at September 30, 2000;
- reconciled that information to TVA’s audited financial statements;
- reviewed information pertaining to TVA’s outstanding debt contained in its annual reports;
- reviewed a report issued by the Department of Energy’s Energy Information Administration that assessed the impact of TVA’s bond rating on its interest expense;
- interviewed Moody’s regarding the availability of historical bond yield data by rating category for electric utilities and public utilities;
- obtained Moody’s information on the average bond yields applicable to public utilities in the various bond rating categories from Standard & Poor’s DRI (long-term) and Moody’s Investors Service Credit Perspectives (intermediate-term); and
- estimated the additional annual interest expense on TVA’s bonds outstanding at September 30, 2000, using the average bond yield rates for public utilities in various investment-grade rating categories.

Using Moody’s public utility long-term and intermediate-term (unweighted) bond yield data in various investment-grade rating categories, we applied two methods for estimating what the additional annual interest expense on TVA’s bonds outstanding at September 30, 2000, would have been if TVA’s debt were rated lower. Our analysis considered the characteristics of TVA’s bonds, such as date of issuance and term; however, we did not assess the effect of call provisions.

Under Methodology 1, we

- analyzed TVA’s annual interest expense on its bonds outstanding at September 30, 2000, to determine, for each issuance outstanding, the (1) coupon rate, (2) date of issuance, (3) term, and (4) maturity;
- identified the average bond yield rates applicable to public utility bonds with similar terms at the time of issuance of each of TVA’s bonds outstanding at September 30, 2000, in the Aa/AA, A, and Baa/BBB rating categories;
- calculated the annual interest expense for each of TVA’s debt issues in the various rating categories; and
- determined the estimated additional annual interest expense by taking the difference between TVA’s annual interest expense and the interest expense in the various rating categories.
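The per-issue computation in Methodology 1 is simple arithmetic: for each bond, the estimated additional annual interest expense is the outstanding principal multiplied by the spread between the comparable lower-rated public utility yield at issuance and TVA’s actual coupon. The following is a minimal sketch in Python; the bond records, coupons, and yields shown are hypothetical placeholders for illustration, not TVA’s actual figures.

    # Methodology 1 (sketch): estimate the additional annual interest expense
    # TVA would have incurred if its bonds had carried lower investment-grade
    # ratings at issuance. All bond data below are hypothetical placeholders.
    bonds = [
        # (principal, $ millions; TVA coupon; comparable public utility yield
        #  at issuance, by rating category, for bonds of similar term)
        (1000, 0.0775, {"Aa/AA": 0.0790, "A": 0.0805, "Baa/BBB": 0.0825}),
        (500, 0.0696, {"Aa/AA": 0.0715, "A": 0.0730, "Baa/BBB": 0.0755}),
    ]

    for rating in ("Aa/AA", "A", "Baa/BBB"):
        extra = sum(principal * (yields[rating] - coupon)
                    for principal, coupon, yields in bonds)
        print(f"{rating}: about ${extra:.1f} million in additional "
              f"annual interest expense")

Methodology 2, described next, changes only the rate inputs: each bond’s coupon is replaced by the (unweighted) average coupon for its term bucket (long-term or intermediate-term), and the comparison yield is the multiyear average public utility yield for that bucket and rating category rather than the yield at each bond’s issuance date.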
Under Methodology 2, we

- categorized TVA’s bonds into long-term (at least 20 years to maturity at time of issuance) and intermediate-term (less than 20 years to maturity at time of issuance);
- calculated TVA’s (unweighted) average coupon interest rates for long-term and intermediate-term debt by taking the average of the coupon rates applicable for each category of TVA’s bonds outstanding at September 30, 2000;
- calculated the annual interest expense for TVA’s long-term and intermediate-term debt using the average coupon interest rates calculated for each category;
- determined the (unweighted) average public utility bond yield rates in each of the various rating categories for calendar years 1992 to 2000 for long-term debt and 1996 to 2000 for intermediate-term debt, which, in general, are comparable to the maturities and time of issuance of TVA’s bonds outstanding at September 30, 2000;
- calculated the annual interest expense for TVA’s long-term and intermediate-term debt using the average public utility bond yield rates applicable to the various rating categories; and
- determined the estimated additional annual interest expense (long-term and intermediate-term) by taking the difference between TVA’s annual interest expense and the interest expense in the various rating categories.

We conducted our review from July 2000 through April 2001 in accordance with generally accepted government auditing standards. We obtained our information on public utility bond yield rates from authoritative sources (e.g., Standard & Poor’s DRI, Moody’s Investors Service) that provide and/or regularly use those data; however, we did not verify the accuracy of the bond yield data they provided. During the course of our work, we contacted the following organizations.

In addition to the individual named above, Richard Cambosos, Philip Farah, Jeff Jacobson, Joseph D. Kile, Mary B. Merrill, Donald R. Neff, Patricia B. Petersen, and Maria Zacharias made key contributions to this report.
The federal government’s intervention and involvement in the financial markets were carried out under a number of existing and recently enacted laws. This legal framework provided the financial resources for assistance, established the federal government’s authorities, and set the restrictions companies were required to comply with in exchange for the financial assistance. To help the public understand its involvement in the companies, in May 2009 the administration published a set of core principles that are to guide the government’s management of ownership interests in private firms. Most of the institutions that the government had or has an ownership interest in are regulated by one of several financial regulators, which have a role in overseeing the financial condition and operations of their regulated entities.

The federal government’s efforts in late 2008 to stabilize the financial markets are not its first intervention in private markets during economic downturns. The government has previously undertaken large-scale financial assistance efforts, including to private companies. For example, in the 1970s and early 1980s Congress created separate financial assistance programs totaling more than $12 billion to stabilize Conrail, Lockheed, and Chrysler, with most of the funds being distributed in the form of loans or loan guarantees. Most recently, in response to the most severe financial crisis since the Great Depression, Congress provided Treasury additional authority to stabilize the financial system. In particular:

In July 2008, Congress passed the Housing and Economic Recovery Act of 2008 (HERA), which established FHFA—the agency responsible for monitoring the safety and soundness and the housing missions of the Enterprises and the other housing government-sponsored enterprises, namely, the Federal Home Loan Banks. Among other things, HERA provided expanded authority to place the Enterprises in conservatorship or receivership and gave Treasury certain authorities to provide financial support to the Enterprises. In accordance with HERA, on September 6, 2008, FHFA placed the Enterprises into conservatorship because of concern that their deteriorating financial condition ($5.4 trillion in outstanding obligations) would destabilize the financial system. The goals of the conservatorships are to preserve and conserve the assets and property of the Enterprises and enhance their ability to fulfill their missions. FHFA has the authority to manage the Enterprises and maintains the powers of the board of directors, officers, and shareholders. Treasury agreed to provide substantial financial support so that the Enterprises could continue as going concerns to support mortgage financing; subsequently, the Federal Reserve Board committed to a variety of activities, including purchasing substantial amounts of their debt and securities to support housing finance, housing markets, and the financial markets more generally.

In October 2008, Congress passed EESA, which authorized the creation of TARP to, among other things, buy up to $700 billion in troubled assets, such as mortgage-backed securities and any other financial instrument that the Secretary of the Treasury, in consultation with the Chairman of the Federal Reserve Board, determined it needed to purchase to help stabilize the financial system. EESA created OFS within Treasury to administer TARP, which comprises a number of programs that were designed to address various aspects of the unfolding financial crisis.
Early in the program, Treasury determined that providing capital infusions would be the fastest and most effective way to address the crisis. In return for these capital infusions, Treasury received equity in the hundreds of companies that have participated in the program. In exchange, TARP recipients were subject to certain requirements and restrictions, such as dividend requirements and limits on executive compensation. The American Recovery and Reinvestment Act of 2009 (Recovery Act) amended and expanded EESA’s executive compensation provisions and directed Treasury to require appropriate standards for executive compensation and corporate governance of TARP recipients. On June 10, 2009, Treasury adopted an interim final rule to implement the law for executive compensation and corporate governance, including limits on compensation, providing guidance on the executive compensation and corporate governance provisions of EESA, and setting forth certain additional standards pursuant to authority under EESA. The requirements for executive compensation generally include (1) limits on compensation that exclude incentives for senior executive officers to take unnecessary and excessive risks that threaten the value of TARP recipients; (2) provision for the recovery of any bonus, retention award, or incentive compensation paid to certain executives based on materially inaccurate statements of earnings, revenues, gains, or other criteria; (3) prohibition on “golden parachute” payments to certain executives; (4) prohibition on payment or accrual of bonuses, retention awards, or incentive compensation to certain executives; and (5) prohibition on employee compensation plans that would encourage manipulation of earnings reported by TARP recipients to enhance employees’ compensation. The regulation required the establishment of the Office of the Special Master for TARP Executive Compensation (Special Master) to review the compensation payments and structures of TARP recipients of “exceptional financial assistance,” which includes all of the companies in our study with the exception of the government-sponsored Enterprises. The Senior Preferred Stock Agreements between Treasury and the Enterprises, negotiated prior to EESA and the Recovery Act, included a requirement that FHFA consult with Treasury on executive compensation.

A number of programs under TARP—designed to help stabilize institutions and financial markets—have resulted in Treasury having an ownership interest in such institutions. The Capital Purchase Program (CPP) is the largest TARP program and at its peak had more than 700 participants, including Bank of America and Citigroup. Created in October 2008, it aimed to stabilize the financial system by providing capital to viable banks through the purchase of preferred shares and subordinated debentures. These transactions generally provide that the banks pay fixed dividends on the preferred shares, that the debentures accrue interest, and that the banks issue a warrant to purchase common stock, preferred shares, or additional senior debt instruments. The Targeted Investment Program (TIP), established in December 2008, was designed to prevent a loss of confidence in financial institutions that could (1) result in significant market disruptions, (2) threaten the financial strength of similarly situated financial institutions, (3) impair broader financial markets, and (4) undermine the overall economy.
Treasury determined the forms, terms, and conditions of any investments made under this program and considered institutions for approval on a case-by-case basis. Treasury required participating institutions to provide warrants or alternative consideration, as necessary, to minimize the long-term costs and maximize the benefits to the taxpayers, in accordance with EESA. Only two institutions participated in TIP, Bank of America and Citigroup, and both repurchased their preferred shares and trust preferred shares, respectively, from Treasury in December 2009. Treasury has terminated the program.

The Asset Guarantee Program (AGP) was created in November 2008 to provide a federal government guarantee for assets held by financial institutions that had been deemed critical to the functioning of the U.S. financial system. The goal of AGP was to encourage investors to keep funds in the institutions. According to Treasury, placing guarantee assurances against distressed or illiquid assets was viewed as another way to help stabilize the financial system. In implementing AGP, Treasury collected a premium on the risk assumed by the government that was paid in preferred shares, which were later exchanged for trust preferred shares. Citigroup terminated its participation on December 23, 2009. Treasury has since terminated AGP. While the asset guarantee was in place, no losses were claimed by Citigroup and no federal funds were paid out.

The AIG Investment Program—originally called the Systemically Significant Failing Institutions Program (SSFI)—was created in November 2008 to help avoid disruptions to financial markets from an institutional failure that Treasury determined would have broad ramifications for other institutions and market activities. AIG has been the only participant in this program and was provided the assistance because of its systemic importance to the financial system. The assistance provided under this program is reflected in the securities purchase agreements, which provided for Treasury’s purchase of preferred shares from AIG, entitled Treasury to dividends declared by AIG on these preferred shares, and included warrants to purchase common stock.

The Automotive Industry Financing Program (AIFP) was created in December 2008 to prevent a significant disruption to the U.S. automotive industry. Treasury determined that such a disruption would pose a systemic risk to financial market stability and have a negative effect on the U.S. economy. The program was authorized to provide funding to support automakers during restructuring, to ensure that auto suppliers to Chrysler and GM received compensation for their services and products, and to support automotive finance companies. AIFP provided sizeable loans to Chrysler and GM (including a loan to GM that was convertible into shares of GMAC that were purchased with the proceeds). Treasury loaned up to $1.5 billion to Chrysler Financial, which was fully repaid on July 14, 2009. Ultimately, the government obtained an equity stake through the restructurings and loan conversion.

The Capital Assistance Program (CAP), established in February 2009, was designed to help ensure that qualified financial institutions had sufficient capital to withstand severe economic challenges. These institutions were required to meet eligibility requirements substantially similar to those used for CPP.
A key component of CAP was the Supervisory Capital Assessment Program (SCAP), under which federal bank regulators, led by the Federal Reserve, conducted capital assessments, or “stress tests,” of large financial institutions. Participation in SCAP was mandatory for the 19 largest U.S. bank holding companies (those with risk-weighted assets of $100 billion or more as of December 31, 2008). The tests were designed to determine whether these companies had enough capital to absorb losses and continue lending even if economic and market conditions were worse than expected between December 2008 and December 2010. Institutions deemed not to have sufficient capital were given 6 months to raise private capital. In conjunction with the test, Treasury announced that it would provide capital through CAP to banks that needed additional capital but were unable to raise it through private sources. GMAC was the only institution determined to need additional capital assistance from Treasury. GMAC received the additional capital assistance through AIFP on December 30, 2009. Treasury announced the closure of CAP on November 9, 2009.

In addition to loans and guarantees, Treasury purchased or received various types of equity investments, ranging from common stock to subordinated debentures and warrants. Recognizing the challenges associated with the federal government having an ownership interest in the private market, the administration developed several guiding principles for managing its TARP investments. According to the principles issued in March 2009, the government will:

- Act as a reluctant shareholder. The government has no desire to own equity stakes in companies any longer than necessary and will seek to dispose of its ownership interests as soon as practical. The goal is to promote strong and viable companies that can quickly be profitable and contribute to economic growth and jobs without government involvement.
- Reserve the right to set up-front conditions. The government has the right to set up-front conditions to protect taxpayers, promote financial stability, and encourage growth. These conditions may include restructurings as well as changes to ensure a strong board of directors that selects management with a sound long-term vision to restore their companies to profitability and to end the need for government support as quickly as is practically feasible.
- Not interfere in the day-to-day management decisions of a company in which it is an investor. The government will not interfere with or exert control over day-to-day company operations. No government employees will serve on the boards or be employed by these companies.
- Exercise limited voting rights. As a common shareholder, the government will vote on only core governance issues, including the selection of a company’s board of directors and major corporate events or transactions. While protecting taxpayer resources, the government has said that it intends to be extremely disciplined in how it uses even these limited rights.

Federal financial regulators—the Federal Reserve, FHFA, FDIC, OCC, and the Office of Thrift Supervision—play a key role in regulating and monitoring financial institutions, including most of the institutions that received exceptional amounts of financial assistance. Because Bank of America, Citigroup, the Enterprises, and GMAC are all regulated financial institutions, they were not only monitored by Treasury as an investor but also continued to be regulated and overseen by their primary federal regulators.
Specifically, the Federal Reserve oversees bank holding companies—including Bank of America, Citigroup, and GMAC—to help ensure their financial solvency. As regulated institutions, Bank of America, Citigroup, and GMAC were subject to ongoing oversight and monitoring before they received any government financial assistance and will continue to be regulated and supervised by their regulator after the assistance has been repaid. FHFA regulates and supervises the Enterprises and established their conservatorships in 2008.

The Federal Reserve’s program for supervising large, complex banking organizations is based on a “continuous supervision” model that assigns a team of examiners, headed by a central point of contact, to each institution. The Federal Reserve regularly rates each bank holding company’s operations, including its governance structure. Throughout the crisis, the number of staff dedicated to the largest institutions has increased, as have the oversight of and involvement in supervising the financial condition and operations of the institutions. In addition to its bank holding company regulatory and supervisory responsibilities, the Federal Reserve conducts the nation’s monetary policy by influencing monetary and credit conditions in the economy in pursuit of maximum employment, stable prices, and moderate long-term interest rates. Also, under unusual and exigent circumstances, the Federal Reserve has emergency authority to assist a financial firm that is not a depository institution. The Federal Reserve used this authority to help address the recent financial crisis, which also resulted in the government acquiring an ownership interest in AIG.

Subsidiary banks of Bank of America, Citigroup, and GMAC are supervised by other federal regulators, including OCC and FDIC. For example, OCC supervises Citibank—Citigroup’s national bank. In addition, FDIC oversees the banks’ condition and operations to gauge their threat to the deposit insurance fund. It also is the primary federal supervisor of GMAC’s bank. These bank supervisors generally use the same framework to examine banks for safety and soundness and compliance with applicable laws and regulations. As described above, they examine most aspects of a bank’s financial condition, including its management.

Finally, FHFA was created in 2008 to oversee the housing enterprises, Fannie Mae and Freddie Mac. It replaced the Office of Federal Housing Enterprise Oversight and the Federal Housing Finance Board, and the Department of Housing and Urban Development’s mission authority was transferred to FHFA. The Enterprises are chartered by Congress as for-profit, shareholder-owned corporations, currently under federal conservatorship. Using a risk-based supervisory approach, FHFA examines the Enterprises, including their corporate governance and financial condition.

The federal government’s equity interest was acquired in a variety of ways and resulted from assistance aimed at stabilizing markets or market segments. Moreover, the government’s equity interest in the companies varies from company to company—ranging from preferred shares to common shares. In some cases, the government acquired an equity interest when it cancelled outstanding loans in exchange for common shares of the debtor. As of June 1, 2010, the government held an equity ownership interest in the form of preferred or common shares in the five major corporations—AIG, Chrysler, Citigroup, GM, and GMAC—and the Enterprises.
As shown in figure 1, the government holds the largest percentage of common stock in GM; it also holds significant common stock in GMAC and smaller percentages in Citigroup and Chrysler. It holds significant amounts of preferred shares, convertible preferred shares, or warrants for common shares in AIG and the Enterprises as a result of the assistance provided.

Treasury provided funds to Bank of America and the Enterprises in exchange for preferred stock with no voting rights except in limited circumstances, giving the federal government an equity interest in these companies. Specifically, the government’s $45 billion investment in Bank of America—which participated in CPP and TIP—gave Treasury ownership of nonvoting preferred shares in the company. Bank of America received $25 billion in CPP funds and $20 billion in TIP funds. The transactions were consummated pursuant to a securities purchase agreement, and the terms of the preferred shares acquired by Treasury included the right to payment of fixed dividends and no voting rights except in limited circumstances. On December 9, 2009, Bank of America repurchased all of its preferred shares previously issued to Treasury, ending the company’s participation in TARP. The company, as required, also paid over $2.7 billion in dividends to Treasury. On March 3, 2010, Treasury auctioned its Bank of America warrants for $1.54 billion.

On September 6, 2008, when FHFA placed the Enterprises into conservatorships, Treasury provided financial assistance in consideration of an equity interest. Under the transaction agreements, the Enterprises immediately issued to Treasury an aggregate of $1 billion of senior preferred stock and warrants to purchase common stock. The warrants allow Treasury to buy up to 79.9 percent of each entity’s common stock, can be exercised at any time, and are intended to help the government recover some of its investment if the Enterprises become financially viable. Under the terms of the preferred shares, Treasury is to receive dividends on the Enterprises’ senior preferred shares at 10 percent per year and, beginning March 31, 2010, quarterly commitment fees from the Enterprises, which have not yet been implemented. Further, the preferred share terms include restrictions on the Enterprises’ authority to pay dividends on junior classes of equity, issue new stock, or dispose of assets. At the end of the first quarter of 2010, Treasury had purchased approximately $61.3 billion in Freddie Mac preferred stock and $83.6 billion in Fannie Mae preferred stock to cover losses. Because of the continued deterioration of the Enterprises’ financial condition, the amount of government assistance to them is likely to increase. The government’s most substantive role is as conservator of the Enterprises, which is discussed later.

Treasury has provided funds and other financial assistance to Citigroup, GMAC, GM, and Chrysler in exchange for common shares with voting rights, giving the federal government an equity stake in these companies. For Citigroup and GMAC, the common stock strengthened their capital structures, because the markets view common equity more favorably than preferred shares. Initially, Treasury invested $25 billion in Citigroup under CPP and an additional $20 billion under TIP. Treasury also entered into a loss sharing arrangement with Citigroup on approximately $301 billion of assets under AGP, under which Treasury assumed $5 billion of exposure following Citigroup’s first losses of $39.5 billion.
In exchange for this assistance, Treasury received cumulative nonvoting preferred shares and warrants to purchase common shares. FDIC also received nonvoting preferred stock for its role in AGP. Citigroup subsequently requested that Treasury exchange a portion of the preferred shares held by Treasury for common shares to facilitate an exchange of privately held preferred shares for common shares. Taken together, the Treasury and private exchanges improved the quality of Citigroup’s capital base and thereby strengthened its financial position. From July 2009 to September 2009, Treasury exchanged its preferred shares in Citigroup for a combination of shares of common stock and trust preferred shares, giving the government a 33.6 percent ownership interest in Citigroup. Treasury now has voting rights by virtue of its common stock ownership. On December 23, 2009, Citigroup repurchased $20 billion of trust preferred shares issued to Treasury, and the Federal Reserve, FDIC, and Treasury terminated the AGP agreement. FDIC and Treasury, collectively, kept approximately $5.3 billion in trust preferred shares, including the warrants that were associated with this assistance, as payment for the asset protection provided under AGP. As of May 26, 2010, Treasury still owned almost 6.2 billion shares, or 21.4 percent, of Citigroup’s common stock, as well as warrants.

Treasury’s AIFP assistance to GMAC, a bank holding company, resulted in the government owning more than half of GMAC by the end of 2009. After GMAC received approval from the Federal Reserve to become a bank holding company in December 2008, Treasury initially purchased $5 billion of GMAC’s preferred shares and received warrants to purchase an additional $250 million in preferred shares. Treasury exercised those warrants immediately. At the same time, Treasury also agreed to lend up to $1 billion of TARP funds to GM (one of GMAC’s owners) to enable GM to purchase additional equity in GMAC. On January 16, 2009, GM borrowed $884 million under that commitment to purchase an additional interest in GMAC. Treasury terminated the loan on May 29, 2009, by exercising its option to exchange amounts due under that loan for an equity interest in GMAC.

The Federal Reserve required GMAC to raise additional capital by November 2009 in connection with SCAP. On May 21, 2009, Treasury purchased $7.5 billion of mandatory convertible preferred shares from GMAC and received warrants that Treasury exercised at closing for an additional $375 million in mandatory convertible preferred shares, which enabled GMAC to partially meet the SCAP requirements. On May 29, 2009, Treasury exercised its option to exchange its right to payment of the $884 million loan it had made to GM for 35.4 percent of the common membership interests in GMAC. Treasury officials told us that exercising the option prevented the loan from becoming part of the GM bankruptcy process and, therefore, was a measure intended to protect Treasury’s investment. According to the Federal Reserve, exercising the option strengthened GMAC’s capital structure. In November 2009, the Federal Reserve announced that GMAC had not satisfied the SCAP requirements because it was unable to raise additional capital in the private market and was expected to meet its SCAP requirement by accessing the AIFP.
On December 30, 2009, Treasury purchased an additional $1.25 billion of mandatory convertible preferred shares and received warrants that Treasury exercised at closing for an additional $62.5 million in mandatory convertible preferred shares; it further purchased $2.54 billion in GMAC trust preferred securities and received warrants that Treasury exercised at closing for an additional $127 million in GMAC trust preferred securities. All of these were investments under the AIFP. Also in December 2009, Treasury converted $3 billion of existing mandatory convertible preferred shares into common stock, increasing its equity stake from 35 percent to 56.3 percent of GMAC common stock. As of March 31, 2010, Treasury owned $11.4 billion of GMAC mandatory convertible preferred shares and almost $2.7 billion of its trust preferred securities.

Treasury’s equity stake in GM and Chrysler was an outgrowth of the $62 billion it loaned to the companies under AIFP before the companies filed for bankruptcy in June and April 2009, respectively. Through the bankruptcy process, these loans were restructured into a combination of debt and equity ownership in the new companies. As a result, Treasury owns 60.8 percent of the common equity and holds $2.1 billion in preferred stock in “new GM.” Also, Treasury owns 9.9 percent of common equity in the “new” Chrysler. As a common shareholder, Treasury has voting rights in both companies.

The Federal Reserve and Treasury provided funds to AIG under a series of transactions that ultimately resulted in the federal government owning preferred stock and a warrant to purchase common stock. While the Federal Reserve is not AIG’s regulator or supervisor, the Federal Reserve Bank of New York (FRBNY) assisted AIG by using its emergency authority under Section 13(3) of the Federal Reserve Act to support the government’s efforts to stabilize systemically significant financial institutions. In the fall of 2008, the Federal Reserve approved assistance to AIG by authorizing FRBNY to create a facility to lend AIG up to $85 billion to address its liquidity needs. As part of this agreement, AIG agreed to issue convertible preferred stock to a trust to be created on behalf of the U.S. Treasury (the AIG Credit Facility Trust). This was achieved through the establishment of an independent trust to manage the U.S. Treasury’s beneficial interest in Series C preferred shares that, as of April 2010, were convertible into approximately 79.9 percent of the common stock of AIG that would be outstanding after the conversion of the Series C preferred shares in full. While the Series C preferred shares initially represented 79.9 percent of the voting rights, after Treasury’s November 2009 TARP investment, the Series C voting rights were reduced to 77.9 percent to account for the warrant to purchase 2 percent of the common shares that Treasury received in connection with that TARP investment. A June 2009 20-to-1 reverse stock split adjusted the exercise price and number of shares associated with the Treasury warrant, making the warrants held by Treasury convertible into 0.1 percent of common equity. Part of the outstanding debt was restructured when, as noted above, Treasury agreed to purchase $40 billion of cumulative perpetual preferred stock (Series D) and received a warrant under TARP. The proceeds were used to reduce the debt owed to FRBNY by $40 billion.
To address rating agencies’ concerns about AIG’s debt-equity ratios, FRBNY and Treasury further restructured AIG’s assistance in April 2009. Treasury exchanged its outstanding cumulative perpetual preferred stock (Series D) for perpetual preferred stock (Series E), which is noncumulative and thus more closely resembles common equity than does the Series D preferred stock. Treasury has also provided a contingent $29.8 billion Equity Capital Facility to AIG, whereby AIG issued to Treasury 300,000 shares of fixed-rate, noncumulative perpetual preferred stock (Series F). As AIG draws on the contingent capital facility, the liquidation preference of those shares automatically increases by the amount drawn. AIG also issued to Treasury a warrant to purchase up to 3,000 shares of AIG common stock. As of March 2010, the government has a beneficial interest in the Series C preferred shares held by the AIG trust, which are convertible into approximately 79.8 percent of AIG’s common shares, and the trustees have voting rights with respect to the Series C preferred shares.

The government decided early on that in managing its ownership interest in private companies receiving exceptional TARP assistance, it would set certain up-front conditions in order to protect taxpayers, promote financial stability, and encourage growth. As noted in a recent SIGTARP report, these conditions include limits on or changes to the companies’ governance structures (such as boards of directors, senior management, executive compensation plans, lobbying and expense policies, dividend distributions, and internal controls) and the submission of compliance reports. Treasury also decided early on that it would not interfere with the daily business of the companies that received exceptional assistance—that is, it would not be running these companies. However, the level of its involvement in the companies has varied depending on the role it has assumed—investor, creditor, or conservator—as a result of the assistance it has provided.

Both Treasury and the federal regulators directed that strong boards of directors and qualified senior management be in place to guide the companies’ operations. Treasury designated new directors and requested that some senior executives step down from their positions at some of the companies. Using its authority as conservator, FHFA appointed new members to the boards and senior management of the Enterprises. The federal regulators requested reviews of the qualifications of senior management at two of the companies.

A significant number of new directors have been elected to the governing boards of all companies that received federal assistance. Of the 92 directors currently serving on these boards, 73 were elected since November 2008 (table 2). The board of Chrysler, for instance, is made up entirely of new members, and more than half of the current board members of the other companies were designated after the government provided assistance. Many of these new directors were nominated to their respective boards because it was determined that a change in leadership was required as a result of the financial crisis, while others were designated by the government and other significant shareholders as a result of their common share ownership. In addition, federal regulators also asked the boards of directors at two of the companies to assess their oversight and evaluate management depth.
The assessments were submitted to the regulators, and the boards of directors subsequently made changes to their composition. The terms of Treasury’s agreements with AIG and Bank of America require the expansion of a company’s board of directors if the company fails to pay the dividends owed to Treasury for several quarters. Treasury would then have the right to designate the directors to be elected to fill the newly created vacancies on the board. While Bank of America made the required dividend payments prior to exiting TARP, AIG did not pay its required dividends. As a result, Treasury designated two new directors for election to AIG’s board on April 1, 2010. They were subsequently re-elected at the May 12, 2010, annual shareholders meeting. The trust agreement between FRBNY and the AIG trustees also provides the trustees with authority to vote the shares held in trust to elect or remove the directors of the company. In cooperation with AIG’s board, the AIG trustees were actively involved in the recruitment of six new directors who have experience in corporate restructuring, retail branding, or financial services; the trustees believe that these new members will help see AIG through its financial challenges. The board, in turn, has elected two additional members to replace departing board members. The trustees stated that they kept FRBNY and Treasury officials apprised of the recruitment efforts.

Treasury’s common equity investment in Citigroup, GM, Chrysler, and GMAC also gives it voting rights on the election or removal of the directors of these governing boards, among other matters. In addition, the agreements with GM, Chrysler, and GMAC specifically authorize Treasury to designate directors to these companies’ boards. As authorized in a July 10, 2009, shareholder agreement with GM, Treasury, as the majority shareholder, designated 10 directors who were elected to GM’s board, 5 of whom were former directors of “old GM.” Based on the smaller number of common shares they owned in the company, two other GM shareholders—Canada GEN Investment Corporation (owned by the Canadian government) and a Voluntary Employee Beneficiary Association composed of GM’s union retirees—each designated one director. As authorized in a June 10, 2009, operating agreement with Chrysler, Treasury designated three of nine directors, who in turn collectively elected an additional member to the board. Chrysler’s other shareholders designated the other five board members, for a total of nine directors: Chrysler’s Voluntary Employee Benefit Association appointed one director, Fiat appointed three directors, and the Canadian government appointed one director. Under the operating agreement, the number of directors that Fiat has the right to designate increases as its ownership in Chrysler increases, with a concomitant decrease in the number of directors designated by Treasury. As authorized in a May 21, 2009, governance agreement with GMAC, Treasury appointed two new directors to the board because it held 35 percent of the company’s common stock. With the conversion of $3 billion in mandatory convertible preferred shares of GMAC on December 30, 2009, Treasury’s common ownership interest increased to 56.3 percent, authorizing it to appoint two more directors. On May 26, 2010, Treasury appointed a new director to GMAC (Ally Financial Inc., formerly GMAC Financial Services). The fourth director appointment is pending. As conservator of the Enterprises, FHFA has appointed new members to their boards of directors.
The Director of FHFA has statutory authority under HERA to appoint members of the boards of directors for the Enterprises based on certain criteria. FHFA’s former director, at the onset of the conservatorships, decided to keep three preconservatorship board members at each Enterprise in order to provide continuity and chose the remaining directors for each board. Initially, on September 16, 2008, FHFA’s former director appointed Philip A. Laskawy and John A. Koskinen to serve as new nonexecutive chairmen of the boards of directors of the Enterprises. On November 24, 2008, FHFA reconstituted the boards of directors for the Enterprises and directed their functions and authorities. FHFA’s delegation of authority to the directors became effective on December 18-19, 2008, when new board members were appointed by FHFA. The directors exercise authority and serve on behalf of the conservator, FHFA. The conservator retains the authority to withdraw its delegations to the board and to management at any time.

In addition to changes in the boards of directors, the companies receiving exceptional assistance have also made a few changes to their senior management (table 3). Some of these decisions were made by the companies’ boards of directors without consultation with Treasury or federal regulators. Specifically, Bank of America, Citigroup, and GMAC executives stated that the decisions to replace their chief executive officer (CEO) or chief financial officer (CFO) were made by the companies’ boards of directors without influence from Treasury or federal regulators. However, federal regulators had directed the banks to assess their senior management’s qualifications. After receiving government assistance, Bank of America’s shareholders approved an amendment to the corporation’s bylaws prohibiting any person from concurrently serving as both the company’s chairman of the board and CEO. As a result, the shareholders elected Walter Massey to replace Kenneth Lewis as chairman of the board in April 2009. Citigroup’s board of directors also appointed a new CFO in March 2009 and again in July 2009.

The AIG trustees stated that they and the Treasury officials monitoring AIG’s investments were kept apprised of the selection of Robert Benmosche as the new CEO in August 2009. Benmosche replaced Edward Liddy, who had been put in place as AIG’s CEO on September 18, 2008, at the request of the government to help rehabilitate the company and repay taxpayer funds. Meeting minutes provided by the AIG trustees show that the trustees and FRBNY and Treasury officials discussed the CEO search process as it was occurring. The trustees and Treasury officials also met with Benmosche before he was elected as AIG’s new CEO. According to the trustees, they encouraged the AIG board to select the most qualified CEO, but the final decision to elect Benmosche rested with AIG’s board of directors.

GM’s selection of new senior managers during the restructuring process was directly influenced by Treasury. For example, in March 2009, Treasury’s Auto Team requested that Rick Wagoner, GM’s CEO at the time, be replaced by Frederick “Fritz” Henderson, then the GM president. According to a senior Treasury official, the Auto Team had determined that the senior leadership in place at that time was resistant to change. But rather than appointing an individual from outside GM to serve as CEO, the team asked Fritz Henderson to serve as the CEO to provide some continuity in the management team.
Henderson resigned on December 1, 2009, but the same Treasury official said that the Auto Team did not request his removal. The GM board of directors named Ed Whitacre to replace Henderson. After the partnership between Chrysler and Fiat was completed, Sergio Marchionne (CEO of Fiat) was elected as Chrysler’s new CEO on June 10, 2009. Subsequent to his election, all changes to Chrysler’s senior management were made by the new company leadership without Treasury’s involvement.

As conservator, the FHFA director has the authority to appoint senior-level executives at both Enterprises. On September 7, 2008, FHFA’s former director appointed Herbert M. Allison, Jr. as President and CEO of Fannie Mae and David M. Moffett as President and CEO of Freddie Mac. Michael Williams was promoted from his position as Chief Operating Officer to CEO of Fannie Mae, replacing Herbert M. Allison, Jr., who became Treasury’s Assistant Secretary for Financial Stability. On March 11, 2009, FHFA appointed John A. Koskinen as Freddie Mac’s interim CEO, and on July 21, 2009, Charles Haldeman was appointed CEO of Freddie Mac.

As a condition of receiving assistance under TARP, recipients must adhere to the executive compensation and other requirements established under EESA and under Treasury regulations (see table 4). In addition, Treasury’s agreements with these companies included provisions requiring the companies to adopt or maintain policies regarding expenses and lobbying, report to Treasury on internal controls, certify their compliance with agreement terms, restrict the amount of executive compensation deductible for tax purposes, and limit dividend payments, among others. In prior reports, GAO and SIGTARP reviewed Treasury’s efforts to ascertain the companies’ compliance with the key requirements of financial assistance programs, such as CPP. GAO had recommended that Treasury develop a process to ensure that companies participating in CPP comply with all the CPP requirements, including those associated with limitations on dividends and stock repurchase restrictions. Over time, Treasury addressed these issues and established a structure to better ensure compliance with the agreements.

Companies must adhere to the executive compensation and corporate governance rules as a condition for receiving TARP assistance. Treasury created the Office of the Special Master to, among other things, review compensation payments and structures for certain senior executive officers and the most highly compensated employees at each company receiving exceptional TARP assistance. The Special Master is charged with determining whether these payments and structures are inconsistent with the purposes of the EESA executive compensation provisions or TARP or are otherwise contrary to the public interest. On October 22, 2009, the Special Master issued his first determinations with respect to compensation structures and payments for the “top 25” employees of companies receiving exceptional TARP assistance.
In reviewing the payment proposals the companies submitted for 2009, the Special Master noted that the companies in some cases (1) requested excessive cash salaries, (2) proposed issuance of stock that was immediately redeemable, (3) did not sufficiently tie compensation to performance-based benchmarks, (4) did not sufficiently restrict or limit financial “perks” or curb excessive severance and executive retirement benefits, and (5) did not make sufficient efforts to fold guaranteed compensation contracts into performance-based compensation. As a result, he rejected most of these initial proposals and approved a modified set of compensation structures and payments.

For the 2009 top 25 compensation structures and payments, table 5 shows that the Special Master required that AIG, Bank of America, and Citigroup reduce cash compensation for their top executives by more than 90 percent from the previous year. Although Bank of America repurchased its preferred shares on December 9, 2009, it agreed to remain subject to the Special Master’s determination for its top 25 employees for 2009. Similarly, Citigroup repurchased its TIP trust preferred shares on December 23, 2009, but also agreed to abide by all determinations that had been issued for 2009, including the Special Master’s requirement that Citigroup reduce its cash compensation by $244.9 million, or 96.4 percent from 2008. While Citigroup had the largest percentage cash reduction, GMAC had the largest overall reduction in total direct compensation (both cash and stock): GMAC was required to reduce its total direct compensation by $413.3 million, or more than 85 percent of 2008 levels. Table 5 also shows that the Special Master approved a compensation structure for the most highly compensated executive at AIG that provides up to $10.5 million in total direct compensation on an annual basis.

On December 11, 2009, the Special Master released his second round of determinations on executive compensation packages for companies that received exceptional TARP assistance. These determinations covered compensation structures for the “next 75” most highly compensated employees, including executive officers who were not subject to the October 22, 2009, decisions. Unlike the determination for the top 25 employees, which addressed the specific amounts paid to individuals, the Special Master was required only to approve the compensation structure for this second group of employees. The determination covered four companies: AIG, Citigroup, GMAC, and GM. The Special Master again rejected most of the submitted proposals and required that they be modified to include the following features:

- Cash salaries generally no greater than $500,000, except in exceptional cases, as specifically certified by the company’s independent compensation committee.
- Limits on cash compensation in most cases to 45 percent of total compensation, with all other pay in company stock in order to align executives’ interests with long-term value creation and financial stability.
- In most cases, at least 50 percent of each executive’s pay held or deferred for at least 3 years, aligning the pay each executive actually receives with the long-term value of the company.
- Payment of incentives only if the executive achieves objective performance targets, set by the company and reviewed by the Special Master, that align the executives’ interests with those of shareholders and taxpayers.
- Limits on total incentives for all covered executives to an aggregate fixed pool that is based on a specified percentage of eligible earnings or other metrics determined by the compensation committee and reviewed by the Special Master.
- A “clawback” provision covering incentive payments to covered executives that will take effect if the achievements on which the payments are based do not hold up in the long term or if an executive engages in misconduct.

(These structural requirements are illustrated in a short sketch at the end of this discussion.)

On March 23, 2010, the Special Master released his determinations of compensation structures and payments for 2010 for the top 25 employees at the five remaining firms that received exceptional TARP assistance from taxpayers: AIG, Chrysler, Chrysler Financial, GM, and GMAC. Examples of his determinations include a 63 percent decrease in cash compensation from 2009 levels for AIG executives, a 45 percent decrease for GMAC executives, and a 7.5 percent decrease for GM executives. Chrysler’s 2010 cash salary rates for its executives remained at the same level as in 2009. As in the determination for 2009, the Special Master approved an annual compensation structure for AIG’s highest compensated executive that provides up to $10.5 million in total direct compensation. Overall, the 2010 determinations included the following significant changes:

- On average, a 33 percent decrease in overall cash payments from 2009 levels for affected executives.
- On average, a 15 percent decrease in total compensation from 2009 levels for affected executives.
- Cash salaries frozen at $500,000 or less, unless good cause is shown. Eighteen percent of the executives subject to the March 2010 determinations (21 employees) were approved for cash salary rates greater than $500,000.

HERA provides the Director of FHFA, in a conservatorship, the authority to establish executive compensation parameters for both Enterprises. On December 24, 2009, the FHFA director approved the Fannie Mae and Freddie Mac 2010 compensation packages. The compensation package for each chief executive officer was established at $6 million, with each package consisting of a base pay amount of $900,000, deferred pay of $3.1 million, and long-term incentive pay of $2 million. Twelve other Fannie Mae executives and 14 other Freddie Mac executives are covered by the same system but will receive lesser amounts. The deferred pay will be paid quarterly in 2011 to executives still at the Enterprises, and half will vary based on corporate performance. The long-term incentive pay will vary according to individual and corporate performance. Pursuant to the preferred stock purchase agreements, FHFA consulted with the Special Master for TARP Executive Compensation with regard to the 2010 compensation packages. Compensation of the executives at the Enterprises is provided entirely in the form of cash payments. According to the Special Master and the FHFA Acting Director, compensation in the form of stock was viewed as ineffective because of the questionable value of the shares and the potential incentives stock compensation might create to take excessive risk in hopes of making the stock valuable.

In addition to executive compensation, Treasury also placed requirements on other business activities, including expense and luxury expenditures, lobbying, dividends and stock repurchases, and internal controls and compliance.
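Because the structural features the Special Master required for the "next 75" packages are mechanical thresholds, they can be expressed as a simple validation routine. Below is a minimal sketch in Python of such a check; the function, field names, and sample package are hypothetical illustrations of the thresholds described above, not an actual Treasury tool or any company's proposal.

    # Sketch of the structural tests in the "next 75" determinations described
    # above. The function and the sample package are hypothetical.
    def check_package(cash_salary, total_cash, total_comp, deferred,
                      incentives_tied_to_targets, has_clawback):
        problems = []
        if cash_salary > 500_000:
            problems.append("cash salary above $500,000 (requires certification)")
        if total_cash > 0.45 * total_comp:
            problems.append("cash exceeds 45 percent of total compensation")
        if deferred < 0.50 * total_comp:
            problems.append("less than 50 percent held or deferred for 3 years")
        if not incentives_tied_to_targets:
            problems.append("incentives not tied to objective performance targets")
        if not has_clawback:
            problems.append("no clawback provision")
        return problems

    # Example: $450,000 salary, $900,000 total cash, $2,000,000 total
    # compensation, $1,100,000 deferred for at least 3 years.
    print(check_package(450_000, 900_000, 2_000_000, 1_100_000, True, True))
    # -> [] (the package passes every structural test in this sketch)

Note that the sketch treats the 45 percent and 50 percent figures as hard limits, whereas the determinations applied them "in most cases" and allowed certified exceptions.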
For example, with respect to those business activities, companies receiving exceptional assistance are required to implement and maintain an expense policy that covers the use of corporate aircraft, lease or acquisition of real estate, expenses related to office or facility renovations or relocations, expenses related to entertainment and holiday parties, hosting and sponsorship of conferences and events, travel accommodations and expenditures, and third-party consultations, among others. They are also required to implement and maintain a lobbying policy that covers lobbying of U.S. government officials, governmental ethics, and political activity. Furthermore, until Treasury no longer owns company debt or equity securities (e.g., common, preferred, and trust preferred stock), the companies may not declare or pay any dividends; make any distribution on the company’s common stock; or redeem, purchase, or acquire any of the company’s equity securities. They are also prohibited from redeeming or repurchasing any preferred or trust preferred stock from any holder unless the company offers to repurchase a ratable portion of the preferred shares then held by Treasury on the same terms and conditions, with limited exceptions. Lastly, the companies agreed to establish appropriate internal controls with respect to compliance with each of the requirements in the agreements. They are required to report to Treasury on a quarterly basis regarding the implementation of those controls and their compliance with the requirements (including any instances of noncompliance) and to provide signed certifications from a senior officer attesting that, to the best of his or her knowledge, such reports are accurate.

Treasury states that it does not interfere with or exert control over certain activities of companies that received exceptional assistance. Nevertheless, SIGTARP and GAO found that the level of government involvement in the companies varied among the recipients, depending on whether Treasury and other federal entities are investors, creditors, or conservators. For example, Treasury’s involvement in Bank of America, Citigroup, and GMAC has been limited because, in exchange for its investments, Treasury—as an investor—initially received preferred shares that did not have voting rights except in certain limited circumstances, such as amendments to the company charter, in the case of certain mergers, and the election of directors to the companies’ boards in the event that dividends are not paid for several quarters. As of April 30, 2010, Treasury still held an ownership interest in Citigroup because of the June 9, 2009, agreement that exchanged Treasury’s preferred shares for common shares. Treasury’s initial investment in GMAC also came in the form of preferred shares with limited voting rights. As an up-front condition of its May 2009 investments in Chrysler and GMAC, Treasury played a central role in establishing the agreement reached between GMAC and Chrysler in April 2009 that made retail and wholesale financing available to Chrysler’s dealer network. Specifically, Treasury provided GMAC with $7.5 billion on May 21, 2009, of which $4 billion was to be used to support Chrysler’s dealers and consumers. According to Treasury officials, this agreement was part of the initial restructuring of the companies that was done under the auspices of the bankruptcy court, a situation quite different from the Bank of America and Citigroup investments.
Senior executive officers at Bank of America, Citigroup, and GMAC agreed that Treasury was not involved in the daily operations of their companies, but they noted that the federal regulators—the Federal Reserve, FDIC, and OCC—had increased and intensified their bank examinations. The executives explained that the closer scrutiny was the result of the financial crisis and was not directly tied to TARP assistance. GMAC’s senior officers further explained that the Federal Reserve’s involvement with their company had been due, in part, to its obtaining bank holding company status upon conversion of Ally Bank (formerly known as GMAC Bank) from an industrial loan company to a commercial bank. As a result of the conversion, GMAC has had to work closely with the Federal Reserve to establish policies, procedures, and risk management practices that meet the regulatory requirements of a bank holding company.

As both an investor in and creditor of AIG, GM, and Chrysler, the government has been more involved in some aspects of these companies’ operations than it has been with the other companies. Treasury, FRBNY, and the AIG trustees closely interact with senior management to discuss restructuring efforts, liquidity, capital structure, asset sales, staffing concerns, management quality, and overall strategic plans for the company. Members of Treasury’s AIG team meet regularly with AIG management, attend board committee meetings, and provide input on decisions that affect the direction of the company. Similarly, FRBNY (as creditor) also attends board meetings as an observer, and FRBNY and the AIG trustees (as overseers of the AIG Trust) receive various AIG financial reports, review the quality of senior management, and provide their opinions on company strategy and major business decisions.

Treasury officials continue to monitor GM’s and Chrysler’s financial strength through monthly and quarterly financial, managerial, and operations-related reports and regular meetings with senior management, but they stated that they do not micromanage the companies. However, the government’s stated “hands-off” approach toward managing its equity interest applied only after GM and Chrysler exited bankruptcy. In the period before and during the bankruptcies, Treasury played a significant role in the companies’ overall restructuring and certain overarching business decisions. For example, Treasury issued viability determinations in which it stated that GM needed to decrease its number of brands and nameplates and Chrysler needed to improve the quality of its vehicles. Treasury’s credit agreements with the automakers established additional requirements for the companies. For example, the companies are required to maintain their domestic production at certain levels, abstain from acquiring or leasing private passenger aircraft, and provide quarterly reports on internal controls. Treasury officials pointed out that another reason for the differences is that AIG, GM, and Chrysler are not subject to the extensive federal regulation that Bank of America, Citigroup, and GMAC, as bank holding companies, face. Moreover, officials believe that the path to exiting the investments in AIG, GM, Chrysler, and GMAC is more complex than in the case of Bank of America and Citigroup. Under HERA, FHFA has broad authority over the Enterprises’ operations while they are in conservatorship.
The law authorizes FHFA, as conservator, to appoint members of the boards of directors for both Enterprises; prescribe appropriate regulations regarding the conduct of conservatorships; immediately succeed to all powers, privileges, and assets of the regulated Enterprises; provide for the exercise of any functions of any stockholder, officer, or director of the entity; and take any actions that may be necessary to put the entity into a solvent and operationally sound state and conserve and preserve the assets of the entity. According to FHFA officials, the agency has generally delegated significant day-to-day responsibility for running the Enterprises to the management teams that the agency has put in place, for two reasons: first, FHFA has limited staff resources; second, the Enterprises are better positioned, with the expertise and infrastructure necessary, to carry out daily business activities, such as the routine purchases of mortgages from lenders and securitization of such loans. At the same time, FHFA maintains its full-time examination and supervisory programs for the Enterprises. However, FHFA, as the Enterprises’ conservator and regulator, has instituted a number of requirements, policies, and practices that involve it in the Enterprises’ operations. For example:

Lobbying activities at both Enterprises have been dismantled and prohibited, and FHFA directly reviews all the Enterprises’ responses to members of Congress.

Officials from FHFA’s Office of Conservatorship Operations attend the board meetings and senior executive meetings at both of the Enterprises.

FHFA reviews and approves performance measures for both of the Enterprises. Each Enterprise has developed scorecards with criteria that focus on safety and soundness issues while also addressing loan modification goals.

FHFA reviews SEC filings for both of the Enterprises to confirm that it has no objections.

The Division of Enterprise Regulation within FHFA was established by a statutory mandate within HERA to examine all functions of the Enterprises, with the exception of those explicit accounting examinations that are handled by the Office of the Chief Accountant. FHFA and Treasury work closely with the Enterprises to implement a variety of programs that respond to the dramatic downturn in housing finance markets. FHFA monitors the Enterprises’ implementation of Treasury’s Home Affordable Modification Program (HAMP). The Enterprises are acting as Treasury’s agents in implementing the program and ensuring that loan servicers comply with program requirements, with Fannie Mae as the program’s administrator and Freddie Mac as Treasury’s compliance agent for the program. FHFA has also provided advice and resources to Treasury in designing the Making Home Affordable Program. FHFA and Treasury stay in contact with the Enterprises on a daily basis about HAMP. Executives from FHFA meet with executives of both of the Enterprises on a weekly basis, and Treasury executives meet with the Enterprises’ leadership monthly. As a shareholder with respect to TARP recipients, the government has taken a variety of steps to monitor its investments in each company receiving exceptional assistance, while at the same time considering potential exit strategies. First, Treasury developed a set of guiding principles that outline its approach for monitoring investments in the companies. Second, OFS has hired asset managers to help monitor its investments in certain institutions, namely Citigroup and Bank of America.
Third, Treasury’s Auto Team (or other Treasury investment professionals) manages investments in GM, Chrysler, and GMAC made under AIFP. Fourth, the Federal Reserve and FRBNY collaborate with Treasury in monitoring the Federal Reserve’s outstanding loan to and the government’s equity investments in AIG. Finally, because Treasury’s ownership in the Enterprises is not part of TARP, staff outside of OFS is responsible for monitoring these investments. Given the varied forms of ownership interest and the complexity of many of the investments, Treasury will likely have to develop a unique exit strategy for each company. The divestment process, however, is heavily dependent on company management successfully implementing strategies discussed with their regulators and Treasury. Further, external factors, such as investors demand for purchasing securities of these companies receiving exceptional assistance and broader market conditions, must be considered when implementing exit strategies. Because most of the shares are expected to either be sold in a public offering or be redeemed or repaid using funds raised in the public markets, the financial markets must be receptive to government efforts. A public offering of shares, such as those considered for AIG subsidiaries American International Assurance Company, Ltd and American Life Insurance Company emphasizes the importance of market demand. Congressional action will be needed to determine the long-term structures and exit strategies for the Enterprises. Treasury has stated that it is a reluctant shareholder in the private companies it has assisted and that it wants to divest itself of its interests as soon as is practicable. In managing these assets, Treasury has developed the following guiding principles. Protect taxpayer investment and maximize overall investment returns within competing constraints. Promote the stability of financial markets and the economy by preventing disruptions. Bolster markets’ confidence to increase private capital investment. Dispose of the investments as soon as it is practicable and in a manner that minimizes the impact on financial markets and the economy. Treasury relied on its staff and asset managers to monitor its investments in Bank of America and Citigroup. Treasury officials said that the asset managers value the investments including the preferred securities and warrants. This valuation process includes tracking the companies’ financial condition on a daily basis using credit spreads, bond prices, and other financial market data that are publicly available. Treasury also uses a number of performance indicators, including liquidity, capital levels, profit and loss, and operating metrics to monitor their financial condition. The asset managers report regularly to Treasury and provide scores that track the overall credit quality of each company using publicly available information. For the bank holding companies, Treasury monitors the values of its investments, whereas, the Federal Reserve and other regulators monitor the financial condition of these institutions as part of their role as supervisory authorities. While federal regulators routinely monitor the financial condition of the financial institutions they supervise, this oversight is separate from the monitoring Treasury engages in as an equity investor. This supervisory monitoring is related to the regulatory authority of these agencies and not to investments made under TARP. 
For example, bank regulators had daily contact with Bank of America, Citigroup, and GMAC as they oversaw the banks’ activities, helped ensure their safety and soundness, and monitored their financial condition. This daily interaction involved discussions about the institutions’ financial condition and operations. Moreover, Federal Reserve and OCC officials said that they do not share supervisory information with Treasury, to avoid a potential conflict of interest. Rather than requiring Treasury to develop an exit strategy, Bank of America and Citigroup, with the approval of their federal banking regulators, repurchased preferred shares and trust preferred shares from Treasury in December 2009. The holding companies and their regulators share the duty of identifying the appropriate time to repay the assistance provided through Treasury’s purchase of preferred equity. The regulators leveraged their onsite examiners to provide information on the overall health of the banks and their efforts to raise capital. In September 2009, Bank of America and Citigroup initiated the process by informing the Federal Reserve that they wanted to redeem their TARP funds. Federal Reserve officials told us that, in conjunction with FDIC and OCC, they reviewed Bank of America’s and Citigroup’s capital positions and approved the requests using primarily two criteria. First, the institutions had to meet the TARP redemption requirements outlined under SCAP. Second, they had to raise at least 50 percent of the redemption amount from private capital markets. In December 2009, Bank of America and Citigroup redeemed the preferred shares and the trust preferred shares, respectively, that Treasury held. In contrast to the process of unwinding trust preferred shares, in developing a divestment strategy for the common stock held in Citigroup, Treasury and its asset manager will evaluate market conditions and time the sale in an attempt to maximize taxpayers’ return. On December 17, 2009, Treasury announced a plan to sell its Citigroup common stock over a 6- to 12-month time frame. Treasury plans to use independent investment firms to assist in an orderly sale of these shares. A recent example of the difficulties that could be encountered occurred when Treasury announced plans to sell its Citigroup common shares in December 2009 following share sales by Bank of America and Wells Fargo. Market participants said that at that time the supply of bank shares in the market exceeded demand, which lowered prices. Selling the Citigroup shares in that market environment would have recouped less money for the taxpayers, so Treasury postponed the proposed sales. In March 2010, Treasury announced that it had hired Morgan Stanley as its sales agent to sell its shares under a prearranged written trading plan. In April 2010, Treasury further announced that Citigroup had filed the necessary documents with SEC covering Treasury’s planned sale. According to Treasury’s press release, it began selling common shares in the market in an orderly fashion under a prearranged written trading plan with Morgan Stanley. Initially, Treasury provided Morgan Stanley with discretionary authority to sell up to 1.5 billion shares under certain parameters outlined in the trading plan. However, Treasury said that it expects to provide Morgan Stanley with authority to sell additional shares beyond this initial amount.
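The mechanics of such a prearranged written trading plan can be sketched briefly. The following Python fragment sizes a single day’s sale under a volume-participation cap and a price floor; the cap, floor, and market data here are hypothetical, since the actual parameters of Treasury’s plan with Morgan Stanley were not disclosed.

    def planned_daily_sale(avg_daily_volume: int, market_price: float,
                           remaining_shares: int,
                           participation_cap: float = 0.10,
                           price_floor: float = 3.50) -> int:
        """Return the number of shares the plan would sell today."""
        if market_price < price_floor:
            return 0  # below the floor, the plan pauses rather than depress the price
        cap = int(avg_daily_volume * participation_cap)  # avoid dominating the day's trading
        return min(cap, remaining_shares)

    # Example: 500 million shares trade on an average day; 1.5 billion shares remain.
    print(planned_daily_sale(500_000_000, 4.10, 1_500_000_000))  # prints 50000000

Spreading sales across many sessions in this way is what allows a large holder to divest in an orderly fashion without flooding the market, the difficulty Treasury encountered in December 2009.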
According to Treasury officials, Morgan Stanley is providing ongoing advice and ideas to Treasury regarding the disposition in order to assist Treasury in meeting its objectives. To manage its debt and equity investment in the automotive companies that received assistance and determine when and how to exit, Treasury monitors industry and broader economic data, as well as company-specific financial metrics. The information is important both for Treasury’s management of its equity in the companies and the repayment of the companies’ term loans, because it enables Treasury to determine how receptive the market will be to an equity sale—which affects the price at which Treasury can sell—and how likely it is that the companies will have sufficient liquidity to repay the loans. While the companies in the other categories discussed in this section also rely on the economic well-being of the country, consumer purchases of new cars are highly correlated with the health of the overall economy, making these broader measures especially relevant when discussing the automotive industry. In addition to monitoring industry and broader economic data, Treasury reviews financial, managerial, and operational information that the companies are required to provide under their credit and equity agreements with Treasury. Treasury also monitors, as needed, information beyond that delineated in these agreements, for example, updates on current events such as the sale of the Saab brand. The companies provide this information, along with the items specified in the agreements, to Treasury in monthly reporting packages. Treasury officials said that they reviewed and analyzed the reports they received to identify issues, such as actual market share that lagged behind projected market share, excess inventory, or other signs that business might be declining. While Treasury has maintained that it will not direct the companies to take specific actions, it does notify the companies’ management and the Secretary of the Treasury if it sees any cause for concern in the financial reports, such as actual market share lagging behind projected market share. In addition to reviewing financial information, Treasury officials meet quarterly in person with the companies’ top management to discuss the companies’ progress against their own projections and Treasury’s projections. Important findings that result from the review of financial reports or management meetings are conveyed to key staff in OFS and other Treasury offices with responsibilities for managing TARP investments. This level of access was the result of the various legal and other agreements with the companies. Treasury will determine when and how to divest itself of its equity stake in GM, Chrysler, and GMAC. Treasury officials said that they would consider indicators such as profitability and prospects, cash flow, market share, and market conditions to determine the optimal time and method of sale. However, these efforts are complicated by the fact that Treasury shares ownership of GM and Chrysler with the Canadian government and other third parties. Treasury has yet to announce a formal exit plan but has publicly stated that a public offering of its shares in GM is likely and, in June 2010, provided guidance on its role in the exploration of a possible initial public offering of the common stock of GM. Treasury is still considering both a public offering and a private sale of the common stock it owns in Chrysler.
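The monthly-report review described above lends itself to a simple screening step. The sketch below flags actual market share lagging projections and excess inventory; the tolerances and field names are hypothetical, not Treasury’s actual criteria.

    def review_monthly_report(actual_share: float, projected_share: float,
                              inventory_days: int,
                              share_tolerance: float = 0.005,
                              max_inventory_days: int = 75) -> list:
        """Return a list of concerns to raise with company management."""
        concerns = []
        if actual_share < projected_share - share_tolerance:
            concerns.append("market share %.1f%% lags projection %.1f%%"
                            % (actual_share * 100, projected_share * 100))
        if inventory_days > max_inventory_days:
            concerns.append("excess inventory: %d days of supply" % inventory_days)
        return concerns

    print(review_monthly_report(0.182, 0.195, 90))
    # prints ['market share 18.2% lags projection 19.5%',
    #         'excess inventory: 90 days of supply']

Flags like these would then be escalated to company management and the Secretary of the Treasury, consistent with the notification practice described above.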
The companies’ term loans—the other component of Treasury’s investment—were scheduled to be repaid by July 2015 for GM and by June 2017 for Chrysler. In April 2010, GM repaid the remaining balance on the $6.7 billion loan from Treasury. GM made this payment using funds that remained from the $30.1 billion Treasury had provided in June 2009 to assist with its restructuring. Our November 2009 report on the auto industry noted that the value of GM and Chrysler would have to grow tremendously for Treasury to approach breaking even on its investment, requiring that Treasury temper any desire to exit as quickly as possible with the need to maintain its equity stake long enough for the companies to demonstrate sufficient financial progress. This report also included three recommendations related to Treasury’s approach to managing its assets and divesting itself of its equity stake in Chrysler and GM. First, we recommended that Treasury ensure that it has the expertise needed to adequately monitor and divest the government’s investment in Chrysler and GM, and obtain needed expertise where gaps are identified. Following this recommendation, Treasury hired two additional staff to work on the Auto Team, which is composed of analysts dedicated solely to monitoring Treasury’s investments in the companies. Treasury also hired Lazard LLC in May 2010 to act as an advisor on the disposition of Treasury’s investment in GM. Second, we recommended that Treasury should report to Congress on its plans to assess and monitor the companies’ performance to help ensure that they are on track to repay their loans and to return to profitability. In response to this recommendation, Treasury stated that it already provides updates to TARP oversight bodies including the Congressional Oversight Panel and SIGTARP, concerning the status of its investments and its role in monitoring the financial condition of Chrysler and GM and that it will provide additional reports as circumstances warrant. Third, we recommended that Treasury develop criteria for evaluating the optimal method and timing for divesting the government’s ownership stake in Chrysler and GM. In response to this recommendation, Treasury stated that members of the Auto Team are experienced in selling stakes in private and public companies and are committed to maximizing taxpayer returns on Treasury’s investment. Treasury also stated that private majority shareholders typically do not reveal their long-term exit strategies in order to prevent other market participants from taking advantage of such information. However, we note that because Treasury’s stakes in the companies represent billions of taxpayer dollars, Treasury should balance the need for transparency about its approach with the need to protect certain proprietary information, the release of which could put the companies at a competitive disadvantage or negatively affect Treasury’s ability to recover the taxpayers’ investment. Moreover, Treasury could provide criteria for an exit strategy without revealing the precise strategy. Although GMAC is a bank holding company, it received assistance under AIFP. While investment in GMAC was previously managed by Treasury’s Auto Team, the investment in GMAC is currently managed by other Treasury officials. This team uses many of the same indicators that are used for bank holding companies. For instance, to monitor GMAC’s condition, the Treasury’s team views liquidity and capital levels at the company and observes management’s strategic decision making. 
Because GMAC is not publicly traded and faces challenges in its transition to a more traditional bank holding company model, Treasury is more actively involved in managing and valuing its investment in the company. As of January 27, 2010, Treasury had not decided how it would divest its GMAC preferred shares or set a time frame for the divestment. The Federal Reserve and FDIC will be involved in the approval process that would allow GMAC to exit TARP by repurchasing its preferred shares. Treasury could recover its investment in GMAC preferred shares through the same process used to exit its preferred equity investments in Citigroup and Bank of America, but other options exist. For example, Treasury could sell its preferred shares to a third party, convert its preferred shares into common equity and sell those shares, or hold the preferred shares to maturity. Throughout 2009, the company continued to experience significant losses as it attempted to follow through on its strategies as a relatively new, independent company. As we have seen, Treasury purchased $3.8 billion in preferred shares ($2.54 billion of trust preferred shares and $1.25 billion of mandatory convertible preferred shares) from GMAC on December 30, 2009, because the company could not raise capital in the private markets to meet its SCAP requirements. According to Treasury officials, for its common stock in GMAC, Treasury is continuing to explore many options to exit its investment, including an initial public offering or other alternatives. Divesting itself of GMAC’s common stock will be more difficult because the shares are not currently publicly traded. Treasury could divest its GMAC common stock through multiple methods, including making a public offering of its shares as company officials have suggested, selling the stock to a buyer or buyers through a private sale, or selling the stock back to the company as the company builds up capital. The Federal Reserve, FRBNY, and Treasury share responsibility for managing the government’s loan to and investment in AIG, but the trustees and Treasury must develop exit strategies for divesting their interests in AIG. The Federal Reserve and FRBNY have different roles with AIG than they do with the bank holding companies, because their relationship with AIG is not a supervisory one but a relationship between creditor and borrower. The Federal Reserve and FRBNY have acted to ensure that AIG maintains adequate capital levels after it suffered a severe loss of capital in 2008 that compromised its ability to sell certain businesses and maintain its primary insurance subsidiaries as viable businesses. A strengthened balance sheet, access to new capital, profitability, and lower risk levels are important in tracking AIG’s progress in returning to financial health. In order to monitor this progress, the Federal Reserve, FRBNY, and Treasury use various indicators, including liquidity, capital levels, profit and loss, and credit ratings. Although each of these entities monitors AIG independently, they share information on such indicators as cash position, liquidity, regulatory reports, and other reports as necessary. AIG is also responsible for providing periodic internal reports as specified in the FRBNY credit agreement and the Treasury securities purchase agreements.
According to the AIG trustees, in monitoring AIG, they rely on information gathered by FRBNY, Treasury, and AIG, and their respective outside consultants, to avoid, to the extent possible, redoing work that has already been done at unnecessary cost. The AIG trustees are responsible for voting the trust stock, working with AIG and its board of directors to ensure corporate governance procedures are satisfactory, and developing a divestiture plan for the sale or other disposition of the trust stock. As we have seen, government assistance to AIG was provided by or is held by FRBNY, the AIG Trust, and Treasury, which are independently responsible for developing and implementing a divestment plan and must coordinate their actions. Over time, more of the government’s credit exposure has been converted to equity, which potentially poses greater risk to the federal government. For example, Treasury purchased $40 billion of preferred shares, and the proceeds were used to pay down the balance of the FRBNY Revolving Credit Facility. More recently, in December 2009, FRBNY accepted preferred equity interests in two AIG-created special purpose vehicles that own American International Assurance Company, Ltd. and American Life Insurance Company—AIG’s leading foreign life insurance companies. In exchange, FRBNY reduced the amount AIG owed on the Revolving Credit Facility by $25 billion. Repayment of AIG’s remaining $27 billion debt will depend, in part, on the markets’ willingness to finance the company with new funds following its return to financial health. According to officials at Treasury and the Federal Reserve, AIG must repay the FRBNY credit facility before the AIG Trust can, as a practical matter, divest its equity shares. As a result, the AIG trustees said that they would begin developing an exit strategy once AIG had repaid its debt to FRBNY, which is due no later than September 13, 2013. According to the AIG trustees and Treasury officials, while Treasury and the AIG Trust are responsible for developing independent exit strategies, they plan to coordinate their efforts. The Treasury team that manages the AIG investment has been running scenarios of possible exit strategies but has not decided which strategy to employ. The AIG Trust is considering a number of options for divesting the Series C Preferred Stock, one of which is to convert the Series C Preferred Stock to common stock and divest such common stock through a public offering or a private sale. Treasury has multiple options available for divesting its preferred shares, including having AIG redeem Treasury’s shares, converting the shares to common stock that would subsequently be sold in a public offering, or selling the shares to an institutional buyer or buyers in a private sale. According to Treasury officials, Treasury is devoting significant resources to planning the eventual exit strategy from its AIG investments. When AIG will be able to repay the government completely for its assistance is currently unknown, because the federal government’s exposure to AIG is increasingly tied to the future health of AIG, its restructuring efforts, and its ongoing performance as more debt is exchanged for equity.
Therefore, as we noted in our April 2010 report on AIG, the government’s ability to fully recoup the federal assistance will be determined by the long-term health of AIG, the company’s success in selling businesses as it restructures, and other market factors, such as the performance of the insurance sectors and the credit derivatives markets, that are beyond the control of AIG or the government. In March 2010, the Congressional Budget Office estimated that the financial assistance to AIG may cost Treasury as much as $36 billion, compared with the $30 billion that Treasury estimated in September 2009. While AIG is making progress in reducing the amount of debt that it owes, this is primarily due to the restructuring of the composition of government assistance from debt to equity. FHFA, in its roles as conservator, safety and soundness supervisor, and housing mission regulator for the Enterprises, has adopted several approaches to monitoring their financial performance and operations. FHFA officials said that they have monitored the Enterprises’ financial performance in meeting the standards established in the scorecards and will continue to do so. Further, FHFA monitors, analyzes, and reports on the Enterprises’ historical and projected performance on a monthly basis. FHFA provides information based on public and nonpublic management reports, with the fair value of net assets defined in accordance with generally accepted accounting principles. In addition, FHFA officials said that the agency’s safety and soundness examiners are located at the Enterprises on a full-time basis and also monitor their financial performance, operations, and compliance with laws and regulations by conducting examinations, holding periodic meetings with officials, and reviewing financial data, among other things. As conservator, FHFA is significantly involved with the Enterprises in reporting financial information and requesting funding from Treasury. FHFA puts together a quarterly request package that is reviewed at several levels of management and ultimately signed by the Acting Director of FHFA before it is sent to the Under Secretary for Domestic Finance at Treasury for approval as the official request for funding. Although the structure of the assistance to the Enterprises has remained constant, the amount of assistance has steadily increased. Treasury increased the initial funding commitment cap from $100 billion to $200 billion per Enterprise in February 2009 and decided in December 2009 to lift the caps to include losses from 2010 through 2012. Treasury stated it raised the caps when it did because its authority to purchase preferred shares under HERA expired on December 31, 2009. While Treasury did not believe the Enterprises would require the full $200 billion authorized per Enterprise prior to December 31, 2009, it lifted the caps to reassure the markets that the government would stand behind the Enterprises going forward. At the end of the first quarter of 2010, Treasury had purchased approximately $61.3 billion in Freddie Mac preferred stock and $83.6 billion in Fannie Mae preferred stock under the agreements. While FHFA and Treasury are monitoring the Enterprises’ financial performance and mission achievement through a variety of means, exit strategies for the Enterprises differ from those for the other companies that have also received substantial government assistance.
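A quick tally of the purchase amounts reported above gives the combined scale of this support as of that date (simple arithmetic on the figures in this section, not an official Treasury total):

    $\$61.3\text{ billion} + \$83.6\text{ billion} = \$144.9\text{ billion}$

in senior preferred stock purchased across the two Enterprises, well below the $200 billion per Enterprise previously authorized.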
Given the ongoing and significant financial deterioration of the Enterprises—the Congressional Budget Office projected that the operations of the Enterprises would have a total budgetary cost of $389 billion over the next 10 years—FHFA and other federal officials have said that the Enterprises will probably not be able to return to their previous organizational structure as publicly owned private corporations with government sponsorship. Many observers have stated that Congress will have to reevaluate the roles, structures, and performance of the Enterprises and consider options to facilitate mortgage financing while mitigating safety and soundness and systemic risk concerns. In a September 2009 report, we identified and analyzed several options for Congress to consider in revising the Enterprises’ long-term structures. These options generally fall along a continuum, with some overlap in key areas.

Establishing the Enterprises as government corporations or agencies. Under this option, the Enterprises would focus on purchasing qualifying mortgages and issuing mortgage-backed securities but eliminate their mortgage portfolios. FHA, which insures mortgages for low-income and first-time borrowers, could assume additional responsibilities for promoting homeownership for targeted groups.

Reconstituting the Enterprises as for-profit corporations with government sponsorship but placing additional restrictions on them. While restoring the Enterprises to their previous status, this option would add controls to minimize risk. For example, it would eliminate or reduce mortgage portfolios, establish executive compensation limits, or convert the Enterprises from shareholder-owned corporations to associations owned by lenders.

Privatizing or terminating them. This option would abolish the Enterprises in their current form and disperse mortgage lending and risk management throughout the private sector. Some proposals involve the establishment of a federal mortgage insurer to help protect mortgage lenders against catastrophic mortgage losses.

While there is no consensus on what the next steps should be, whatever actions Congress takes will have profound impacts on the structure of the U.S. housing finance system. The Enterprises’ still-dominant position in housing finance is an important consideration for any decision to establish a new structure. Finally, some of the companies receiving exceptional assistance have taken a number of steps to repay the financial assistance owed to the government and to repurchase their preferred shares, in light of the significant restrictions put in place to encourage companies to begin repaying and exiting the programs as soon as practicable. At the same time, the government continues to take steps to establish exit strategies for the remaining companies, and in some cases the federal government’s financial exposure to these companies may exist for years before the assistance is fully repaid. In other cases, the federal government may not recover all of the assistance provided. For example, where the government has an equity interest, its ability to recover what has been invested depends on a variety of external factors that are beyond the control of the institution and the government. Moreover, as of June 1, 2010, the Enterprises continued to borrow from Treasury.
However, ongoing monitoring of the institutions and of the government’s role continues to be important, and additional insights may emerge as aspects of the crisis continue to evolve, including mortgage foreclosures and how best to stabilize housing markets. Assistance that the federal government provided in response to the recent financial crisis highlights the challenges associated with government intervention in private markets. Building on lessons learned from the financial crises of the 1970s and 1980s, we identified guiding principles at that time that help to serve as a framework for evaluating large-scale federal assistance efforts, including the government’s actions during the most recent crisis, and provided guidelines for assisting failing companies. These principles include (1) identifying and defining the problem, (2) determining national interests and setting clear goals and objectives that reflect them, and (3) protecting the government’s interests. The government generally adhered to these principles during this recent crisis. But because of its sheer size and scope, the crisis presented unique challenges and underscored a number of lessons to consider when the government provides broad-based assistance. First, widespread financial problems, such as those that occurred in this crisis, require comprehensive, global actions that must be closely coordinated. For example, Treasury’s decision to provide capital investments in financial institutions was driven in part by similar actions in other countries. Second, the government’s strategy for managing its investments must include plans to mitigate perceived or potential conflicts that arise from the government’s newly acquired role as shareholder or creditor and its existing role as regulator, supervisor, or policymaker. Acquiring an ownership interest in private companies can help protect taxpayers by enabling the government to earn returns when it sells its shares and the institutions repurchase their shares or redeem their warrants. But this scenario can also create the potential for conflict if, for example, public policy goals are at odds with the financial interests of the firm receiving assistance. Further, the federal government’s intervention in private markets requires that those efforts be transparent and effectively communicated so that citizens understand policy goals, public expenditures, and expected results. The government’s actions in the recent crisis have highlighted the challenges associated with achieving both. The government also needs to establish an adequate oversight structure to help ensure accountability. Finally, the government must take steps to mitigate the moral hazard that can arise when it provides support to certain entities that it deems too big or too systemically significant to fail. Such assistance may encourage risk-taking behavior in other market participants by fostering the belief that the federal government will always be there to bail them out. Building on lessons learned from the financial crises of the 1970s and 1980s, we identified guiding principles to help serve as a framework for evaluating large-scale federal assistance efforts and provided guidelines for assisting failing companies:

Identifying and defining the problem, including separating issues that require an immediate response from longer-term structural issues.

Determining national interests and setting clear goals and objectives that reflect them.
Protecting the government’s, and thus the taxpayer’s, interests by working to ensure not only that financial markets continue to function effectively, but also that any investments made provide the highest possible return. This includes requiring concessions from all parties, placing controls over management, obtaining collateral when feasible, and being compensated for risk. During the recent financial crisis, the government faced a number of challenges in adhering to these three principles—which we identified during earlier government interventions in the private markets—when it provided financial assistance to troubled companies. First, the scope and rapid evolution of this crisis complicated the process of identifying and defining the problems that needed to be addressed. Unlike past crises that involved a single institution or industry, the recent crisis involved problems across global financial markets, multiple industries, and large, complex companies and financial institutions. For example, problems in mortgage markets quickly spread to other financial markets and ultimately to the broader economy. As the problems spread and new ones emerged, the program goals Treasury initially identified often seemed vague, overly broad, and conflicted. Further, because the crisis affected many institutions and industries, Treasury’s initial responses to each affected institution often appeared ad hoc and uneven, leading to questions about its strategic focus and the transparency of its efforts. During a financial crisis, identifying and defining problems involves separating out those issues that require an immediate response from structural challenges that will take longer to resolve. The most recent crisis evolved as the crisis unfolded and required that the government’s approach change in tandem. Treasury created several new programs under TARP to address immediate issues, working to stabilize bank capital in order to spur lending and restart capital markets and seeking ways to help homeowners facing foreclosure. While banks have increased their capital levels and these companies have begun repaying the government assistance, constructing relevant solutions to address the foreclosure crisis has proved to be a long-term challenge. The recently enacted financial services reform legislation requires that systemically important financial companies be subject to enhanced standards, including risk- based capital requirements, liquidity requirements, and leverage limits that are stricter than the standards applicable to companies that do not pose similar risk to financial stability. Also, the law creates a procedure for the orderly liquidation of financial companies if the Secretary of the Treasury makes certain determinations including a determination that the failure of the company and its resolution under otherwise applicable law would have serious adverse effect on financial stability. Second, determining national interests and setting clear goals and objectives that reflect them requires choosing whether a legislative solution or other government intervention best serves the national interest. During the recent crisis the federal government determined that stabilizing financial markets, housing markets, and individual market segments required intervening to support institutions it deemed to be systemically significant. 
It also limited its intervention, stating that it would act only as a reluctant shareholder and not interfere in the day-to-day management decisions of any company, would exercise only limited voting rights, and would ensure that the assistance provided would not continue indefinitely. Further, Treasury emphasized the importance of having strong boards of directors to guide these companies, as discussed earlier. While the U.S. government developed goals or principles for holding large equity interests in private companies, its goals for managing its investments have at times appeared to conflict with each other. Specifically, Treasury announced that it intended to protect the taxpayer investment and maximize overall investment returns and that it also intended to dispose of the investments as soon as it was practicable to do so. However, protecting the taxpayer investment may be at odds with divesting as soon as possible. For example, holding on to certain investments may bring taxpayers a higher return than rapid divestment. Recognizing the tension among these goals, Treasury has tried to balance these competing interests, but ultimately it will have to decide which among them is most important by evaluating the trade-offs. Finally, protecting the government’s and taxpayers’ interest is an essential objective when creating large-scale financial assistance programs that put government funds and taxpayer dollars at risk of loss. Generally consistent with this principle, the government took four primary actions that were designed to minimize this risk. First, a priority was gaining concessions from others with a stake in the outcome—for example, from management, labor, and creditors—in order to ensure cooperation in securing a successful outcome. As we have pointed out previously, as a condition of receiving federal financial assistance, TARP recipients (AIG, Bank of America, Citigroup, GMAC, Chrysler, and GM) had to agree to limits on executive compensation and dividend payments, among other things. Moreover, GM and Chrysler had to use their “best efforts” to reduce their employees’ compensation to levels similar to those at other major automakers that build vehicles in the United States, which resulted in concessions from the United Auto Workers on wages and work rules. Second, exerting control over management became necessary in some cases—including approving financial and operating plans and new major contracts—so that any restructuring plans would have realistic objectives and hold management accountable for achieving results and protecting taxpayer interests. For example, under AIFP, Chrysler and GM were required to develop restructuring plans that outlined their path to financial viability. The government initially rejected both companies’ plans as not being aggressive enough but approved revised plans that included restructuring the companies through bankruptcy. The Federal Reserve has also reviewed AIG’s divestiture plan and routinely monitors its progress and financial condition. Finally, as conservator, FHFA maintains substantial control over the business activities of the Enterprises. Third, the government sought to ensure that it was in a first-lien position with AIG, GM, and Chrysler, which received direct government loans, in order to recoup the maximum amounts of taxpayer funds. Treasury was not able to fully achieve this goal in the initial Chrysler loans because the company had already pledged most of its collateral, leaving little to secure the federal government’s loans.
Treasury was, however, able to obtain a priority lien position with respect to its loan to Chrysler post-restructuring. FRBNY was able to obtain collateral against its loans to AIG. Fourth, the government sought compensation for risk through fees and equity participation, routinely requiring dividends on the preferred shares it purchased, charging fees and interest on the loans, and acquiring preferred shares and warrants that provided equity. For example, the government required Bank of America and Citigroup to provide warrants to purchase either common stock or additional senior debt instruments, such as preferred shares, under their financial agreements. As a condition for providing an $85 billion revolving loan commitment, for example, FRBNY initially required that AIG pay an initial gross commitment fee of 2 percent (approximately $1.7 billion) and interest on the outstanding balance, plus a fee on the unused commitment, and, in exchange, issue preferred shares (convertible to approximately 79.8 percent of issued and outstanding shares of common stock) into a trust for the benefit of the U.S. Treasury. Treasury’s contractual agreements with the Enterprises detail the terms of the preferred shares and require them to pay commitment fees, but Treasury has not implemented these fees due to the Enterprises’ financial condition. The size and scope of the recent crisis were unprecedented and created challenges that highlighted principles beyond those based upon the lessons learned from the 1970s and 1980s. These include ensuring that actions are strategic and coordinated both nationally and internationally; addressing conflicts that arise from the government’s often competing roles and the likelihood of external influences; ensuring transparency and communicating effectively with Congress and the public; ensuring that a system of accountability exists for actions taken; and taking measures to reduce moral hazard. Financial crises that are international in scope require comprehensive, global actions and government interventions that must be closely coordinated by the parties providing assistance—including agencies of the U.S. government as well as foreign governments—to help ensure that limited resources are used effectively. In prior work, we reported that overseeing large financial conglomerates has proven challenging, particularly in regulating their consolidated risk management practices and identifying and mitigating the systemic risks they pose. Although the activities of these large firms often cross traditional sector boundaries, financial regulators under the current U.S. regulatory system have not always had full authority or sufficient tools and capabilities to adequately oversee the risks that these financial institutions posed to themselves and other institutions. We have laid out several elements that should be included in a strengthened regulatory framework, including using international coordination to address the interconnectedness of institutions operating across borders and helping ensure regulatory consistency to reduce negative competitive effects. Initial actions during the crisis were taken and coordinated by the Federal Reserve, Treasury, and FDIC, and some were made in conjunction with similar actions by foreign governments. For example, the United States and several foreign governments took a variety of actions, including providing liquidity and capital infusions and temporarily banning the short selling of financial institution stock.
The initial government actions to support the Enterprises, taken on September 6, 2008, were prompted by their deteriorating financial condition; with worldwide debt and other financial obligations totaling $5.4 trillion, the Enterprises’ default on those obligations would have significantly disrupted the U.S. financial system and the global system. Shortly afterwards, as several other large financial firms came under heavy pressure from creditors, counterparties, and customers, and prior to the creation of TARP, the Federal Reserve used its authority under Section 13(3) to create several facilities to support the financial system and institutions that the government would not otherwise have been able to assist. The global nature of these companies added to the challenges for the federal government and the international community as they resolved these issues. Concerted federal government attempts to find a buyer for Lehman Brothers or to develop an industry solution failed to address the company’s financing needs. According to Federal Reserve officials, the company’s available collateral was insufficient to obtain a Federal Reserve secured loan of sufficient size to meet its funding needs. In the case of AIG, which contacted FRBNY on September 12, 2008, the U.S. government took action because of the company’s relationships with other global financial institutions and coordinated with regulators in a number of countries. According to AIG’s 2008 10-K, AIG had operations in more than 130 countries and conducted a substantial portion of its general insurance business and a majority of its life insurance business outside the United States. Because of its global reach, the company was subject to a broad range of regulatory and supervisory jurisdictions, making assisting the company with its divestment plans extremely difficult. In light of AIG’s liquidity problems, AIG and its regulated subsidiaries were subject to intense review, with multiple foreign regulators taking supervisory actions against AIG. On September 16, 2008, the Federal Reserve and Treasury determined that the company’s financial and business assets were adequate to secure an $85 billion line of credit, enough to avert its imminent failure. In October 2008, in an unprecedented display of coordination, six central banks—the Federal Reserve, European Central Bank, Bank of England, Swiss National Bank, Bank of Canada, and the central bank of Sweden—acted together to cut short-term interest rates. In a coordinated response, the Group of Seven finance ministers and central bank governors announced comprehensive plans to stabilize their banking systems—making a critical promise not to let systemically important institutions fail by offering debt guarantees and capital infusions and increasing deposit insurance coverage. Within 2 weeks of the enactment of TARP, consistent with similar actions by several foreign governments and central banks, Treasury—through the newly established Office of Financial Stability—announced that it would make available $250 billion to purchase senior preferred shares in a broad array of qualifying institutions to provide additional capital that would help enable U.S. institutions to continue lending. Treasury provided $125 billion in capital purchases for nine of the largest public financial institutions, including Bank of America and Citigroup, considered by the federal banking regulators and Treasury to be systemically significant to the operation of the financial system.
Together these nine financial institutions held about 55 percent of U.S. banking assets and had significant global operations—including retail and wholesale banking, investment banking, and custodial and processing services—requiring coordinated action with a number of foreign governments. The government’s ownership of common shares in private companies can create various conflicts and competing goals that must be managed. First, having an ownership interest in a private company gives the government voting rights that can influence the firm’s business activities. However, Treasury has limited its voting rights to only matters that directly pertain to its responsibility under EESA to manage its investments in a manner that protects the taxpayer. For example, Treasury used its voting rights to elect directors to Citigroup’s board, approve the issuance of common shares, and approve a reverse stock split. Likewise, Treasury has designated directors to serve on the boards of directors of Chrysler, GM, and GMAC. Second, when the government is both investor and regulator for the same company, federal agencies may find themselves in conflicting roles. For instance, as noted in our April 2010 report on Chrysler and GM pensions, until Treasury either sells or liquidates the equity it acquired in each company, the government’s role as shareholder creates potential tensions with its roles as pension regulator and insurer. This can be illustrated by the conflicting pressures that would likely arise in two critical and interrelated scenarios: (1) how to decide when to sell the government’s shares of stock and (2) how to respond to a decline in pension funding. If either or both companies return to profitability, the government’s multiple roles are less likely to result in any perceived conflicts. However, if either company had to be liquidated, the government would face these perceived conflicts, because Treasury would have to make decisions relating to the value of its investments and the Pension Benefit Guaranty Corporation would need to make decisions related to the companies’ pensions. Additionally, on December 11, 2009, the Internal Revenue Service, a bureau within Treasury, issued a notice stating that under certain circumstances selling stock that Treasury received under any TARP program would not trigger an ownership change. As a result, when Treasury sells such shares there is no change in ownership for tax purposes, and the companies are not subject to the limits on the use of net operating losses that would otherwise apply after a change in ownership. Some in Congress have argued that this action created an additional subsidy to the financial institutions that received federal assistance and that, by reducing potential tax revenue, it conflicts with Treasury’s duty to take actions that are in the best interest of the taxpayers. The assistance to the Enterprises illustrates the potential challenges that can arise when the government uses its assistance to further its public policy goals—in this case, managing support for the home mortgage markets and efforts to preserve and conserve assets. Specifically, Treasury is pursuing public policy goals to address mortgage foreclosures through the Enterprises, but these actions could also negatively affect the Enterprises’ financial condition.
For example, the Enterprises are participating in the administration’s foreclosure prevention programs by modifying the terms of mortgages they insure or own, lowering borrowers’ monthly mortgage payments to prevent avoidable foreclosures. Treasury and FHFA have argued that such programs, by improving borrowers’ financial condition, will also benefit the Enterprises, which have large holdings of delinquent mortgages. However, the Enterprises have stated in their financial disclosures that these programs may result in significant costs over time, such as incentive payments made to servicers and borrowers over the life of the modification and losses associated with borrower redefaults on modified mortgages. Whether loan modifications will benefit both borrowers and the Enterprises or further jeopardize the Enterprises’ financial condition is unknown and may depend in part on how the program is implemented and overseen by FHFA and Treasury over time. Overseeing the programs in a way that reduces costs to taxpayers remains a challenge. Being both a creditor and a shareholder in private companies creates another conflict for the government. As a major creditor, the government is more likely to be involved in an entity’s operations than if it were acting only as a shareholder, and operational decisions that it imposes could affect returns on taxpayer investments. For example, the government is currently both a creditor and shareholder in Chrysler and was both a creditor and shareholder in GM until GM repaid its $6.7 billion loan on April 20, 2010. Treasury made initial loans to the companies to help them avert bankruptcy, then provided financing that was converted to equity to help them through the bankruptcy and restructuring process. As a creditor, the government obtained rights to impose requirements on the companies’ business, including requiring them to produce a certain portion of their total production in the United States. These requirements, established by Treasury as creditor, could negatively affect the companies’ stock prices, which in turn could reduce the return on investment earned by Treasury as a shareholder. To manage its different investments, the government has used different strategies—direct management and a trust arrangement—which have different implications for the government and the private companies and may affect how easily the government can address conflicts of interest. Directly managing the investments offers two significant advantages. First, it affords the government the greatest amount of control over the investment. Second, having direct control over investments better enables the government to manage them as a portfolio, as Treasury has done under CPP. However, such a structure also has disadvantages. For example, as we have seen, having the government both regulate a company and hold an ownership interest in it can create a real or perceived conflict of interest. A direct investment also requires that the government have staff with the requisite skills to manage it. For instance, as long as Treasury maintains direct control of its equity investments in Citigroup, Chrysler, and GM, among others, it must have staff or hire contractors with the necessary expertise in these specific types of companies.
In previous work, we raised concerns about Treasury’s ability to retain the expertise needed to assess the financial condition of the auto companies and develop strategies to divest the government’s interests, given the substantial decline in its staff resources and the lack of dedicated staff providing oversight of its investments in the automakers. In contrast, the government has used a trust arrangement to manage its investment in AIG. Such an arrangement puts the government’s interest in the hands of an independent third party and helps to avoid potential conflicts that could stem from the government having both regulatory responsibilities for and ownership interests in a company. A trust also helps mitigate perceptions that actions taken with respect to TARP recipients are politically motivated or based on any “inside information” received from the regulators. While Treasury has interpreted TARP as prohibiting placing TARP assets in a trust structure, FRBNY created a trust to manage the government’s ownership interest in AIG before TARP was established. Finally, the varied and sometimes conflicting roles of the government as an owner, creditor, regulator, and policymaker also potentially subject private companies to greater government scrutiny and pressure than they might have otherwise experienced. In particular, the government’s investments in these companies increase the level of government and public oversight and scrutiny these companies receive, as policymakers, elected officials, and regulators work to ensure that taxpayer interests are protected. The companies may also be subject to pressure from government officials to reconsider or alter business decisions that affect the companies’ bottom lines. For example, Chrysler and GM faced pressure to reinstate many of the auto dealerships that had been slated for closure. Government involvement could come from many different sources and in many different forms, including legislative actions and direct communications. To gauge the nature and scope of external influences, we interviewed officials from the six companies that received exceptional financial assistance and reviewed legislation that would place requirements or restrictions on these companies. We also reviewed letters sent to Chrysler and GM officials from legislative and executive branch officials and selected state government officials. We found that the issues receiving the most congressional scrutiny were executive compensation, transparency and accountability, mortgage modifications, and closures of automobile dealerships.

Executive compensation. We identified 24 bills that members of Congress introduced in calendar years 2008 and 2009 involving restrictions on executive compensation or additional taxation of executive compensation at companies receiving TARP assistance. Also, AIG officials stated that the majority of congressional contacts they received related to executive compensation and bonuses.

Transparency and accountability. We identified 16 bills introduced in calendar years 2008 and 2009 that would require the companies to take steps that would result in increased transparency or accountability, such as reporting on how TARP funds were used. For example, the TARP Transparency Reporting Act would require TARP recipients to report to Treasury on their use of TARP funds.

Mortgage modifications.
Officials from the companies whose business includes mortgage financing told us that one of the most common subjects of congressional correspondence was requests for modifications to specific constituents' mortgages. Automobile dealerships. About 60 percent of the bills we identified that specifically targeted the auto industry sought to curtail or prevent the closure of automobile dealerships. One of these bills, which established an arbitration process for dealerships that want to appeal a closure decision, became public law. Furthermore, according to letters from members of Congress that Chrysler and GM provided to us, dealership closure was the most common subject. The letters usually asked either for an explanation of how the closure decisions had been made or for reconsideration of the closure of a particular dealership. (See appendix III for more information on the nature and scope of communication with the auto industry.) Company officials we interviewed told us that the level of government involvement—from requests for appearances at congressional hearings to letters from elected officials—had increased since their companies had requested and received financial assistance from the government. Company officials told us that this involvement was to be expected and did not cause them to make decisions that were in conflict with their respective companies' best interests. However, these officials also stated that addressing the government's involvement, such as responding to letters or requests for information, required increased company resources. Federal government intervention in private markets requires not only that these efforts be transparent but also that the action include a strategy to help ensure open and effective communication with stakeholders, including Congress and taxpayers. The government's actions in the recent crisis have highlighted the challenges associated with achieving both of these objectives. Throughout the crisis, Congress and the public often stated that the government's actions appeared vague, overly broad, and conflicted. For example, Treasury's initial response to the crisis focused on providing assistance to individual institutions and appeared ad hoc and uneven, leading to questions about its strategic focus and the transparency of its efforts. Specifically, questions about the government's decision to assist Bear Stearns and AIG, but not Lehman Brothers, continued months after the decisions were made. Moreover, while TARP was created to provide a comprehensive approach to addressing the unfolding crisis, Treasury's decision to change the focus of the program weeks after the passage of EESA, from purchasing mortgage-backed securities and whole loans to injecting capital into financial institutions, caught many in Congress, the markets, and the public by surprise and adversely affected these parties' understanding of the program's goals and priorities, which may have undermined the initial effectiveness of the program. In general, transparency means more than simply reporting available information to interested parties; it involves such things as providing clearly articulated guidelines, decision points, and feedback mechanisms to help ensure an adequate understanding of the matters at hand. For the recent actions, transparency would include providing information on how the companies were to be monitored and the results of those activities.
However, when considering any federal intervention, part of this decision-making process includes identifying what information can and should be made public and balancing concerns about the public's "need to know" against disclosing proprietary information in a competitive market. For example, while disclosing detailed information about Treasury's plans to sell shares of company stock may not be appropriate, the government should communicate its purpose in intervening in the private market and its approach for evaluating the success of any federal action. Specifically, making information available to the public on the purpose of federal intervention and the decision to intervene could help ensure that the public understands the implications of not intervening and the expected results of the government's actions. While EESA required Treasury to report information about TARP activities, Treasury's failure to adequately communicate the rationale for its actions and decisions early on caused confusion about the motivations behind them and long plagued the program. Treasury's lack of an effective communication strategy was, in part, the result of the unfolding nature of the crisis; even so, Treasury did not effectively communicate that evolving nature. For example, the multifaceted nature of the crisis resulted in numerous TARP programs to address specific problems in the markets; however, Treasury did not establish or adequately explain some of the programs until after assistance had already been announced. Specifically, Treasury announced assistance to Citigroup, Bank of America, and AIG before TIP and SSFI—now called the AIG Assistance Program—were established and announced in January 2009 and November 2008, respectively. Since the inception of TARP, we have recommended that Treasury take a number of actions aimed at developing a coherent communication strategy for TARP. In our previous reports, we have recommended that Treasury develop a communication strategy that included building understanding of and support for the various components of the TARP program. While the actions we suggested were intended to address challenges associated with TARP—such as hiring a communications officer, integrating communications into TARP operations, scheduling regular and ongoing contact with congressional committees and members, holding town hall meetings with the public across the country, establishing a council of advisors, and leveraging available technology—most of these suggestions would be applicable when considering a communication strategy for any federal intervention. An effective communication strategy is especially important during rapidly changing market events and could help the public understand the policy goals that the government was trying to achieve and its rationale for spending public funds. When considering government assistance to private companies, providing accountability for taxpayer funds is imperative. The absence of a system for accountability increases the risk that the interests of the government and taxpayers may not be adequately protected and that the programs' objectives may not be achieved efficiently and effectively. We first highlighted the importance of accountability in implementing TARP in December 2008, a point that has since been reiterated by the Congressional Oversight Panel and SIGTARP.
Specifically, we noted the importance of establishing oversight structures, including monitoring and other internal controls that can help prevent and detect fraud. Federal action in the midst of a crisis will undoubtedly require that actions be taken at the same time that programs are being established. In December 2008, we reported that a robust oversight system with internal controls specifically designed to deal with the unique and complex aspects of TARP would be key to helping OFS management achieve the desired results. For example, OFS faced the challenge of developing a comprehensive system of internal controls at the same time that it was reacting quickly to changing financial market events and establishing the program. One area that took time to develop was a plan to help ensure that participating institutions adhered to program requirements and to monitor companies' compliance with certain requirements, such as executive compensation and dividend restrictions. Therefore, when making any decision to intervene in private markets, Congress and the government must take efforts to provide an appropriate oversight structure. While the federal government's assistance may have helped to contain a more severe crisis by mitigating potential adverse systemic effects, it also created moral hazard—that is, it may encourage market participants to expect similar emergency actions, thus weakening private or market-based incentives to properly manage risks and creating the perception that some firms are too big to fail. We recently reported that while assisting systemically significant failing institutions may have helped to contain the crisis by stabilizing these institutions and limiting potentially systemic problems, it also may have exacerbated moral hazard. According to regulators and market observers, such assistance may weaken the incentives for large uninsured depositors, creditors, and investors to discipline large complex firms that are deemed too big to fail. In March 2009, Federal Reserve Chairman Bernanke told the Council on Foreign Relations that market perceptions that a particular institution is too big to fail have many undesirable effects. He explained that such perceptions reduce market discipline, encourage excessive risk-taking by the firm, and provide artificial incentives for firms to grow. He also noted that these beliefs do not create a level playing field, because smaller firms may not be regarded as having implicit government support. Similarly, others have noted how such perceptions may encourage risk-taking. For example, some large financial institutions may be given access to the credit markets at favorable terms without consideration of their risk profile. Before a financial crisis, the financial regulatory framework could serve an important role in restricting the extent to which institutions engage in excessive risk-taking activities resulting from weakened market discipline. For instance, regulators can take preemptive steps to mitigate moral hazard through regulatory actions that help ensure companies have adequate systems in place to monitor and manage risk-taking. Any regulatory actions that the government takes to help ensure strong risk management systems at companies of all sizes would help to lessen the need for government intervention.
In general, mitigating moral hazard requires ensuring that any government assistance includes terms that make it a last resort, undesirable except in the most dire circumstances, and specifying when the government assistance will end. During the recent crisis, the government included provisions that attached such costs to the provision of assistance, including limiting executive compensation, requiring dividends, and acquiring an ownership interest. Further, while uncertainty about the duration of the crisis makes it difficult to specify timetables for phasing out assistance and investments, it is important to provide a credible "exit strategy" to prevent further disruption in the financial markets when withdrawing government guarantees. While Treasury has articulated its exit strategy for some of the companies we reviewed, the government's plans for divesting itself of investments in AIG and the Enterprises are less clear. Because the government's involvement in the private sector creates moral hazard and perpetuates the belief that some institutions are too big or interconnected to fail, critics have expressed concern that it can encourage risk-taking. While the debate about whether the government should intervene in private markets to avert a systemic crisis continues, only the future will reveal whether the government will again be faced with that prospect. As with other past crises, experience from the most recent crisis offers additional insights to guide government action, should it ever be warranted. Specifically, the government could protect the taxpayer's interest in any crisis by not only continuing to follow the principles that we have discussed earlier (i.e., identifying and defining the problem, determining a national interest and setting clear goals, and protecting the government's and taxpayer's interests) but also by adhering to five additional principles based on the federal government's experience with the current crisis: (1) develop a strategic and coordinated approach when comprehensive and global governmental action is required; (2) take actions to ensure the government has a strategy for managing any investments resulting from its intervention, in order to help mitigate perceived or potential conflicts and manage external influence; (3) ensure that actions are transparent and effectively communicated so that the public understands what actions are being taken and for what purpose; (4) establish an adequate oversight structure to ensure accountability; and (5) take steps to mitigate moral hazard by ensuring that regulatory and market-based structures limit risk-taking before a crisis occurs and by creating strong disincentives to seek federal assistance through stringent requirements. We provided a draft of this report to FHFA, the Federal Reserve, OFS, OCC, and FDIC for their review and comment. In addition, we provided excerpts of the draft of this report to the companies receiving exceptional assistance—AIG, AIG Trust, Bank of America, Chrysler, Citigroup, and GMAC—to help ensure the accuracy of our report. Treasury and FHFA provided us with written comments, which are reprinted in appendices IV and V, respectively. Treasury agreed with the report's overall findings.
In its letter, Treasury acknowledged that the additional guiding principles for providing large-scale federal assistance should be considered in any future broad-based government assistance and agreed to weigh these new principles going forward. FHFA, in its letter, acknowledged that, as we pointed out in our report, the financial assistance provided to the Enterprises illustrates the potential challenges that can arise when the government uses its assistance to further its public policy goals, particularly the Enterprises' participation in the administration's loan modification efforts, such as HAMP. However, the letter noted that the loan modification efforts are central to the goals of the conservatorships and EESA. The letter further explained that efforts like HAMP may help to mitigate the credit losses of the Enterprises because a loan modification is often a lower cost resolution to a delinquent mortgage than foreclosure. The Federal Reserve, FHFA, and Treasury provided us with technical comments that we incorporated as appropriate. In addition, AIG, the AIG Trust, Bank of America, Chrysler, Citigroup, and GMAC also provided us with technical comments that we incorporated as appropriate. We are sending copies of this report to interested congressional committees and members. In addition, we are sending copies to FHFA, the Federal Reserve, Treasury, OCC, FDIC, financial industry participants, and other interested parties. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Orice Williams Brown at (202) 512-8678 or [email protected]. Contact points for GAO's Office of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made major contributions to this report are listed in appendix VI. The objectives of our report were to (1) describe how and why the government obtained an ownership interest in the companies, (2) evaluate the extent of government involvement in companies receiving exceptional assistance, (3) describe the government's monitoring of the companies' financial viability and exit strategies, and (4) discuss the implications of the government's ongoing involvement in the companies. The report focused on companies receiving exceptional assistance from the federal government, including American International Group (AIG), Bank of America Corporation (Bank of America), Chrysler Group LLC (Chrysler), Citigroup, Inc. (Citigroup), General Motors Company (GM), and GMAC, Inc. (GMAC), as well as its involvement in Fannie Mae and Freddie Mac (Enterprises). To address the first objective, we reviewed the monthly transaction reports produced by the Department of the Treasury's (Treasury) Office of Financial Stability (OFS), which list the structure of federal assistance provided by Treasury to the companies considered to be receiving exceptional assistance (AIG, Bank of America, Chrysler, Citigroup, and GM), and documentation from the Federal Housing Finance Agency (FHFA) to determine the financing structure for the Enterprises. In addition, we reviewed the Board of Governors of the Federal Reserve System's (Federal Reserve) "Factors Affecting Reserve Balances" H.4.1 documents to determine the assistance provided by the Federal Reserve Bank of New York (FRBNY) to AIG. We reviewed the contractual agreements between the government and the companies that governed the assistance.
In addition, we reviewed selected Securities and Exchange Commission (SEC) filings, Treasury's Section 105(a) reports, and other GAO reports on the Troubled Asset Relief Program (TARP). To address the second objective, we reviewed the Emergency Economic Stabilization Act of 2008 (EESA) and the Housing and Economic Recovery Act of 2008 (HERA) to understand the legal framework for any potential government involvement in the companies receiving exceptional assistance, including the establishment of the conservatorship and the contractual agreements established between the government and the companies. We reviewed the credit agreements, securities purchase agreements, asset purchase agreements, and master agreements. To understand the trust structure established for AIG, we reviewed the AIG Credit Trust Facility agreement between FRBNY and the AIG trustees. We conducted interviews with officials and staff from the Federal Reserve Board, FHFA, FRBNY, the Federal Reserve Bank of Chicago (FRB-Chicago), the Federal Reserve Bank of Richmond (FRB-Richmond), OFS, the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), and SEC. In addition, we interviewed senior management—primarily the Chief Executive Officers and the Chief Financial Officers—for most of the companies in our study, including the Enterprises, and interviewed the AIG trustees to understand their role in the governance of AIG. To address the third objective on evaluating the government's monitoring of the companies' financial viability and exit strategies, we interviewed officials from FDIC, the Federal Reserve, FHFA, FRBNY, FRB-Chicago, FRB-Richmond, OCC, and OFS. We also interviewed the asset managers who are responsible for monitoring and valuing the equity shares held by Treasury under the Capital Purchase Program, the Targeted Investment Program, and the Asset Guarantee Program. We reviewed Treasury documents, such as asset manager reports, TARP transaction reports, and press releases; Treasury testimonies; and press releases from the companies. We also reviewed the contractual agreements between the government and the companies, including credit agreements, securities purchase agreements, asset purchase agreements, and master agreements, in order to understand the companies' responsibilities in reporting financial information and the government's responsibility for monitoring and divesting its interests. Finally, we reviewed a Congressional Oversight Panel report relating to Treasury's approach to exiting TARP and unwinding its impact on the financial markets. To address the fourth objective relating to the implications of the government's ongoing involvement in the companies, we reviewed prior GAO work on principles for providing large-scale government assistance and assessed the degree to which the government's activities under TARP adhered to these principles. To identify actions the government is taking with the potential to influence the companies' business decisions, we reviewed legislation that would affect TARP recipients and determined what, if any, action the legislation would require the companies to take. To identify the nature and scope of contacts TARP recipients received from executive branch agencies, members of Congress, and state government officials, we interviewed government relations staff at AIG, Bank of America, Chrysler, Citigroup, GM, and GMAC.
These interviews also provided us with information on the extent of government involvement and influence in the companies' business operations. For Chrysler and GM, we obtained the 277 letters that the companies received from members of Congress during calendar year 2009 and kept on file. We reviewed each of the letters to determine their topic and whether they sought to influence the companies' business decisions. We also obtained more than 2,300 e-mails that certain senior executives of Chrysler and GM received from congressional and state government officials during calendar year 2009, including 1,221 from Chrysler and 1,098 from GM. Due to the large number of these e-mails, we reviewed a random probability sample of 251 of the 2,319 e-mails the companies provided us in order to create estimates about the population of all the e-mails. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as having a margin of error at the 95 percent confidence level of plus or minus 8 percentage points or less (an illustrative calculation of this margin of error appears below). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. With this probability sample, each member of the study population had a nonzero probability of being included, and that probability could be computed for any member. Finally, we obtained 264 e-mails that certain senior executives at the companies received from White House and Treasury officials in calendar year 2009. After removing e-mails that were out of scope and duplicates, we were left with 109 e-mails, including 89 sent to Chrysler and 20 sent to GM. We reviewed these e-mails to determine their purpose and topic and whether they sought to influence the companies' business decisions. We provided a draft of this report to FHFA, the Federal Reserve, OFS, OCC, and FDIC for their review and comment. In addition, we provided excerpts of the draft of this report to the companies receiving exceptional assistance—AIG, AIG Trust, Bank of America, Chrysler, Citigroup, and GMAC—to help ensure the accuracy of our report. We conducted this performance audit from August 2009 to August 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provided a reasonable basis for our findings and conclusions based on our audit objectives. Since the fall of 2008, a number of large financial institutions and companies have received more than $447 billion in financial assistance, leaving the government with a significant ownership interest in a number of companies. The government provided assistance or funds to American International Group (AIG); Bank of America Corporation (Bank of America); Chrysler; Citigroup, Inc. (Citigroup); Fannie Mae and Freddie Mac (Enterprises); General Motors (GM); and GMAC, Inc. (GMAC). As of March 31, 2010, the government owned substantial amounts of preferred or common shares in seven companies—AIG, Chrysler, Citigroup, GM, GMAC, and the Enterprises. The total amounts of assistance disbursed to each company are shown in figure 2.
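The margin of error cited in the sampling discussion above can be checked with a short calculation. The sketch below is illustrative only and rests on our own assumptions, not GAO's documented methodology: it assumes simple random sampling without replacement, a 95 percent confidence level (z = 1.96), and the worst-case proportion of 0.5.

    import math

    # Worst-case margin of error for a proportion estimated from a simple
    # random sample, with a finite population correction because the sample
    # (251 e-mails) is a sizable share of the population (2,319 e-mails).
    # Assumptions (ours, for illustration): simple random sampling, z = 1.96.
    def margin_of_error(n, N, p=0.5, z=1.96):
        se = math.sqrt(p * (1 - p) / n)      # standard error of the proportion
        fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
        return z * se * fpc

    print(f"{margin_of_error(n=251, N=2319):.1%}")  # roughly 5.8%

Under these assumptions the worst-case margin of error is roughly plus or minus 5.8 percentage points, which is consistent with the report's stated bound of plus or minus 8 percentage points or less; estimates computed on subsets of the sample would have wider intervals.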
The federal government assisted these companies by infusing capital through the purchase of preferred shares, direct loans, guarantees, stock exchanges, or lines of credit that led to the government owning preferred and common shares. Figure 3 shows the variation in the amount of government ownership interest in the companies and the outstanding balance that is owed to the government. The financial institutions and the companies have begun to pay down some of the assistance. GM has repaid the entirety of the debt owed to Treasury under its post-bankruptcy credit agreement, and Chrysler has repaid a portion of its loan from Treasury. As previously noted, whether the government will recover all of its investment in or assistance to Chrysler and GM is unknown. For companies where the government has an ownership stake, the amount of recovery depends on a number of external factors, including the financial health of the companies and the market value of their stock, as well as the companies' ability to repay loans or repurchase preferred shares. Similarly, Treasury still holds common shares in Citigroup. The Enterprises have not repaid any portion of the assistance Treasury has provided and, as of June 2010, continued to borrow from Treasury. To provide some additional protection for the taxpayer, Treasury required the companies to commit to certain financial terms and actions. For example, in exchange for the capital infusions in the form of preferred shares, Treasury required AIG, Bank of America, Citigroup, the Enterprises, GM, and GMAC to pay dividends. The dividend rate varied across the seven companies, ranging from less than 5 percent to 10 percent for AIG and the Enterprises. As shown in table 6, as of March 31, 2010, Treasury had collected a total of more than $16.2 billion in dividends from Bank of America, Citigroup, the Enterprises, GM, and GMAC. AIG was required to pay dividends at an annual rate of 10 percent on Series D cumulative preferred shares before they were exchanged for Series E noncumulative preferred shares, but it had not paid any dividends to Treasury as of March 31, 2010. Unpaid Series D dividends were capitalized, thereby increasing the liquidation preference of the Series E shares for which they were exchanged. The government, or FRBNY in the case of AIG, requires that AIG and Chrysler pay interest on the loans provided. Moreover, Treasury currently holds warrants obtained in connection with the preferred shares that it holds for AIG, Citigroup, and the Enterprises. Because GMAC is a privately held company, Treasury exercised its warrants immediately. On March 3, 2010, Treasury received more than $1.5 billion from its auction of Bank of America's warrants. To further examine the extent of government involvement in companies receiving Troubled Asset Relief Program (TARP) assistance, we reviewed legislative proposals and government communications with General Motors Company (GM) and Chrysler Group LLC (Chrysler). We examined the following: (1) proposed legislation that would place requirements or restrictions on the companies due to their status as TARP recipients, (2) letters from members of Congress to the companies, and (3) e-mails from congressional offices, state government, White House, and Department of the Treasury (Treasury) officials sent to certain company officials whom we designated.
Chrysler and GM officials told us that the level of government involvement—from requests for appearances at congressional hearings to letters from elected officials—had increased since their companies had requested and received financial assistance from the government. They emphasized that the congressional letters and e-mails did not cause them to make decisions that were in conflict with their best interests. However, these officials stated that addressing the government's involvement, such as responding to letters, audits, or other requests for information, required increased company resources. We identified 38 bills introduced from October 2008, when the Emergency Economic Stabilization Act of 2008 (EESA) was enacted, through January 2010 that would impose requirements or restrictions on GM and Chrysler as TARP recipients. Action on the majority of these bills has been limited since their introduction in Congress, with two having become law. Although the bills cover a range of topics, the most commonly addressed were dealership closures and executive compensation and bonuses. We identified eight bills that addressed, among other issues, the closure of auto dealerships, a topic specifically directed at automakers accepting TARP funds. Closing dealerships was a way for the companies to reduce their operating costs in an attempt to return to profitability, but since these closures would occur in communities across the country, they prompted considerable congressional interest. The bills generally aimed to curtail or prevent the closure of auto dealerships, as well as plants and suppliers. One of the bills that became public law requires Chrysler and GM to provide dealers with specific criteria for the closures and gives dealers the right to pursue binding arbitration concerning their closures. The Automobile Dealer Economic Rights Restoration Act of 2009, as introduced in the House and Senate, would require the automakers to restore a dealership based on the dealer's request. As of July 30, 2010, this bill had not been enacted. We identified 17 bills affecting executive compensation and bonuses for TARP recipients in both the auto and financial industries. Most of these bills would require restrictions on or repeals of executive compensation and bonuses for TARP recipients. For example, the American Recovery and Reinvestment Act, which became law in February 2009, calls for, among other things, limits on compensation to the highest paid executives and employees at firms receiving TARP funding. Other less commonly addressed topics and an example of a bill related to each category are shown in table 7. As of July 30, 2010, these bills had not been enacted. Between May and December 2009, Chrysler and GM received 277 letters from members of Congress, including 65 sent to Chrysler and 212 to GM. Company officials told us that the volume of congressional letters they received sharply increased in the spring of 2009, after the companies received TARP assistance and when many operational changes that were part of their restructuring—such as plant and dealership closures—were being made. In total, 188 individual members of Congress sent letters to the companies over this time period. Many of the letters dealt with specific constituent concerns, with the closing of auto dealerships being the most common topic.
Of the letters sent to Chrysler and GM, 68 percent pertained to dealership closures, and the majority of these requested information on specific dealerships in the member's district or state or provided information for the companies' consideration when determining whether to close specific dealerships. For example, one letter stated that closing a particular dealership would result in customers having to drive up to 120 miles round trip to service their existing vehicle or purchase a new one. Other topics most commonly discussed in the letters included the renegotiation of union contracts with companies that haul cars from manufacturing plants to dealerships (17 percent) and the closure of manufacturing plants (5 percent). None of the letters pertained to executive compensation. Across all letters, 56 percent either explicitly requested a change to the companies' operations or stated a desired change. Dealerships were also the focus of the majority of requests for changes, with 62 percent suggesting that the companies reconsider the decision to close a particular dealership. The remainder of the letters that requested changes pertained to car-hauling contracts (16 percent), plant closures (5 percent), or other business decisions and operations such as the sale of brands (21 percent). We also reviewed e-mails that the companies' chief executive officers and most senior state and federal government relations officers had received from federal and state officials during calendar year 2009. Our review included e-mails sent by White House officials, the Treasury Department's chief advisors to the Presidential Task Force on the Auto Industry, members of Congress or their staff, and officials from the five states with the highest proportion of manufacturing in the auto sector. For the purpose of analysis, we grouped the e-mails into two categories: those from federal executive branch officials (Treasury and the White House), because these individuals had a defined role in the assistance to the companies, and those from federal legislative and state officials. For each group, we recorded information on the purpose and topic of each e-mail. According to the documentation the companies provided to us, the designated officials at Chrysler received 89 e-mails from White House and Treasury officials. The designated officials at GM received 20 e-mails. About 60 percent of the e-mails were from Treasury officials, and about 40 percent were from White House officials. Sixty-six percent of the e-mails were sent for the purpose of either arranging a call or a meeting between company and government officials (35 percent) or requesting information or input from the companies (31 percent). About 26 percent of the e-mails were sent to provide information to the companies. The topic of more than 33 percent of the e-mails was unclear, and more than 60 percent of the e-mails with an unclear topic were sent for the purpose of arranging a call or meeting. Of the e-mails with identifiable topics, the highest number pertained to bankruptcy or restructuring (29 percent of all e-mails), followed by manufacturing plants (12 percent) and dealerships (7 percent). Most of the e-mails that pertained to bankruptcy or restructuring were sent for the purpose of either providing information to or requesting information from the companies (34 percent each). For example, one e-mail requested that Chrysler review and provide comments on a set of talking points on Chrysler's restructuring.
Two of the e-mails—less than 2 percent—requested a change to the companies' operations or stated a desired change, such as an e-mail concerning GM's negotiations in a proposed sale of a company asset. Chrysler identified 1,221 e-mails it had received from congressional offices of both parties, mostly from staff, and state officials; GM identified 1,098. Due to the number of e-mails, we reviewed a random probability sample of them in order to develop estimates about the entire group of e-mails. Based on this review, we estimate that 86 percent of these e-mails came from congressional offices and the remaining 14 percent from government officials in the five states included in our analysis. The records in the sample showed that most of the congressional e-mails were sent from staff rather than from members of Congress. The purpose of the vast majority of congressional and state e-mails varied from requesting information to arranging a call or meeting to simply thanking the recipient. Most common were e-mails sent to provide information to the recipient (38 percent), followed by e-mails sent to request information (31 percent) and e-mails to arrange a call or meeting between government and company officials (22 percent). We estimate that 13 percent of the e-mails were sent for other reasons, such as to thank the recipient, or for reasons that could not be determined based on the content of the e-mail. Roughly 1 percent of the congressional and state e-mails either explicitly requested or stated a desired change to the companies' operations. The topics of the e-mails varied, with 27 percent focusing on dealerships and 11 percent on manufacturing plants. Thirty-six percent—the largest group—did not reference a specific topic. For example, many of the e-mails sent for the purpose of arranging a call or meeting did not indicate the reason for the requested call or meeting. In addition to the contacts named above, Heather Halliwell, Debra Johnson, Wes Phillips, and Raymond Sendejas (lead Assistant Directors); Carl Barden; Emily Chalmers; Philip Curtin; Rachel DeMarcus; Nancy Eibeck; Sarah Farkas; Cheryl Harris; Grace Haskins; Damian Kudelka; Ying Long; Matthew McDonald; Sarah M. McGrath; Michael Mikota; Susan Michal-Smith; SaraAnn Moessbauer; Marc Molino; Omyra Ramsingh; Christopher Ross; Andrew Stavisky; and Cynthia Taylor have made significant contributions to this report. Troubled Asset Relief Program: Continued Attention Needed to Ensure the Transparency and Accountability of Ongoing Programs. GAO-10-933T. Washington, D.C.: July 21, 2010. Troubled Asset Relief Program: Treasury's Framework for Deciding to Extend TARP Was Sufficient, but Could be Strengthened for Future Decisions. GAO-10-531. Washington, D.C.: June 30, 2010. Troubled Asset Relief Program: Further Actions Needed to Fully and Equitably Implement Foreclosure Mitigation Program. GAO-10-634. Washington, D.C.: June 24, 2010. Debt Management: Treasury Was Able to Fund Economic Stabilization and Recovery Expenditures in a Short Period of Time, but Debt Management Challenges Remain. GAO-10-498. Washington, D.C.: May 18, 2010. Financial Markets Regulation: Financial Crisis Highlights Need to Improve Oversight of Leverage at Financial Institutions and across System. GAO-10-555T. Washington, D.C.: May 6, 2010. Troubled Asset Relief Program: Update of Government Assistance Provided to AIG. GAO-10-475. Washington, D.C.: April 27, 2010.
Troubled Asset Relief Program: Automaker Pension Funding and Multiple Federal Roles Pose Challenges for the Future. GAO-10-492. Washington, D.C.: April 6, 2010. Troubled Asset Relief Program: Home Affordable Modification Program Continues to Face Implementation Challenges. GAO-10-556T. Washington, D.C.: March 25, 2010. Troubled Asset Relief Program: Treasury Needs to Strengthen Its Decision-Making Process on the Term Asset-Backed Securities Loan Facility. GAO-10-25. Washington, D.C.: February 5, 2010. Troubled Asset Relief Program: The U.S. Government Role as Shareholder in AIG, Citigroup, Chrysler, and General Motors and Preliminary Views on its Investment Management Activities. GAO-10-325T. Washington, D.C.: December 16, 2009. Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Year 2009 Financial Statements. GAO-10-301. Washington, D.C.: December 9, 2009. Troubled Asset Relief Program: Continued Stewardship Needed as Treasury Develops Strategies for Monitoring and Divesting Financial Interests in Chrysler and GM. GAO-10-151. Washington, D.C.: November 2, 2009. Troubled Asset Relief Program: One Year Later, Actions Are Needed to Address Remaining Transparency and Accountability Challenges. GAO-10-16. Washington, D.C.: October 8, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through September 25, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of September 18, 2009. GAO-10-24SP. Washington, D.C.: October 8, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-1048T. Washington, D.C.: September 24, 2009. Troubled Asset Relief Program: Status of Government Assistance Provided to AIG. GAO-09-975. Washington, D.C.: September 21, 2009. Troubled Asset Relief Program: Treasury Actions Needed to Make the Home Affordable Modification Program More Transparent and Accountable. GAO-09-837. Washington, D.C.: July 23, 2009. Troubled Asset Relief Program: Status of Participants' Dividend Payments and Repurchases of Preferred Stock and Warrants. GAO-09-889T. Washington, D.C.: July 9, 2009. Troubled Asset Relief Program: June 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-658. Washington, D.C.: June 17, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for October 28, 2008, through May 29, 2009, and Information on Financial Agency Agreements, Contracts, Blanket Purchase Agreements, and Interagency Agreements Awarded as of June 1, 2009. GAO-09-707SP. Washington, D.C.: June 17, 2009. Auto Industry: Summary of Government Efforts and Automakers' Restructuring to Date. GAO-09-553. Washington, D.C.: April 23, 2009. Small Business Administration's Implementation of Administrative Provisions in the American Recovery and Reinvestment Act. GAO-09-507R. Washington, D.C.: April 16, 2009. Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-504. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Capital Purchase Program Transactions for the Period October 28, 2008 through March 20, 2009 and Information on Financial Agency Agreements, Contracts, and Blanket Purchase Agreements Awarded as of March 13, 2009. GAO-09-522SP. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues.
GAO-09-539T. Washington, D.C.: March 31, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-484T. Washington, D.C.: March 19, 2009. Federal Financial Assistance: Preliminary Observations on Assistance Provided to AIG. GAO-09-490T. Washington, D.C.: March 18, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-474T. Washington, D.C.: March 11, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-417T. Washington, D.C.: February 24, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-359T. Washington, D.C.: February 5, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-296. Washington, D.C.: January 30, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-266T. Washington, D.C.: December 10, 2008. Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-247T. Washington, D.C.: December 5, 2008. Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-242T. Washington, D.C.: December 4, 2008. Troubled Asset Relief Program: Status of Efforts to Address Defaults and Foreclosures on Home Mortgages. GAO-09-231T. Washington, D.C.: December 4, 2008. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008. Guidelines for Rescuing Large Failing Firms and Municipalities. GAO/GGD-84-34. Washington, D.C.: March 29, 1984. | The recent financial crisis resulted in a wide-ranging federal response that included providing extraordinary assistance to several major corporations. As a result of actions under the Troubled Asset Relief Program (TARP) and others, the government was a shareholder in the American International Group Inc. (AIG); Bank of America; Citigroup, Inc. (Citigroup); Chrysler Group LLC (Chrysler); General Motors Company (GM); Ally Financial/GMAC, Inc. (GMAC); and Fannie Mae and Freddie Mac (Enterprises). The government ownership interest in these companies resulted from financial assistance that was aimed at stabilizing the financial markets, housing finance, or specific market segments. This report (1) describes the government's ownership interest and evaluates the extent of government involvement in these companies, (2) discusses the government's management and monitoring of its investments and exit strategies, and (3) identifies lessons learned from the federal actions. This work was done in part with the Special Inspector General for the Troubled Asset Relief Program (SIGTARP) and involved reviewing relevant documentation related to these companies and the federal assistance provided. GAO interviewed officials at Treasury, Federal Reserve, Federal Housing Finance Agency (FHFA), and the banking regulators, as well as the senior executives and relevant officials at the companies that received exceptional assistance.
The extent of government equity interest in companies receiving exceptional assistance varied, ranging from owning preferred shares with no voting rights except in limited circumstances (Bank of America until it repurchased its shares in 2009) to owning common shares with voting rights (Chrysler, Citigroup, GM, and GMAC) to acting as a conservator (the Enterprises). In each case, the government required changes to the companies' corporate governance structures and executive compensation. For example, of the 92 directors currently serving on the boards of these companies, 73 were elected since November 2008. Many of these new directors were nominated by their respective boards, while others were designated by the government and other significant shareholders as a result of their common share ownership. The level of involvement in the companies varied depending on whether the government served as an investor, creditor, or conservator. For example, as an investor in Bank of America, Citigroup, and GMAC, the Department of the Treasury (Treasury) had minimal or no involvement in their activities. As both an investor in and a creditor of AIG, Chrysler, and GM, the government has required--as a condition of receiving assistance--some combination of the restructuring of the companies, the submission of periodic financial reports, and greater interaction with company personnel. FHFA--using its broad authority as a conservator--has instituted a number of requirements and practices that involve it in the Enterprises' operations.
You are an expert at summarizing long articles. Proceed to summarize the following text:
DOD has taken some steps to implement internal safeguards to help ensure that the NSPS performance management system is fair, effective, and credible; however, we believe continued monitoring of safeguards is needed to help ensure that DOD's actions are effective as implementation proceeds. Specifically, we reported in September 2008 that DOD had taken some steps to (1) involve employees in the system's design and implementation; (2) link employee objectives and the agency's strategic goals and mission; (3) train and retrain employees in the system's operation; (4) provide ongoing performance feedback between supervisors and employees; (5) better link individual pay to performance in an equitable manner; (6) allocate agency resources for the system's design, implementation, and administration; (7) provide reasonable transparency of the system and its operation; (8) impart meaningful distinctions in individual employee performance; and (9) include predecisional internal safeguards to determine whether rating results are fair, consistent, and equitable. For example, all 12 sites we visited trained employees on NSPS, and the DOD-wide tool used to compose self-assessments links employees' objectives to the commands' or agencies' strategic goals and mission. However, we determined that DOD could immediately improve its implementation of three safeguards. First, DOD's implementation of NSPS does not provide employees with adequate transparency over their rating results because it does not require commands or pay pools to publish their respective ratings and share distributions to employees. According to DOD, distributing aggregate data to employees is an effective means for providing transparency, and NSPS program officials at all four components' headquarters told us that publishing overall results is considered a best practice. In addition, 3 of the 12 sites we visited decided not to publish the overall final rating and share distribution results. Without transparency over rating and share distributions, employees may believe they are not being rated fairly, which ultimately can undermine their confidence in the system. To address this finding, we recommended that DOD require overall final rating results to be published. DOD concurred with this recommendation and, in 2008, revised its NSPS regulations and guidance to require commands to publish the final overall rating results. Second, NSPS guidance may discourage rating officials from making meaningful distinctions in employee performance because this guidance emphasized that most employees should be evaluated at "3" (or "valued performer") on a scale of 1 to 5. According to the NSPS implementing issuance, rating results should be based on how well employees complete their job objectives using the performance indicators. Although DOD and most of the installations we visited emphasized that there was not a forced distribution of ratings, some pay pool panel members acknowledged that there was a hesitancy to award employee ratings in categories other than "3." Unless NSPS is implemented in a manner that encourages meaningful distinctions in employee ratings in accordance with employees' performance, there will be an unspoken forced distribution of ratings, and employees' confidence in the system may be undermined. As a result, we recommended that DOD encourage pay pools and supervisors to use all categories of ratings as appropriate.
DOD partially concurred with this recommendation, and in April 2009, DOD issued additional guidance prohibiting the forced distribution of ratings under NSPS. Third, DOD does not require a third party to analyze rating results for anomalies prior to finalizing ratings. To address this finding, GAO recommended that DOD require predecisional demographic and other analysis; however, DOD did not concur, stating that a postdecisional analysis is more useful. Specifically, in commenting on our prior report, DOD stated that its postdecisional analysis of final rating results by demographics was sufficient to identify barriers and corrective actions. We are currently assessing DOD's postdecisional analysis approach as part of our ongoing review of the implementation of NSPS. Although DOD civilian employees under NSPS responded positively regarding some aspects of the NSPS performance management system, DOD does not have an action plan to address the generally negative employee perceptions of NSPS identified in both the department's Status of Forces Survey of civilian employees and discussion groups we held at 12 select installations. According to our analysis of DOD's survey from May 2007, NSPS employees expressed slightly more positive attitudes than their DOD colleagues who remain under the General Schedule system about some goals of performance management, such as connecting pay to performance and receiving feedback regularly. For example, an estimated 43 percent of NSPS employees, compared to an estimated 25 percent of all other DOD employees, said that pay raises depend on how well employees perform their jobs. However, in some instances, DOD's survey results showed a decline in attitudes among employees who had been under NSPS the longest. Employees who were among the first converted to NSPS (designated spiral 1.1) were steadily more negative about NSPS from the May 2006 to the May 2007 DOD survey. At the time of the May 2006 administration of the Status of Forces Survey for civilians, spiral 1.1 employees had received training on the system and had begun the conversion process, but had not yet gone through a rating cycle and payout under the new system. As part of this training, employees were exposed to the intent of the new system and the goals of performance management and NSPS, which include annual rewards for high performance and increased feedback on employee performance. As DOD and the components proceeded with implementation of the system, survey results showed a decrease in employees' optimism about the system's ability to fulfill its intent and reward employees for performance. The changes in attitude reflected in DOD's employee survey are slight but indicate a movement in employee perceptions. Most of the movement in responses was negative. Specifically, in response to a question about the impact NSPS will have on personnel practices at DOD, the share of positive responses decreased from an estimated 40 percent of spiral 1.1 employees in May 2006 to an estimated 23 percent in May 2007. Further, when asked how NSPS compared to previous personnel systems, an estimated 44 percent said it was worse in November 2006, compared to an estimated 50 percent in May 2007. Similarly, employee responses to questions about performance management in general were also more negative from May 2006 to May 2007.
Specifically, the results of the May 2006 survey estimated that about 67 percent of spiral 1.1 employees agreed that the performance appraisal is a fair reflection of performance, compared to 52 percent in May 2007. Further, the percentage of spiral 1.1 employees who agreed that the NSPS performance appraisal system improves organizational performance decreased from an estimated 35 percent to 23 percent. Our discussion group meetings gave rise to views consistent with DOD's survey results. Although the results of our discussion groups are not generalizable to the entire population of DOD civilians, the themes that emerged from our discussions provide valuable insight into civilian employees' perceptions about the implementation of NSPS and augment DOD's survey findings. Some civilian employees and supervisors under NSPS seemed optimistic about the intent of the system; however, most of the DOD employees and supervisors we spoke with expressed a consistent set of wide-ranging concerns. Specifically, employees noted (1) NSPS's negative effect on employee motivation and morale, (2) the excessive amount of time and effort required to navigate the performance management process, (3) the potential influence that employees' and supervisors' writing skills have on panels' assessments of employee ratings, (4) the lack of transparency and understanding of the pay pool panel process, and (5) the rapid pace at which the system was implemented, which often resulted in employees feeling unprepared and unable to find answers to their questions. These negative attitudes are not surprising given that organizational transformations often entail fundamental and radical change that requires an adjustment period to gain employee acceptance and trust. To address employee attitudes and acceptance, OPM issued guidance that recommends—and we believe it is a best practice—that agencies use employee survey results to provide feedback to employees and develop and implement an action plan that guides their efforts to address the results of employee assessments. However, according to Program Executive Office officials, DOD has not developed a specific action plan to address critical issues identified by employee perceptions, because the department wants employees to have more time under the system before making changes. Without such a plan, DOD is unable to make changes that address employee perceptions that could result in greater employee acceptance of NSPS. We therefore recommended, in our September 2008 report, that DOD develop and implement a specific action plan to address employee perceptions of NSPS ascertained from DOD's surveys and employee focus groups. The plan should include actions to mitigate employee concerns about, for example, the potential influence that employees' and supervisors' writing skills have on the panels' assessment of employee ratings or other issues consistently identified by employees or supervisors. DOD partially concurred with our recommendation, noting that it will address areas of weakness identified in its comprehensive, in-progress evaluation of NSPS and that it is institutionalizing a continuous improvement strategy. Since our 2008 review, NSPS officials at DOD have told us that they are working on an action plan; however, to date the department has not provided us with a plan for review. DOD's implementation of a more performance- and results-based personnel system has positioned the agency at the forefront of a significant transition facing the federal government.
We recognize that DOD faces many challenges in implementing NSPS, as any organization would in implementing a large-scale organizational change. NSPS is a new program, and organizational change takes time for employees to accept. Continued monitoring of internal safeguards is needed to help ensure that DOD's actions are effective as implementation proceeds. Moreover, until DOD develops an action plan and takes specific steps to mitigate negative employee perceptions of NSPS, DOD civilian employees will likely continue to question the fairness of their ratings and lack confidence in the system. The degree of ultimate success of NSPS largely depends on the extent to which DOD incorporates internal safeguards and addresses employee perceptions. Moving forward, we hope that the Defense Business Board considers our previous work on NSPS as it assesses how NSPS operates and its underlying policies. This concludes my prepared statement. I would be happy to respond to any questions that you or members of the subcommittee may have at this time. For further information about this statement, please contact Brenda S. Farrell, Director, Defense Capabilities and Management, at (202) 512-3604, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Key contributors to this statement include Marion Gatling (Assistant Director), Lori Atkinson, Renee Brown, and Lonnie McAllister. The Department of Defense (DOD) is in the process of implementing its new human capital system for managing civilian personnel—the National Security Personnel System (NSPS). Key components of NSPS include compensation, classification, and performance management. Implementation of NSPS could have far-reaching implications, not just for DOD, but for civil service reform across the federal government. As of February 2009, about 205,000 civilian employees were under NSPS. Based on GAO's prior work reviewing performance management in the public sector, GAO developed an initial list of safeguards that NSPS should include to ensure it is fair, effective, and credible. In 2008, Congress directed GAO to evaluate, among other things, the extent to which DOD implemented accountability mechanisms, including those in 5 U.S.C. section 9902(b)(7) and other internal safeguards in NSPS. While DOD has taken some steps to implement internal safeguards to ensure that NSPS is fair, effective, and credible, in late 2008, GAO found that the implementation of three safeguards could be improved. First, DOD does not require a third party to analyze rating results for anomalies prior to finalizing ratings, and thus it does not have a process to determine whether ratings are discriminatory before they are finalized. Without predecisional analysis, employees may lack confidence in the fairness and credibility of NSPS. To address this finding, GAO recommended that DOD require predecisional demographic and other analysis; however, DOD did not concur, stating that postdecisional analysis is more useful. GAO continues to believe this recommendation has merit. Second, the process lacks transparency because DOD does not require commands to publish final rating distributions, though doing so is recognized as a best practice by DOD. Without transparency over rating distributions, employees may not believe they are being rated fairly. To address this finding, GAO recommended that DOD require publication of overall final rating results. DOD concurred with this recommendation and in 2008 revised its guidance to require such publication. Third, NSPS guidance may discourage rating officials from making meaningful distinctions in employee ratings because it indicated that the majority of employees should be rated at the "3" level, on a scale of 1 to 5, resulting in hesitancy to award ratings in other categories.
Unless implementation of NSPS encourages meaningful distinctions in employee performance, employees may believe there is an unspoken forced distribution of ratings, and their confidence in the system will be undermined. To address this finding, GAO recommended that DOD encourage pay pools and supervisors to use all categories of ratings as appropriate. DOD partially concurred with this recommendation, but has not yet taken any action to implement it. This statement is based on GAO’s September 2008 report, which determined (1) the extent to which DOD has implemented internal safeguards to ensure NSPS is fair, effective, and credible; and (2) how DOD civilians perceive NSPS and what actions DOD has taken to address these perceptions. For that report, GAO analyzed relevant documents and employee survey results; interviewed appropriate officials; and conducted discussion groups at 12 selected installations. GAO recommended ways to better address the safeguards and employee perceptions. View GAO-09-464T for key components. For more information, contact Brenda S. Farrell at (202) 512-3604 or [email protected]. Although DOD employees under NSPS responded positively regarding some aspects of performance management, DOD does not have an action plan to address the generally negative employee perceptions of NSPS. According to DOD’s survey of civilian employees, overall, employees under NSPS are positive about some aspects of performance management, such as connecting pay to performance. However, employees who had the most experience under NSPS showed negative movement in their perceptions. For example, the percentage of NSPS employees who believe that NSPS will have a positive effect on DOD’s personnel practices declined from an estimated 40 percent in 2006 to 23 percent in 2007. Some negative perceptions also emerged during discussion groups that GAO held. For example, employees and supervisors were concerned about the excessive amount of time required to navigate the process. While it is reasonable for DOD to allow employees some time to accept NSPS, not addressing certain negative employee perceptions could jeopardize employee acceptance and successful implementation of NSPS. As a result, GAO recommended that DOD develop and implement an action plan to address employee concerns about NSPS. DOD partially concurred with GAO’s recommendation, but has not yet developed an action plan. GAO is recommending that DOD improve the implementation of some safeguards and develop and implement an action plan to address employee concerns about NSPS. DOD generally concurred with our recommendations, with the exception of one requiring predecisional review of ratings. To view the full product, including the scope and methodology, click on GAO-08-773. For more information, contact Brenda S. Farrell at (202) 512-3604 or [email protected]. Although DOD employees under NSPS are positive regarding some aspects of performance management, DOD does not have an action plan to address the generally negative employee perceptions of NSPS. According to DOD’s survey of civilian employees, employees under NSPS are positive about some aspects of performance management, such as connecting pay to performance. However, employees who had the most experience under NSPS showed negative movement in their perceptions. For example, the percentage of NSPS employees who believe that NSPS will have a positive effect on DOD’s personnel practices declined from 40 percent in 2006 to 23 percent in 2007. Negative perceptions also emerged during discussion groups that GAO held. For example, employees and supervisors were concerned about the excessive amount of time required to navigate the process. Although the Office of Personnel Management issued guidance recommending that agencies use employee survey results to provide feedback to employees and implement an action plan to guide their efforts to address employee assessments, DOD has not developed an action plan to address employee perceptions.
While it is reasonable for DOD to allow employees some time to accept NSPS because organizational changes often require time to adjust, it is prudent to address certain negative employee perceptions. Without such a plan, DOD is unable to make changes that could result in greater employee acceptance of NSPS. Given that a large-scale organizational change initiative, such as the Department of Defense’s (DOD) National Security Personnel System (NSPS), is a substantial commitment that will take years to complete, it is important that DOD and Congress be kept informed of the full cost of implementing NSPS. Under the Comptroller General’s authority to conduct evaluations on his own initiative, GAO analyzed the extent to which DOD has (1) fully estimated total costs associated with the implementation of NSPS and (2) expended or obligated funds to design and implement NSPS through fiscal year 2006. GAO interviewed department officials and analyzed the NSPS Program Executive Office’s (PEO), and the military services’ and the Washington Headquarters Services’ (hereafter referred to as the components) cost estimates and reports of expended and obligated funds. DOD’s November 2005 estimate that it will cost $158 million to implement NSPS does not include the full costs that the department expects to incur as a result of implementing the new system. Federal financial accounting standards state that reliable information on the costs of federal programs and activities is crucial for effective management of government operations and recommend that full costs of programs and their outputs be provided to assist Congress and executives in making informed decisions on program resources and to ensure that programs get expected and efficient results. The full costs include both those costs specifically identifiable to carrying out the program, or direct costs, and those costs that are common to multiple programs but cannot be specifically identified with any particular program, or indirect costs. While the standards emphasize that full cost information is essential for managing federal programs, their activities, and outputs, the standards provide that items may be omitted from cost information if that omission would not change or influence the judgment of a reasonable person relying on the cost information. Based on GAO’s review of documentation provided by DOD and discussions with department officials, GAO found that DOD’s estimate includes some direct costs, such as the start-up and operation of the NSPS PEO and the development and delivery of new NSPS training courses, but it does not include other direct costs such as the full salary costs of all civilian and military personnel who directly support NSPS activities departmentwide. Before developing its estimate, DOD had not fully defined all the direct and indirect costs needed to manage the program. Without a better estimate, decision makers—within DOD and Congress—will not have complete information about whether adequate resources are being provided for implementing NSPS. GAO recommends that DOD define all costs needed to manage NSPS, prepare a revised estimate of those costs for implementing the system in accordance with federal financial accounting standards, and develop a comprehensive oversight framework to ensure that all funds expended or obligated to design and implement NSPS are fully captured and reported. In reviewing a draft of this report, DOD generally concurred with GAO’s recommendations. www.gao.gov/cgi-bin/getrpt?GAO-07-851. The total amount of funds DOD has expended or obligated to design and implement NSPS during fiscal years 2004 through 2006 cannot be determined because DOD has not established an oversight mechanism to ensure that these costs are fully captured. In May 2005, the NSPS Senior Executive established guidance for tracking and reporting NSPS implementation costs that requires the components to develop mechanisms to capture these costs and to report quarterly on their costs to the NSPS PEO. However, this guidance does not define the direct and indirect costs DOD requires that the components capture. DOD’s pervasive financial management deficiencies have been the basis for GAO’s designation of this area as high-risk since 1995.
GAO’s review of submitted reports from the components found that their official accounting systems do not capture the total funds expended or obligated to design and implement NSPS. Without an effective oversight mechanism to ensure that the official accounting systems capture all appropriate costs, DOD and Congress do not have visibility over the actual costs to design and implement NSPS. To view the full product, including the scope and methodology, click on the link above. For more information, contact Derek Stewart at (202) 512-5559 or [email protected]. People are critical to any agency transformation because they define an agency’s culture, develop its knowledge base, promote innovation, and are its most important asset. Thus, strategic human capital management at the Department of Defense (DOD) can help it marshal, manage, and maintain the people and skills needed to meet its critical missions. In November 2003, Congress provided DOD with significant flexibility to design a modern human resource management system. On November 1, 2005, DOD and the Office of Personnel Management (OPM) jointly released the final regulations on DOD’s new human resource management system, known as the National Security Personnel System (NSPS). GAO believes that DOD’s final NSPS regulations contain many of the basic principles that are consistent with proven approaches to strategic human capital management. For instance, the final regulations provide for (1) a flexible, contemporary, market-based and performance-oriented compensation system—such as pay bands and pay for performance; (2) giving greater priority to employee performance in its retention decisions in connection with workforce rightsizing and reductions-in-force; and (3) involvement of employee representatives throughout the implementation process, such as having opportunities to participate in developing the implementing issuances. However, future actions will determine whether such labor relations efforts will be meaningful and credible. Several months ago, with the release of the proposed regulations, GAO observed that some parts of the human resource management system raised questions for DOD, OPM, and Congress to consider in the areas of pay and performance management, adverse actions and appeals, and labor management relations. GAO also identified multiple implementation challenges for DOD once the final regulations for the new system were issued. Despite the positive aspects of the regulations, GAO has several areas of concern. First, DOD has considerable work ahead to define the important details for implementing its system—such as how employee performance expectations will be aligned with the department’s overall mission plan and other measures of performance, and how DOD would promote consistency and provide general oversight of the performance management system to ensure it is administered in a fair, credible, transparent manner. These and other critically important details must be defined in conjunction with applicable stakeholders. Second, the regulations merely allow, rather than require, the use of core competencies that can help to provide consistency and clearly communicate to employees what is expected of them. Third, although the regulations do provide for continuing collaboration with employee representatives, they do not identify a process for the continuing involvement of individual employees in the implementation of NSPS. This testimony provides GAO’s overall observations on selected provisions of the final regulations. Going forward, GAO believes that (1) DOD would benefit from developing a comprehensive communications strategy, (2) DOD must ensure that it has the necessary institutional infrastructure in place to make effective use of its new authorities, (3) a chief management officer or similar position is essential to effectively provide sustained and committed leadership to the department’s overall business transformation efforts, including NSPS, and (4) DOD should develop procedures and methods to initiate implementation efforts relating to NSPS. www.gao.gov/cgi-bin/getrpt?GAO-06-227T. To view the full product, including the scope and methodology, click on the link above.
For more information, contact Derek B. Stewart at (202) 512-5559 or [email protected]. While GAO strongly supports human capital reform in the federal government, how it is done, when it is done, and the basis on which it is done can make all the difference in whether such efforts are successful. DOD’s regulations are especially critical and need to be implemented properly because of their potential implications for related governmentwide reform. In this regard, in our view, classification, compensation, critical hiring, and workforce restructuring reforms should be pursued on a governmentwide basis before and separate from any broad-based labor-management or due process reforms. The Department of Defense’s (DOD) new personnel system—the National Security Personnel System (NSPS)—will have far-reaching implications not just for DOD, but for civil service reform across the federal government. The National Defense Authorization Act for Fiscal Year 2004 gave DOD significant authorities to redesign the rules, regulations, and processes that govern the way that more than 700,000 defense civilian employees are hired, compensated, promoted, and disciplined. In addition, NSPS could serve as a model for governmentwide transformation in human capital management. However, if not properly designed and effectively implemented, it could severely impede progress toward a more performance- and results-based system for the federal government as a whole. DOD’s current process to design its new personnel management system consists of four stages: (1) development of design options, (2) assessment of design options, (3) issuance of proposed regulations, and (4) a statutory public comment period, meet and confer period with employee representatives, and congressional notification period. DOD’s initial design process was unrealistic and inappropriate. However, after a strategic reassessment, DOD adjusted its approach to reflect a more cautious and deliberative process that involved more stakeholders. This report (1) describes DOD’s process to design its new personnel management system, (2) analyzes the extent to which DOD’s process reflects key practices for successful transformations, and (3) identifies the most significant challenges DOD faces in implementing NSPS. DOD’s NSPS design process generally reflects four of six selected key practices for successful organizational transformations. First, DOD and OPM have developed a process to design the new personnel system that is supported by top leadership in both organizations. Second, from the onset, a set of guiding principles and key performance parameters have guided the NSPS design process. Third, DOD has a dedicated team in place to design and implement NSPS and manage the transformation process. Fourth, DOD has established a timeline, albeit an ambitious one, and an implementation plan. The design process, however, is lacking two other practices. First, DOD developed and implemented a written communication strategy document, but the strategy is not comprehensive. It does not identify all key external stakeholders and their concerns, and does not tailor key messages to specific stakeholder groups. Failure to adequately consider a wide variety of people and cultural issues can lead to unsuccessful transformations. Second, while the process has involved employees through town hall meetings and other mechanisms, it has not included employee representatives in the working groups that drafted the design options. It should be noted that 10 federal labor unions have filed suit alleging that DOD failed to abide by the statutory requirement to include employee representatives in the development of DOD’s new labor relations system authorized as part of NSPS. A successful transformation must provide for meaningful involvement by employees and their representatives to gain their input into and understanding of the changes that will occur. GAO is making recommendations to improve the comprehensiveness of the NSPS communication strategy and to evaluate the impact of NSPS. DOD did not concur with one recommendation and partially concurred with two others. www.gao.gov/cgi-bin/getrpt?GAO-05-730. To view the full product, including the scope and methodology, click on the link above.
For more information, contact Derek B. Stewart at (202) 512-5559 or [email protected]. DOD will face multiple implementation challenges. For example, in addition to the challenges of continuing to involve employees and other stakeholders and providing adequate resources to implement the system, DOD faces the challenge of ensuring an effective, ongoing two-way communication strategy and evaluating the new system. In recent testimony, GAO stated that DOD’s communication strategy must include the active and visible involvement of a number of key players, including the Secretary of Defense, for successful implementation of the system. Moreover, DOD must ensure sustained and committed leadership after the system is fully implemented and the NSPS Senior Executive and the Program Executive Office transition out of existence. To provide sustained leadership attention to a range of business transformation initiatives, like NSPS, GAO recently recommended the creation of a chief management official at DOD. The federal government is in a period of profound transition and faces an array of challenges and opportunities to enhance performance, ensure accountability, and position the nation for the future. High-performing organizations have found that to successfully transform themselves, they must often fundamentally change their culture so that they are more results-oriented, customer-focused, and collaborative in nature. To foster such a culture, these organizations recognize that an effective performance management system can be a strategic tool to drive internal change and achieve desired results. Public sector organizations both in the United States and abroad have implemented a selected, generally consistent set of key practices for effective performance management that collectively create a clear linkage—“line of sight”—between individual performance and organizational success. These key practices include the following. 1. Align individual performance expectations with organizational goals. An explicit alignment helps individuals see the connection between their daily activities and organizational goals. 2. Connect performance expectations to crosscutting goals. Placing an emphasis on collaboration, interaction, and teamwork across organizational boundaries helps strengthen accountability for results. 3. Provide and routinely use performance information to track organizational priorities. Individuals use performance information to manage during the year, identify performance gaps, and pinpoint improvement opportunities. Based on previously issued reports on public sector organizations’ approaches to reinforce individual accountability for results, GAO identified key practices that federal agencies can consider as they develop modern, effective, and credible performance management systems. 4. Require follow-up actions to address organizational priorities. By requiring and tracking follow-up actions on performance gaps, organizations underscore the importance of holding individuals accountable for making progress on their priorities. 5. Use competencies to provide a fuller assessment of performance. Competencies define the skills and supporting behaviors that individuals need to effectively contribute to organizational results. 6. Link pay to individual and organizational performance. Pay, incentive, and reward systems that link employee knowledge, skills, and contributions to organizational results are based on valid, reliable, and transparent performance management systems with adequate safeguards. 7. Make meaningful distinctions in performance. Effective performance management systems strive to provide candid and constructive feedback and the necessary objective information and documentation to reward top performers and deal with poor performers. www.gao.gov/cgi-bin/getrpt?GAO-03-488. 8. Involve employees and stakeholders to gain ownership of performance management systems. Early and direct involvement helps increase employees’ and stakeholders’ understanding and ownership of the system and belief in its fairness.
To view the full report, including the scope and methodology, click on the link above. For more information, contact J. Christopher Mihm at (202) 512-6806 or [email protected]. 9. Maintain continuity during transition. Because cultural transformations take time, performance management systems reinforce accountability for change management and other organizational goals. Post-Hearing Questions for the Record Related to the Department of Defense’s National Security Personnel System (NSPS). GAO-06-582R. Washington, D.C.: March 24, 2006. Human Capital: Designing and Managing Market-Based and More Performance-Oriented Pay Systems. GAO-05-1048T. Washington, D.C.: September 27, 2005. Questions for the Record Related to the Department of Defense’s National Security Personnel System. GAO-05-771R. Washington, D.C.: June 14, 2005. Questions for the Record Regarding the Department of Defense’s National Security Personnel System. GAO-05-770R. Washington, D.C.: May 31, 2005. Post-Hearing Questions Related to the Department of Defense’s National Security Personnel System. GAO-05-641R. Washington, D.C.: April 29, 2005. Human Capital: Selected Agencies’ Statutory Authorities Could Offer Options in Developing a Framework for Governmentwide Reform. GAO-05-398R. Washington, D.C.: April 21, 2005. Human Capital: Preliminary Observations on Proposed Regulations for DOD’s National Security Personnel System. GAO-05-559T. Washington, D.C.: April 14, 2005. Human Capital: Preliminary Observations on Proposed Department of Defense National Security Personnel System Regulations. GAO-05-517T. Washington, D.C.: April 12, 2005. Human Capital: Preliminary Observations on Proposed DOD National Security Personnel System Regulations. GAO-05-432T. Washington, D.C.: March 15, 2005. Human Capital: Principles, Criteria, and Processes for Governmentwide Federal Human Capital Reform. GAO-05-69SP. Washington, D.C.: December 1, 2004. Human Capital: Implementing Pay for Performance at Selected Personnel Demonstration Projects. GAO-04-83. Washington, D.C.: January 23, 2004. Human Capital: Building on DOD’s Reform Efforts to Foster Governmentwide Improvements. GAO-03-851T. Washington, D.C.: June 4, 2003. Human Capital: DOD’s Civilian Personnel Strategic Management and the Proposed National Security Personnel System. GAO-03-493T. Washington, D.C.: May 12, 2003. Defense Transformation: DOD’s Proposed Civilian Personnel System and Governmentwide Human Capital Reform. GAO-03-741T. Washington, D.C.: May 1, 2003. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | DOD is in the process of implementing this human capital system, and according to DOD, about 212,000 civilian employees are currently under the system. On February 11, 2009, however, the House Armed Services Committee and its Subcommittee on Readiness asked DOD to halt conversions of any additional employees to NSPS until the administration and Congress could properly address the future of DOD's personnel management system. On March 16, 2009, DOD and the Office of Personnel Management (OPM) announced an upcoming review of NSPS policies, regulations, and practices.
According to DOD, the department has delayed any further transitions of employees into NSPS until at least October 2009—pending the outcome of its review. Furthermore, on May 14, 2009, the Deputy Secretary of Defense asked the Defense Business Board to form what has become this task group to review NSPS to help the department determine, among other things, whether NSPS is operating in a fair, transparent, and effective manner. This statement focuses on the performance management aspect of NSPS, specifically (1) the extent to which DOD has implemented internal safeguards to ensure the fairness, effectiveness, and credibility of NSPS and (2) how DOD civilian personnel perceive NSPS and what actions DOD has taken to address these perceptions. It is based on the work we reported on in our September 2008 report, which was conducted in response to a mandate in the National Defense Authorization Act for Fiscal Year 2008. This mandate also directed us to continue examining DOD efforts in these areas for the next 2 years. We currently have ongoing work reviewing the implementation of NSPS for the second year, and we also will perform another review next year. DOD has taken some steps to implement internal safeguards to help ensure that the NSPS performance management system is fair, effective, and credible; however, we believe continued monitoring of safeguards is needed to help ensure that DOD's actions are effective as implementation proceeds. Specifically, we reported in September 2008 that DOD had taken some steps to (1) involve employees in the system's design and implementation; (2) link employee objectives and the agency's strategic goals and mission; (3) train and retrain employees in the system's operation; (4) provide ongoing performance feedback between supervisors and employees; (5) better link individual pay to performance in an equitable manner; (6) allocate agency resources for the system's design, implementation, and administration; (7) provide reasonable transparency of the system and its operation; (8) impart meaningful distinctions in individual employee performance; and (9) include predecisional internal safeguards to determine whether rating results are fair, consistent, and equitable. For example, all 12 sites we visited trained employees on NSPS, and the DOD-wide tool used to compose self-assessments links employees' objectives to the commands' or agencies' strategic goals and mission. However, we determined that DOD could immediately improve its implementation of three safeguards. Although DOD civilian employees under NSPS responded positively regarding some aspects of the NSPS performance management system, DOD does not have an action plan to address the generally negative employee perceptions of NSPS identified in both the department's Status of Forces Survey of civilian employees and discussion groups we held at 12 select installations. According to our analysis of DOD's survey from May 2007, NSPS employees expressed slightly more positive attitudes than their DOD colleagues who remain under the General Schedule system about some goals of performance management, such as connecting pay to performance and receiving feedback regularly. For example, an estimated 43 percent of NSPS employees, compared to an estimated 25 percent of all other DOD employees, said that pay raises depend on how well employees perform their jobs. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
NOAA’s Office of Coast Survey provides navigational services intended to ensure the safe and efficient passage of maritime commerce through oceans and coastal waters within U.S. jurisdiction, and in the Great Lakes. In this capacity, the Office of Coast Survey develops, updates, and maintains more than 1,000 nautical charts—maps used for navigating waterways—containing information about water depth, the shape of the water body floor and coastline, the location of possible obstructions, and other physical features within these water bodies. According to NOAA documentation, nautical charts provide information critical to safe navigation, such as symbols that inform ship captains or recreational boaters if an area is shallow or has dangerous conditions that could imperil navigation. Hydrography is the science that informs the surveying methods for collecting the data used to create and update nautical charts. In addition, information collected through hydrographic surveying supports a variety of maritime functions such as port and harbor maintenance, beach erosion and replenishment studies, management of coastal areas, and offshore resource development. NOAA operates four ships that predominantly support hydrographic surveys: the Fairweather, Ferdinand R. Hassler, Rainier, and Thomas Jefferson (see fig. 1). The Hassler, commissioned in 2012, is the newest of the four vessels. NOAA also procures and oversees hydrographic surveying and related services from the private sector. NOAA officials said the congressional committee reports accompanying NOAA’s appropriations acts for fiscal years 2007 through 2016 provided about $342 million of the agency’s appropriation for the Hydrographic Survey Priorities/Contracts budget line item. The most recent contracts were awarded in June 2014 to eight hydrographic survey companies for a 5-year period and are valued at up to $250 million over this contract period, based on NOAA documents. In addition, according to NOAA officials, NOAA works with other federal agencies to collect hydrographic survey data. For example, the U.S. Army Corps of Engineers provides such data for the federal harbor waterways that support the U.S. port system. NOAA primarily uses two kinds of sonar for hydrographic surveying—multibeam and side scan. Multibeam sonar measures the depth of the water by analyzing the time it takes sound waves to travel from a vessel to the bottom of the water body and back and provides detailed information about the water body floor. Multibeam sonar is generally used in areas such as the northeast United States and Alaska, where the water body floor is complex and often strewn with rocks. See figure 2 for an illustration of a NOAA ship using multibeam sonar. In contrast, in relatively shallow, flat areas like those along the mid-Atlantic coast, NOAA uses side scan sonar. Side scan sonar creates an image of the water body floor but does not determine depths. If NOAA finds a shipwreck or obstruction using side scan sonar, it will determine its depth using multibeam sonar. See figure 3 for an illustration of a NOAA ship using side scan sonar. NOAA’s National Ocean Service is responsible for providing data, tools, and services that support mapping, charting, and maritime transportation activities, among other things. Within the National Ocean Service, the Office of Coast Survey directs the agency’s hydrographic surveying operations.
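The multibeam measurement described above is, at its core, a two-way travel-time calculation: depth is half the round-trip echo time multiplied by the speed of sound in water. The sketch below is a minimal illustration, not NOAA code; the nominal 1,500 m/s sound speed and the function name are assumptions for the example (operational surveys apply sound speed profiles measured through the water column).

# Minimal sketch of the multibeam depth calculation described above.
# The 1,500 m/s figure is a common nominal value for seawater, used here
# as an assumption; operational systems use measured sound speed profiles.
NOMINAL_SOUND_SPEED_M_PER_S = 1500.0

def depth_from_two_way_time(travel_time_s: float,
                            sound_speed_m_per_s: float = NOMINAL_SOUND_SPEED_M_PER_S) -> float:
    """Return water depth in meters from a round-trip echo time in seconds."""
    return sound_speed_m_per_s * travel_time_s / 2.0

# Example: an echo returning after 0.08 seconds implies about 60 meters of water.
print(depth_from_two_way_time(0.08))  # 60.0

This two-way travel-time principle underlies the depth soundings that the Office of Coast Survey compiles into its nautical charts.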
In particular, it develops survey specifications, evaluates new technologies, and implements procedures for acquiring hydrographic survey data, processing the data, and producing nautical charts. Within the Office of Coast Survey, the Hydrographic Surveys Division is responsible for planning, managing, and supporting hydrographic surveying operations. This includes compiling, verifying, and certifying hydrographic data, as well as determining hydrographic survey priorities and issuing an annual hydrographic survey prioritization report. The Hydrographic Surveys Division coordinates with NOAA’s Office of Marine and Aviation Operations to plan and schedule NOAA vessels for hydrographic surveying. The Office of Marine and Aviation Operations manages, operates, and maintains NOAA’s fleet of 16 ships, including the 4 ships that predominantly support hydrographic surveying. According to NOAA officials, during fiscal years 2007 through 2016, NOAA expended about $303 million for its in-house hydrographic survey program. The Hydrographic Surveys Division also works with the Hydrographic Services Review Panel, an external committee that advises NOAA on matters related to hydrographic services, including surveying. The review panel, which was required by the Hydrographic Services Improvement Act Amendments of 2002, is composed of 15 voting members appointed by the NOAA Administrator as well as several NOAA employees who are nonvoting members. Voting members must be especially qualified in one or more disciplines relating to hydrographic data and services, vessel pilotage, port administration, coastal management, fisheries management, marine transportation, and other disciplines as determined appropriate by the NOAA Administrator. The NOAA Administrator is required to solicit nominations for panel membership at least once a year; voting members serve a 4-year term, and may be appointed to one additional term. The Director of the Office of Coast Survey serves as the designated federal officer. NOAA’s standards for hydrographic surveying are contained in a technical specifications document known as the Hydrographic Surveys Specifications and Deliverables. The document is updated annually by NOAA hydrographers and, according to NOAA officials, is also the standard on which many other hydrographic survey entities base their hydrographic surveying requirements. In addition, NOAA maintains a quality assurance program for all hydrographic survey data submitted by the private sector and NOAA hydrographers. The quality assurance program includes three main review procedures intended to ensure that hydrographic data submitted to NOAA meet quality standards: the Rapid Survey Assessment, Survey Acceptance Review, and Final Survey Review. See appendix I for additional information about NOAA’s data quality standards and review process. NOAA uses a three-step process to determine its hydrographic survey priorities. In addition, in an effort to improve its priority setting, NOAA is developing a model to better assess hydrographic risks to ships. According to NOAA’s standard operating procedure and NOAA officials, NOAA uses a three-step process to determine its hydrographic survey priorities. Under this process, NOAA (1) identifies the areas in greatest need of surveying, (2) evaluates resources, including funding and vessel availability, and (3) develops an annual hydrographic surveying plan, which identifies the resulting hydrographic survey priorities. 
The plan specifies the locations, vessels, and schedules for NOAA hydrographic survey projects and the locations and time frames for private sector hydrographic survey projects. (See fig. 4.) NOAA first identifies the areas the agency considers to be in the greatest need of a hydrographic survey, using an approach it developed in 1994 called NOAA Hydrographic Survey Priorities, according to NOAA’s standard operating procedure and NOAA officials. NOAA identifies areas of “navigational significance” based on depth, draft of ships, and potential for dangers to marine navigation. NOAA then determines which of these navigationally significant areas are in greatest need of surveying by considering (1) shipping tonnage and trends, (2) age and quality of surveys in the area, (3) seafloor depth, (4) potential for unknown dangers to navigation due to environmental or human influences, and (5) requests for surveys from stakeholders such as pilot associations and the U.S. Coast Guard, and requests received through NOAA’s regional navigation managers. Through this process, NOAA designates high-priority areas in any of four categories: Critical areas. Areas that NOAA identified in 1994 as experiencing such circumstances as high shipping traffic or hazardous material transport or having a significant number of survey requests from users. Emerging critical areas. Areas in the Gulf of Mexico and Alaska that NOAA identified after 1994 that met the critical area definition but that NOAA chose to designate in a separate category from the 1994 critical areas for tracking purposes. Resurvey areas. Areas that NOAA identified as requiring recurring surveys because of changes to seafloors, use by vessel traffic, or other reasons. Priority 1-5 areas. Areas that do not fall into any of the three categories above are subdivided into five priority areas based on the date of the most recent survey and the level of usage by vessels. Until 2012, according to NOAA’s standard operating procedure, NOAA used the results of its approach for identifying areas most in need of surveying to publish annual hydrographic survey prioritization reports—a component of the overall hydrographic surveying plan. However, NOAA officials said they found this approach increasingly outdated because it did not reflect changing ocean and shipping conditions or take advantage of available technology. These officials said they are in the process of developing a new methodology (described later in this report) to help identify areas that need surveys. According to NOAA officials, they have continued to update computerized mapping files and reports related to hydrographic survey priorities since 2012 but have not published new hydrographic survey prioritization reports. However, these officials said they will provide information to the public upon request. According to NOAA’s standard operating procedure and NOAA officials, once NOAA identifies its highest priority areas, the agency compares its priorities to those identified by external stakeholders through NOAA’s navigation managers and its Integrated Ocean and Coastal Mapping program. NOAA officials said this input helps them understand potential economic and safety issues, among other things, that may affect hydrographic survey priorities. NOAA officials said they look to find areas of intersection between areas identified through the NOAA Hydrographic Survey Priorities process and those compiled by NOAA’s navigation managers and external stakeholders. 
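To make the four-category designation described above concrete, here is a minimal sketch of how the logic might be encoded. The category names come from the report, but the function, field names, thresholds, and the assumption that Priority 1 denotes the greatest need are all illustrative, not NOAA's actual implementation.

# Illustrative sketch of the four-category designation described above.
# Field names, thresholds, and the priority banding are assumptions.
from dataclasses import dataclass

@dataclass
class SurveyArea:
    name: str
    meets_critical_definition: bool  # e.g., high traffic or hazmat transport
    identified_after_1994: bool      # separates emerging critical from 1994 critical
    needs_recurring_survey: bool     # changing seafloor, heavy vessel use
    years_since_last_survey: int
    vessel_usage_level: int          # assumed scale: 0 (low) to 4 (high)

def designate(area: SurveyArea) -> str:
    if area.meets_critical_definition:
        return "Emerging critical" if area.identified_after_1994 else "Critical"
    if area.needs_recurring_survey:
        return "Resurvey"
    # Remaining areas fall into Priority 1-5 based on survey age and vessel
    # usage; this simple banding stands in for NOAA's actual rule.
    score = min(area.years_since_last_survey // 10, 4) + area.vessel_usage_level
    return "Priority " + str(max(1, 5 - min(score, 4)))

print(designate(SurveyArea("Example Sound", False, False, False, 35, 2)))  # Priority 1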
NOAA’s standard operating procedure states that when determining which areas to survey, NOAA generally gives precedence to survey areas identified through the NOAA Hydrographic Survey Priorities process, but stakeholder input may shape survey priorities in unusual cases, such as when hurricane-related requests indicate the need for an immediate resurvey. According to NOAA’s standard operating procedure and NOAA officials, NOAA estimates the amount of funds it expects to be available to conduct surveys and develops a preliminary survey plan that seeks to maximize in-house and contractor resources. Once funds are appropriated, NOAA modifies its preliminary plan to reflect the amounts available for NOAA fleet operations and survey contracting. NOAA also evaluates survey requirements and in-house and contractor ship availability and capability. As NOAA obligates funds for in-house surveys and for contracts, it refines and finalizes the actual amount of surveying to be conducted by both in-house and contractor hydrographers. According to NOAA’s standard operating procedure and NOAA officials, based on an evaluation of the identified hydrographic survey needs, available funding, and vessel availability and capability, NOAA develops a hydrographic surveying plan for the coming year. NOAA evaluates the mix of available NOAA and private sector vessels to meet the highest-ranked survey needs with available funding. NOAA also engages offices within NOAA to coordinate hydrographic survey ship schedules to accommodate other agency projects and plans. For example, NOAA officials said they may use hydrographic survey ships to accommodate the testing of new types of equipment, such as unmanned surface vehicles. Once the surveying plan is developed, it is submitted to the Chief of the Hydrographic Surveys Division for approval, according to NOAA’s standard operating procedure. When we began our review, NOAA officials told us they did not have written procedures documenting how the Hydrographic Surveys Division is to develop its annual hydrographic surveying plan. In response to our review, NOAA issued a standard operating procedure in September 2016 documenting how the division is to develop the plan. NOAA is developing a model intended to better assess hydrographic risks as part of its effort to identify areas most in need of hydrographic surveys—the first step in NOAA’s process for creating the hydrographic surveying plan. According to NOAA officials, the model is aimed at addressing several limitations they found with the agency’s existing approach for identifying areas most in need of surveys. For example, they said the existing approach does not account for such changes as: the emergence of new ports and subsequent changes in waterway traffic patterns; seafloor changes from weather and oceanic processes, and the resulting need for some areas to be surveyed more often than others; and sizes and capabilities of ships, with many of them having deeper drafts since NOAA developed its plan in 1994. In addition, NOAA officials noted that the existing approach has focused on large container ships and oil tankers and not the many smaller vessels (e.g., fishing vessels and recreational boats) that also rely on NOAA hydrographic survey data to navigate safely. According to NOAA documents, the new model—which NOAA refers to as a “hydrographic health” model—will help NOAA identify survey needs by taking advantage of new technologies and more precise information about weather and oceanic processes.
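The kind of risk scoring such a model implies can be sketched in a few lines. Everything below (the factors, weights, and normalizations) is an assumption for illustration only; NOAA's actual hydrographic health model is more sophisticated, and the report describes its real inputs next.

# Hypothetical sketch of a "hydrographic health"-style risk score combining
# survey age, vessel traffic, depth, and charted discrepancies for an area.
# Weights and normalizations are invented for illustration only.
def hydro_risk_score(years_since_survey: float,
                     annual_vessel_transits: int,
                     shallowest_depth_m: float,
                     reported_discrepancies: int) -> float:
    age_factor = min(years_since_survey / 50.0, 1.0)          # older surveys score higher
    traffic_factor = min(annual_vessel_transits / 10_000.0, 1.0)
    depth_factor = 1.0 if shallowest_depth_m < 20.0 else 0.2  # shallow water dominates risk
    discrepancy_factor = min(reported_discrepancies / 5.0, 1.0)
    # Assumed weights; a real model would be calibrated and peer reviewed.
    return (0.35 * age_factor + 0.30 * traffic_factor
            + 0.20 * depth_factor + 0.15 * discrepancy_factor)

# Example: a busy, shallow area last surveyed 40 years ago scores high.
print(round(hydro_risk_score(40, 8_000, 12, 2), 2))  # 0.78

In the actual model, inputs such as vessel traffic would come from the data sources the report describes next.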
For example, agency officials said that with the advent of a Global Positioning System-based technology known as the Automatic Identification System, NOAA has data on the actual paths of vessels equipped with this technology, including when and where vessels have traveled as well as their length, width, and draft. The new model also analyzes information that is similar to what NOAA currently uses, such as (1) areas of shallow seafloor depth, (2) unsurveyed areas, (3) known or reported discrepancies on the nautical chart for an area, (4) reported accidents, (5) stakeholder requests, and (6) established national priorities. NOAA officials said they completed a test of the new hydrographic health model in 2016 for coastal waters in the southeastern United States—including coastal Alabama, Florida, and Georgia—and solicited feedback on the model from internal stakeholders. NOAA also presented the model at an international hydrographic conference in May 2016 and began using the model in the second quarter of fiscal year 2017. NOAA officials said the agency is preparing to submit a paper describing this model to an international hydrographic journal for peer review in the second quarter of fiscal year 2018. NOAA officials said they will incorporate the peer review feedback into the model in the third quarter of fiscal year 2018. NOAA also plans to release periodic reports describing the state of the hydrographic health of the nation’s waters after the model is fully implemented, according to the standard operating procedure. NOAA prepares an annual report that compares the cost of collecting its own hydrographic survey data to the cost of procuring such data from the private sector. According to NOAA’s standard operating procedure for conducting this cost analysis, the purpose of the analysis is to track and report the full cost of the hydrographic survey program, detailing costs for all activities that directly or indirectly contribute to the program. Specifically, NOAA’s standard operating procedure for preparing the annual cost comparison report states that the report should include, by fiscal year, all costs that directly or indirectly contribute to conducting hydrographic surveys, regardless of funding sources. According to NOAA’s standard operating procedure, to create the report, NOAA annually obtains data on survey costs for the previous fiscal year from the various NOAA offices involved in collecting hydrographic survey data. These offices collect cost data from staffing and financial data systems and enter the information into a spreadsheet, according to NOAA officials and NOAA’s standard operating procedure. NOAA documentation indicates these data include direct costs NOAA incurs to collect hydrographic data using its own ships; these direct costs include equipment and maintenance, labor, and fuel. In addition, according to NOAA officials and NOAA’s standard operating procedure, NOAA obtains data on indirect costs, such as administrative costs apportioned to the hydrographic survey program and amounts paid to the private sector for conducting surveys. In 2005, NOAA began reporting hydrographic survey costs in an annual cost comparison report in response to a 2003 recommendation from the Department of Commerce Office of Inspector General that NOAA track and report the full costs of its survey program.
In addition, in 2005, the Hydrographic Services Review Panel recommended that NOAA use actual costs rather than estimates and “reasonably follow” Office of Management and Budget Circular A-76 guidelines to calculate the cost comparison; these guidelines state, among other things, that capital assets should be depreciated in cost estimates. Based on our review of NOAA’s cost comparison reports for fiscal years 2006 through 2016, NOAA did not in all instances report complete or accurate cost data for its hydrographic survey program. Specifically, NOAA did not include the complete cost of the hydrographic survey program for the following activities: Vessel acquisition. NOAA did not include the 2012 acquisition cost of a NOAA survey vessel (the Hassler) in its cost comparison reports from fiscal years 2012 through 2016. According to NOAA documentation, this vessel cost $24.3 million, and NOAA officials agreed that they should include the acquisition cost of NOAA vessels in cost comparison reports and that such costs should be depreciated. NOAA officials said they have not included such costs in annual cost comparison reports because depreciation costs are tracked in NOAA’s property management system but not in NOAA’s budget tracking system. These officials said they are uncertain whether these two systems can be linked because they are separate databases managed by different NOAA offices. Major vessel maintenance. NOAA did not include the cost of major maintenance performed in 2010 on the hydrographic survey vessel Rainier in its cost comparison reports from fiscal years 2010 through 2016. According to NOAA officials, the agency spent $13.7 million in support of maintenance for the Rainier. NOAA officials acknowledged that such costs should be reflected in NOAA’s cost comparison reports and that such costs should be depreciated. NOAA officials explained that they allocate annual maintenance and repair costs associated with the hydrographic survey program according to the number of days a ship is at sea conducting surveys. In this case, they said because the Rainier was in port the entire year undergoing repairs, they did not include these capital improvement costs in the cost comparison report. Contract administration for private sector hydrographers. NOAA did not include in its cost comparison reports for fiscal years 2006 through 2016 contract administration costs for managing private sector hydrographers working under contract to the agency. NOAA’s standard operating procedure for conducting the annual cost analysis specifies that the agency should include the costs associated with contract management and monitoring. NOAA officials said these costs were not included in the reports in part because they did not have the software to track contract administration costs. NOAA officials acknowledged that they should include such costs in the cost comparison report. In addition to incomplete costs for some activities, we also noted that NOAA did not accurately report certain costs of the hydrographic survey program in the year to which those costs should be assigned. Equipment, repair, and maintenance costs. NOAA includes equipment, repair, and maintenance costs in the hydrographic survey cost comparison report for the year in which such costs are reported in NOAA’s financial system. However, as with major vessel maintenance costs previously discussed, NOAA officials acknowledged that these costs should be depreciated. 
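Depreciation, which NOAA officials agreed should apply to vessel acquisition, major maintenance, and equipment costs, spreads a capital outlay across an asset's service life instead of charging it all to one year. The sketch below uses the report's $24.3 million Hassler acquisition cost, but the 30-year service life and zero salvage value are assumptions for the example, not NOAA figures.

# Minimal straight-line depreciation sketch. Only the $24.3 million
# acquisition cost comes from the report; the 30-year life and zero
# salvage value are assumed for illustration.
def annual_straight_line_depreciation(acquisition_cost: float,
                                      service_life_years: int,
                                      salvage_value: float = 0.0) -> float:
    """Annual expense when cost is spread evenly over the service life."""
    return (acquisition_cost - salvage_value) / service_life_years

annual_expense = annual_straight_line_depreciation(24_300_000.0, 30)
print(f"${annual_expense:,.0f} per year")  # $810,000 per year

Under this treatment, each year of the cost comparison would carry a modest depreciation charge; expensing the full amount in the year incurred instead concentrates the charge in a single year.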
As a result of this practice, NOAA’s hydrographic survey costs may appear artificially high during years in which NOAA incurs large equipment, repair, and maintenance costs. NOAA officials said they recognize that reporting equipment, repair, and maintenance costs in the year they are incurred does not accurately represent agency costs. Cost and performance data for survey work conducted by the private sector. NOAA does not track cost data in a way that allows the agency to link the cost for private sector surveys to the amount of survey work conducted. For example, in the cost comparison report for fiscal year 2014, NOAA included funds that were obligated for two contractors to conduct survey work, but the report showed that these contractors did not survey any nautical miles during that year. NOAA officials explained that they obligated funds in fiscal year 2014 to pay for the contract survey work, but the contractors did not begin the work until fiscal year 2015. These officials stated that they record contractor costs in the year in which the obligation occurs, and they record the miles surveyed in the year in which the surveying occurs. However, the 2014 cost per square nautical mile may appear artificially high because costs were recorded without including corresponding mileage surveyed. In contrast, the 2015 cost per square nautical mile may appear artificially low because survey miles were recorded, but the costs for conducting those surveys were not included in the 2015 report. NOAA officials acknowledged that their current method for tracking contractor costs and work performed needs improvement. They explained that the data inaccuracies arise in part from NOAA’s current process for tracking contractor cost and performance through manual entry of data into multiple spreadsheets. Furthermore, we found that NOAA uses a single measure—cost per square nautical mile surveyed—to compare its own survey costs to those of its contractors. However, in 2005, the Hydrographic Services Review Panel concluded that a single cost measure, such as the cost per square nautical mile, should not be used as the primary factor to determine the relative cost-effectiveness of NOAA and private sector efforts to collect hydrographic data. The panel recommended that NOAA consider a wider variety of measures to help provide additional insight. NOAA officials acknowledged that the cost per square nautical mile was not a comprehensive measure of cost-effectiveness and that having additional measures would improve the accuracy of cost comparisons to account for factors such as region and water depth. As a result of the concerns we identified, during our review, NOAA officials began identifying actions they would take to improve NOAA’s cost data. In some instances, officials identified specific steps and associated time frames to carry out these actions. For example, NOAA officials said they started using new project management software in fiscal year 2017 to help track contract administration costs for inclusion in future cost comparison reports. In addition, to allow NOAA to better link the costs for private sector surveys to the amount of survey work conducted, NOAA officials said they plan to develop a new database by March 2018; this database would help eliminate the need for manual data entry and allow NOAA to track survey cost and performance data for various time frames and regions. 
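The year-mismatch problem described above is easy to see with numbers. In the sketch below, the dollar and mileage figures are invented; it simply shows how booking an obligation in one fiscal year and the surveyed miles in the next skews cost per square nautical mile in both years, and how attributing the cost to the year the work occurred fixes it.

# Illustrative only: invented figures showing how splitting contract costs
# and surveyed miles across fiscal years distorts the unit-cost measure.
def unit_cost(dollars: float, square_nautical_miles: float) -> float:
    """Cost per square nautical mile; infinite when no miles are recorded."""
    return dollars / square_nautical_miles if square_nautical_miles else float("inf")

print(unit_cost(6_000_000, 0))      # FY2014 as recorded: inf (costs, no miles)
print(unit_cost(0, 1_200))          # FY2015 as recorded: 0.0 (miles, no costs)
print(unit_cost(6_000_000, 1_200))  # matched to the work: 5000.0 per sq. nmi.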
To improve NOAA’s ability to compare its own survey costs to those of contractors, NOAA officials said they were in the process of developing additional survey measures beyond cost per square nautical mile that could include a new “survey complexity rating” designed to account for factors such as region and water depth. Officials said they expect to have these additional measures in place by October 2018. However, NOAA officials could not yet identify the steps or associated time frames for carrying out other actions to improve the completeness and accuracy of cost data. For example, to help improve NOAA’s process for tracking depreciation costs of capital assets—such as vessel acquisition or equipment, repair, and maintenance—NOAA officials said they planned to implement an improved process in fiscal year 2019 but did not identify the specific steps to implement this process. In addition, to account for ships that are in port undergoing major maintenance, NOAA officials said they plan to develop a tracking system to help ensure such maintenance costs are included in NOAA’s cost comparison reports, but they did not provide additional specific details or identify when they intend to implement such a system. For these recently identified actions, NOAA officials explained that it was uncertain how NOAA would proceed because identifying and implementing certain steps requires the coordination of multiple offices within NOAA such as the Office of Coast Survey, Office of Marine and Aviation Operations, and Office of the Chief Administrative Officer. Without ensuring that its efforts to improve its cost comparison reports include actions to fully track capital asset depreciation costs and account for ships in port undergoing major maintenance, NOAA may be unable to prepare cost comparison reports that reflect the full cost of its hydrographic survey program, as called for in the agency’s standard operating procedure. NOAA has taken steps aimed at increasing private sector involvement in its hydrographic data collection program, such as streamlining its contracting process and increasing communication with contractors. However, NOAA has not developed a strategy for expanding its use of the private sector as required by a 2009 law. According to NOAA officials, NOAA has taken several steps to increase private sector involvement in its hydrographic data collection program. For example, NOAA developed a centralized process for competing and awarding contracts in 2003, which NOAA officials said reduced administrative costs and contract award time. Before this change, NOAA awarded contracts to individual contractors at the regional level, which required expending resources to process each individual contract. As a result of implementing a centralized process for competing and awarding contracts, NOAA officials said they increased the number of private sector firms under contract, from five during the 2003-2008 contract period to eight during the current 2014-2019 contract period. However, NOAA officials said they have not awarded task orders for surveys to all eight private sector firms in the same fiscal year because of NOAA’s appropriation, which has remained mostly flat during the current contract period. NOAA also took steps to increase communication with contractors, according to NOAA officials. 
For example, starting in 2005, NOAA has invited hydrographic survey contractors to its annual field procedures workshop, which brings together officials from NOAA’s headquarters, field offices, and quality assurance processing branches, among others. The purpose of the workshop is to discuss updates to hydrographic survey requirements and new hydrographic survey technologies. Also, since 2005, according to NOAA officials, contracting officer representatives have improved their communication with contractors through the various stages of the contract and survey activities by answering contractors’ questions regarding project requirements, expected deliverables, data processing, and unanticipated challenges that may occur when conducting surveys. In addition, NOAA officials said that in 2010, the agency implemented procedures for obtaining contractor input on changes to its hydrographic survey technical specifications document, the Hydrographic Surveys Specifications and Deliverables. The document is updated annually, and contractors are asked to provide input through their respective contracting officer representatives. Staff review input to determine whether to include the recommended action in the annual technical specifications update. According to NOAA officials, participants discuss recommended changes at meetings held during the annual field procedures workshop. NOAA has not developed a strategy for expanding its use of the private sector in its hydrographic survey data collection program, as required by law. Specifically, the Ocean and Coastal Mapping Integration Act required the NOAA Administrator to transmit a report to relevant congressional committees by July 28, 2009, that described the agency’s strategy for expanding contracting with the private sector to minimize duplication and take maximum advantage of private sector capabilities in fulfilling NOAA’s mapping and charting responsibilities. NOAA officials could not provide us any documentation indicating what information the agency provided to Congress in response to this statutory requirement. In 2010, NOAA issued its Ocean and Coastal Mapping and Contracting Policy, which states that the policy was developed in response to the act. However, rather than describing a strategy for expanding contracting with the private sector, as required by the 2009 law, the policy states that it is NOAA’s intent to contract with the private sector for ocean and coastal mapping services when the agency determines it is cost-effective to do so and funds are available. NOAA officials acknowledged that the contracting policy does not meet the statutory requirement that the agency develop a strategy for expanding contracting with the private sector. NOAA officials said the agency is limited in its ability to expand private sector contracting because of congressional direction on the use of the agency’s appropriations. Specifically, NOAA’s hydrographic survey program is supported by two separate funding elements, known as “Programs, Projects, and Activities” (PPA), within NOAA’s Operations, Research, and Facilities appropriation account. One PPA is for private sector hydrographic data collection, and the other is for general operations, maintenance, and repair of NOAA’s entire fleet of ships, including the hydrographic survey vessels. According to NOAA officials, the agency has limited authority to reprogram funds between these two PPAs without congressional notification and agreement that such reprogramming is warranted.
To propose a reprogramming of funds, NOAA officials said they would need to evaluate the prioritization of all fleet missions. In addition, NOAA officials said they would have to continue to fund fixed operational costs and agency expenses for NOAA’s entire fleet even if operations funds were reprogrammed to hydrographic data acquisition contracts. NOAA officials said the agency intends to develop a strategy describing how it plans to expand private sector involvement in the hydrographic data collection program—which the Ocean and Coastal Mapping Integration Act required the agency to submit in a report to relevant congressional committees in 2009—and it will use the 2010 Ocean and Coastal Mapping and Contracting Policy to guide this effort. These officials said the agency must first implement its planned improvements in collecting both NOAA and private sector hydrographic survey costs; once NOAA has a more accurate basis on which to compare costs, the agency will assess the extent to which it can expand its use of the private sector and develop a strategy accordingly. These officials said that if their analysis indicates the agency should expand its use of the private sector beyond what is currently possible given agency appropriations, the agency will request changes to its appropriations to allow it more flexibility in expanding its use of the private sector. However, NOAA officials did not provide specific information about how they intend to develop the strategy, what elements it will contain, or when it will be completed. Without developing such a strategy, NOAA may have difficulty minimizing duplication and taking maximum advantage of private sector capabilities in fulfilling NOAA’s mapping and charting responsibilities. Recognizing the importance of nautical charts to help ensure safe passage of people and goods through the nation’s waterways, NOAA has taken steps to improve its ability to set priorities for collecting hydrographic data. NOAA also prepares annual reports that compare the costs of NOAA conducting its own hydrographic surveys to the costs of contracting for such surveys. NOAA’s standard operating procedure requires the agency to track and report all costs for the hydrographic survey program. However, NOAA has not determined how it will track depreciation costs of capital assets or established time frames to improve its tracking of major maintenance costs for vessels. Without ensuring that its efforts to improve its cost comparison reports include actions to fully track capital asset depreciation costs and account for ships in port undergoing major maintenance, NOAA may be unable to prepare cost comparison reports that reflect the full cost of its hydrographic survey program, as called for by the agency’s standard operating procedure. In addition, NOAA was required by law to develop a strategy for expanding its use of the private sector in its hydrographic survey program, but it has not done so and has not provided specific information on how and when it will. Without such a strategy, NOAA may have difficulty minimizing duplication and taking maximum advantage of private sector capabilities in fulfilling NOAA’s mapping and charting responsibilities. 
We recommend that the Secretary of Commerce direct the NOAA Administrator to take the following two actions:
● ensure that NOAA’s efforts to improve its cost comparison reports include actions to fully track capital asset depreciation costs and account for ships in port undergoing major maintenance in accordance with its standard operating procedure, and
● develop a strategy for expanding NOAA’s use of the private sector in its hydrographic survey program, as required by law.

We provided a draft of this report to the Department of Commerce for review and comment. NOAA, responding on behalf of Commerce, stated in its written comments (reproduced in app. II) that it agreed with our two recommendations. Regarding our recommendation related to improving NOAA’s cost comparison reports, NOAA agreed that its cost estimates should include the depreciation costs of new vessels once they are operational and stated that it will work to obtain an accurate depreciation schedule. NOAA also stated that it will take steps to improve its tracking and reporting of depreciation costs for equipment and repair and maintenance, including its accounting for ships in port undergoing major maintenance. Regarding our recommendation that NOAA develop a strategy for expanding its use of the private sector in hydrographic surveying, NOAA stated that the agency will develop such a strategy once it improves its approach for comparing its hydrographic survey costs to those of the private sector. NOAA also provided one technical comment, which we incorporated.

We are sending copies of this report to the appropriate congressional committees, the Secretary of Commerce, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix III.

The National Oceanic and Atmospheric Administration (NOAA) has issued standards—known as the Hydrographic Surveys Specifications and Deliverables (HSSD)—for all hydrographic survey data collected by both private sector contractors and NOAA staff. NOAA maintains a quality assurance program for these data that includes three main review procedures (described below). The HSSD standards for conducting hydrographic surveys are based in part on the International Hydrographic Organization’s Standards for Hydrographic Surveys. These standards pertain to hydrographic surveys that are intended for harbors, harbor approach channels, inland navigation channels, and coastal areas of high commercial traffic density, and they generally apply to shallower areas, less than 100 meters in depth. According to NOAA officials, the HSSD has been reviewed annually since its initial publication in 2000, and NOAA has procedures in place to obtain suggestions from private sector contractors regarding changes to the HSSD. For example, at its annual field procedures workshop, NOAA conducts a session on data quality review standards and practices, and it solicits recommendations for changes to the HSSD from both NOAA staff and private sector hydrographers. According to NOAA officials, contractors submitted fewer than 10 recommendations in 2016 but submitted more than 30 recommendations in 2017.
All recommended changes to the HSSD are reviewed by the Office of Coast Survey’s Hydrographic Surveys Division, Operations Branch. Recommendations are then forwarded to the Office of Coast Survey Board of Hydrographers for review, and the survey board submits its recommendations to the Chief of the Hydrographic Surveys Division for final approval. NOAA’s hydrographers test the feasibility of many significant changes to the HSSD before they are put into practice by private sector hydrographers. In June 2016, NOAA approved a new position specifically to oversee and coordinate efforts related to hydrographic specifications, recommended procedures, and training. According to NOAA officials, they intend to fill the position in August 2017.

NOAA officials said the HSSD is also the standard on which many other international hydrographic entities base their hydrographic surveying requirements and is widely utilized by the hydrographic mapping community. According to NOAA officials, examples of uses of HSSD are:
● The hydrographic specifications section of the National Society of Professional Surveyors/Hydrographic Society of America certified hydrographer exam is based in part on the HSSD.
● The University Oceanographic Laboratory System Multibeam Advisory Committee references the HSSD in its specifications for multibeam sonar calibrations.
● The only two U.S. universities with graduate programs in hydrography—the University of New Hampshire and the University of Southern Mississippi—rely on the HSSD as part of their programs.

In addition, NOAA officials said the Office of Coast Survey has worked with different entities to help ensure that data collected by these entities meet HSSD specifications so that the data can be used on NOAA’s nautical charts. For example, officials said the office has worked with:
● the New Jersey Department of Transportation since 2014 on survey data the department is collecting for all New Jersey coastal waters;
● Coastal Carolina University since 2015 on survey data the university is collecting for the Bureau of Ocean Energy Management, an agency within the Department of the Interior; and
● the University of South Florida since 2016 on survey data the university is collecting for a significant portion of western Florida’s coastal waters.

NOAA’s quality assurance program includes three main review procedures intended to ensure that hydrographic data submitted to NOAA meet quality standards: the Rapid Survey Assessment, Survey Acceptance Review, and Final Survey Review.

Rapid Survey Assessment. NOAA’s hydrographic survey data processing branches located in Seattle, Washington, and Norfolk, Virginia, are responsible for initiating a hydrographic survey data “rapid survey assessment” within 5 working days of survey data being delivered to NOAA by private sector contractors and NOAA staff. According to NOAA documentation, the assessment, which should be completed within 2 working days, is intended to improve data quality by quickly identifying significant deficiencies in hydrographic survey data products. The assessment helps ensure the survey data meet HSSD technical requirements and project-specific instructions that are issued at the start of each survey project. If the assessment finds significant deficiencies, NOAA’s assessment team may make corrections itself or may return the survey to the hydrographer for rework and resubmission.
The hydrographic data processing branches take several factors into consideration when deciding whether to return a survey for rework, such as whether the hydrographers are capable of fixing the error, whether there is value in returning a survey for the purpose of educating the hydrographers to prevent future similar errors, and whether it is faster and more efficient for the processing branch to make corrections. According to NOAA documentation, even if no deficiencies are found, passing the data through this initial assessment does not preclude the processing branch from returning the survey to the field hydrographers for rework and resubmission later in the quality assurance process if significant deficiencies are subsequently found.

Survey Acceptance Review. The survey acceptance review is a detailed evaluation and acceptance of hydrographic survey data conducted by the data processing branches in Seattle, Washington, and Norfolk, Virginia. According to NOAA documentation, the survey acceptance review process includes: (1) accepting the survey data from the field hydrographers, (2) evaluating the data and products delivered by hydrographers for deficiencies and deviations from the guidance documents, (3) conducting an internal review of the survey acceptance review process to validate that process, and (4) outlining the findings from the survey acceptance review process and transferring responsibility for the integrity and maintenance of the survey data from the field hydrographer to the processing branch.

The survey acceptance review involves several compliance checks and is intended to confirm that the survey data are accurate and to highlight the strengths and weaknesses of the data. A key element of the survey acceptance review is performing quality assurance checks on the survey data to ensure the survey was performed to the standards required in guidance documents, including the HSSD, NOAA’s hydrographic field procedures manual, and any hydrographic survey project-specific instructions. Upon completion of the survey acceptance review, an internal review is conducted to verify that the survey acceptance review was completed in accordance with relevant standard operating procedures, and that any issues outlined in the review documentation are consistently delineated. After the internal review is completed and approved, the completed documentation is forwarded to the Processing Branch Chief for review. The final output of the review process includes an acceptance letter to the Hydrographic Surveys Division Chief through the Processing Branch Chief outlining any findings from the review and releasing the field hydrographers from further responsibility for the data. Figure 5 illustrates the survey acceptance review process.

Final Survey Review. The NOAA contracting officer’s representative is responsible for the final quality assurance review for each hydrographic survey project. According to NOAA officials, this is a critical stage, as the contracting officer’s representative has been involved at every stage of the survey, from planning and technical evaluation to survey monitoring, including at least one inspection visit with the contractor during the survey time frame. The contracting officer’s representative is the primary point of contact when the contractor seeks guidance to resolve technical issues.
During the final review, the contracting officer’s representative reviews the survey to ensure it is complete—this is the last stage of quality assurance review before the data are archived and made available to the public. In addition to the individual named above, Steve Gaty (Assistant Director), Leo Acosta (Analyst-in-Charge), Martin (Greg) Campbell, Patricia Farrell Donahue, Timothy Guinane, Benjamin Licht, J. Lawrence Malenich, Ty Mitchell, Guisseli Reyes-Turnell, Danny Royer, Jeanette Soares, and Arvin Wu made key contributions to this report. | NOAA is responsible for collecting hydrographic data—that is, data on the depth and bottom configuration of water bodies—to help create nautical charts. NOAA collects data using its fleet and also procures data from the private sector. The Hydrographic Services Improvement Act of 1998 requires NOAA to acquire such data from the private sector “to the greatest extent practicable and cost-effective.” GAO was asked to review NOAA efforts to collect hydrographic data. This report examines (1) how NOAA determines its hydrographic survey priorities, (2) NOAA's efforts to compare the costs of collecting its own survey data to the costs of procuring such data from the private sector, and (3) the extent to which NOAA has developed a strategy for private sector involvement in hydrographic data collection. GAO analyzed relevant laws and agency procedures, NOAA cost comparison reports from fiscal years 2006 through 2016, and other NOAA information, such as hydrographic survey program priorities. GAO also interviewed NOAA officials and the eight survey companies that currently have contracts with NOAA. The Department of Commerce's National Oceanic and Atmospheric Administration (NOAA) uses a three-step process to determine its hydrographic survey priorities, according to agency documents and officials. NOAA first identifies areas in greatest need of surveying by analyzing data such as seafloor depth, shipping tonnage, and the time elapsed since the most recent survey. Second, the agency evaluates the availability of funding resources as well as the availability and capability of NOAA and private sector hydrographic survey vessels. Third, NOAA develops an annual hydrographic surveying plan that identifies survey priorities. To help inform the first step in this process, NOAA is developing a model to take advantage of new mapping technologies. NOAA prepares an annual report comparing the cost of collecting its own hydrographic survey data to the cost of procuring data from the private sector but does not include all costs in its cost comparisons. Under its standard operating procedure, NOAA is to report the full cost of the hydrographic survey program, including equipment, maintenance, and administrative costs. GAO's review of NOAA's cost comparison reports from fiscal years 2006 through 2016, however, found that NOAA did not in all instances report complete or accurate cost data. For example, NOAA did not include the acquisition of a $24 million vessel in 2012, and in some cases it did not report certain costs in the year to which those costs should be assigned. NOAA officials said they recognized the need to improve the agency's tracking of costs, and they identified actions they intend to take but did not always provide information about specific steps to carry out these actions or associated time frames. 
For example, NOAA officials said they planned to implement an improved process in fiscal year 2019 for tracking the costs of capital assets such as vessels but did not identify specific steps to do so. They also said they plan to develop a system to better track maintenance costs but did not provide specific details or a time frame to do this. Without ensuring that its efforts to improve its cost comparison reports include actions to fully track asset and maintenance costs, NOAA may be unable to prepare cost comparison reports that reflect the full cost of its survey program, as specified in the agency's standard operating procedure. NOAA has taken steps to increase private sector involvement in its hydrographic data collection program but has not developed a strategy for expanding such involvement as required by law. For example, NOAA moved to a centralized process for competing and awarding contracts, which NOAA officials said reduced administrative costs and contract award time and allowed NOAA to increase the number of private sector firms under contract from five to eight. However, NOAA did not develop a strategy for expanding its use of the private sector to minimize duplication and take maximum advantage of private sector capabilities, as required by law. NOAA officials said the agency intends to develop such a strategy but must first make improvements in its approach to comparing its own hydrographic survey costs to those of the private sector. However, NOAA officials did not provide specific information about how they intend to develop the strategy, what elements it will contain, or when it will be completed. Without developing such a strategy, NOAA may have difficulty minimizing duplication and taking advantage of private sector capabilities. GAO recommends that NOAA (1) ensure that its efforts to improve its cost comparison reports include actions to fully track asset and maintenance costs and (2) develop a strategy for expanding private sector involvement in the hydrographic survey program. NOAA agreed with GAO's recommendations. |
Collecting information is one way that federal agencies carry out their missions. For example, IRS needs to collect information from taxpayers and their employers to know the correct amount of taxes owed. The U.S. Census Bureau collects information used to apportion congressional representation and for many other purposes. When new circumstances or needs arise, agencies may need to collect new information. We recognize, therefore, that a large portion of federal paperwork is necessary and serves a useful purpose. Nonetheless, besides ensuring that information collections have public benefit and utility, federal agencies are required by the PRA to minimize the paperwork burden that they impose. Among the provisions of the act aimed at this purpose are requirements for the review of information collections by OMB and by agency CIOs.

Under PRA, federal agencies may not conduct or sponsor the collection of information unless approved by OMB. OMB is required to determine that an agency’s collection of information is necessary for the proper performance of the functions of the agency, including whether the information will have practical utility. Consistent with the act’s requirements, OMB has established a process to review all proposals by executive branch agencies (including independent regulatory agencies) to collect information from 10 or more persons, whether the collections are voluntary or mandatory. In addition, the act as amended in 1995 requires every agency to establish a process under the official responsible for the act’s implementation (now the agency’s CIO) to review program offices’ proposed collections. This official is to be sufficiently independent of program responsibility to evaluate fairly whether information collections should be approved. Under the law, the CIO is to review each collection of information before submission to OMB, including reviewing the program office’s evaluation of the need for the collection and its plan for the efficient and effective management and use of the information to be collected, including necessary resources. As part of that review, the agency CIO must ensure that each information collection instrument (form, survey, or questionnaire) complies with the act. The CIO is also to certify that the collection meets 10 standards (see table 1) and to provide support for these certifications.

The paperwork clearance process currently takes place in two stages. The first stage is CIO review. During this review, the agency is to publish a notice of the collection in the Federal Register. The public must be given a 60-day period in which to submit comments, and the agency is to otherwise consult with interested or affected parties about the proposed collection. At the conclusion of the agency review, the CIO submits the proposal to OMB for review. The agency submissions to OMB typically include a copy of the data collection instrument (e.g., a form or survey) and an OMB submission form providing information (with supporting documentation) about the proposed information collection, including why the collection is necessary, whether it is new or an extension of a currently approved collection, whether it is voluntary or mandatory, and the estimated burden hours. Included in the submission is the certification by the CIO or the CIO’s designee that the collection satisfies the 10 standards. The OMB review is the second stage in the clearance process. This review may involve consultation between OMB and agency staff.
During the review, a second notice is published in the Federal Register, this time with a 30-day period for soliciting public comment. At the end of this period, OMB makes its decision and informs the agency. OMB maintains on its Web site a list of all approved collections and their currently valid control numbers, including the form numbers approved under each collection.

The 1995 PRA amendments also require OMB to set specific goals for reducing burden from the level it had reached in 1995: at least a 10 percent reduction in the governmentwide burden-hour estimate for each of fiscal years 1996 and 1997, a 5 percent governmentwide burden reduction goal in each of the next 4 fiscal years, and annual agency goals that reduce burden to the “maximum practicable opportunity.” At the end of fiscal year 1995, federal agencies estimated that their information collections imposed about 7 billion burden hours on the public. Thus, for these reduction goals to be met, the burden-hour estimate would have had to decrease by about 35 percent, to about 4.6 billion hours, by September 30, 2001 (a short calculation reproducing this compounding appears below). In fact, on that date, the federal paperwork estimate had increased by about 9 percent, to 7.6 billion burden hours. As of March 2006, OMB’s estimate for governmentwide burden is about 10.5 billion hours—about 2.5 billion hours more than the estimate of 7.971 billion hours at the end of fiscal year 2004.

Over the years, we have reported on the implementation of PRA many times. In a succession of reports and testimonies, we noted that federal paperwork burden estimates generally continued to increase, rather than decrease as envisioned by the burden reduction goals in PRA. Further, we reported that some burden reduction claims were overstated. For example, although some reported paperwork reductions reflected substantive program changes, others were revisions to agencies’ previous burden estimates and, therefore, would have no effect on the paperwork burden felt by the public. In our previous work, we also repeatedly pointed out ways that OMB and agencies could do more to ensure compliance with PRA. In particular, we have often recommended that OMB and agencies take actions to improve the paperwork clearance process.

Governmentwide, agency CIOs generally reviewed information collections before they were submitted to OMB and certified that the 10 standards in the act were met. However, in our 12 case studies, CIOs provided these certifications despite often missing or partial support from the program offices sponsoring the collections. Further, although the law requires CIOs to provide support for certifications, agency files contained little evidence that CIO reviewers had made efforts to get program offices to improve the support that they offered. Numerous factors have contributed to these conditions, including a lack of management support and weaknesses in OMB guidance. Without appropriate support and public consultation, agencies have reduced assurance that collections satisfy the standards in the act.

Among the PRA provisions intended to help achieve the goals of minimizing burden while maximizing utility are the requirements for CIO review and certification of information collections. The 1995 amendments required agencies to establish centralized processes for reviewing proposed information collections within the CIO’s office.
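As an aside on the burden reduction goals discussed above, the roughly 4.6 billion-hour target follows from compounding the statutory percentages against the fiscal year 1995 baseline. The short calculation below uses only numbers stated in this testimony:

```python
# Compound the PRA burden reduction goals against the FY 1995 baseline.
# All inputs come from the text: about 7 billion burden hours at the end of
# FY 1995, 10 percent reductions in each of FY 1996 and 1997, and 5 percent
# reductions in each of the next four fiscal years (through FY 2001).

baseline_hours = 7.0e9
target = baseline_hours
for cut in (0.10, 0.10, 0.05, 0.05, 0.05, 0.05):
    target *= 1 - cut

print(f"Implied FY 2001 target: {target / 1e9:.2f} billion hours")
print(f"Cumulative reduction:   {(1 - target / baseline_hours) * 100:.0f} percent")
# -> Implied FY 2001 target: 4.62 billion hours
# -> Cumulative reduction:   34 percent
# Consistent with the roughly 4.6 billion hours (about 35 percent) cited above;
# the actual FY 2001 estimate instead rose to 7.6 billion hours.
```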
Among other things, the CIO’s office is to certify, for each collection, that the 10 standards in the act have been met, and the CIO is to provide a record supporting these certifications. The four agencies in our review all had written directives that implemented the review requirements in the act, including the requirement for CIOs to certify that the 10 standards in the act were met. The estimated certification rate ranged from 100 percent at IRS and HUD to 92 percent at VA. Governmentwide, agencies certified that the act’s 10 standards had been met on an estimated 98 percent of the 8,211 collections. However, in the 12 case studies that we reviewed, this CIO certification occurred despite a lack of rigorous support that all standards were met. Specifically, the support for certification was missing or partial on 65 percent (66 of 101) of the certifications. Table 4 shows the result of our analysis of the case studies.

For example, under the act, CIOs are required to certify that each information collection is not unnecessarily duplicative. According to OMB instructions, agencies are to (1) describe efforts to identify duplication and (2) show specifically why any similar information already available cannot be used or modified for the purpose described. The support we found in agency files, however, often consisted of brief assertions such as: “Program reviews were conducted to identify potential areas of duplication; however, none were found to exist” and “There is no known Department or Agency which maintains the necessary information, nor is it available from other sources within our Department.”

Another supporting statement read, in part: “. . . is a new, nationwide service that does not duplicate any single existing service that attempts to match employers with providers who refer job candidates with disabilities. While similar job-referral services exist at the state level, and some nation-wide disability organizations offer similar services to people with certain disabilities, we are not aware of any existing survey that would duplicate the scope or content of the proposed data collection. Furthermore, because this information collection involves only providers and employers interested in participating in the EARN service, and because this is a new service, a duplicate data set does not exist.” While this example shows that the agency attempted to identify duplicative sources, it does not discuss why information from state and other disability organizations could not be aggregated and used, at least in part, to satisfy the needs of this collection.

A third file stated only: “We have attempted to eliminate duplication within the agency wherever possible.” This assertion provides no information on what efforts were made to identify duplication or perspective on why similar information, if any, could not be used. Further, the files contained no evidence that the CIO reviewers challenged the adequacy of this support or provided support of their own to justify their certification.

A second example is provided by the standard requiring each information collection to reduce burden on the public, including small entities, to the extent practicable and appropriate. OMB guidance emphasizes that agencies are to demonstrate that they have taken every reasonable step to ensure that the collection of information is the least burdensome necessary for the proper performance of agency functions.
In addition, OMB instructions and guidance direct agencies to provide specific information and justifications: (1) estimates of the hour and cost burden of the collections and (2) justifications for any collection that requires respondents to report more often than quarterly, respond in fewer than 30 days, or provide more than an original and two copies of documentation. With regard to small entities, OMB guidance states that the standard emphasizes such entities because these often have limited resources to comply with information collections. The act cites various techniques for reducing burden on these small entities, and the guidance includes techniques that might be used to simplify requirements for small entities, such as asking fewer questions, taking smaller samples than for larger entities, and requiring small entities to provide information less frequently (the sketch following this discussion illustrates how each technique lowers a burden-hour estimate).

Our review of the case examples found that for the first part of the certification, which focuses on reducing burden on the public, the files generally contained the specific information and justifications called for in the guidance. However, none of the case examples contained support that addressed how the agency ensured that the collection was the least burdensome necessary. According to agency CIO officials, the primary cause for this absence of support is that OMB instructions and guidance do not direct agencies to provide this information explicitly as part of the approval package.

For the part of the certification that focuses on small businesses, our governmentwide sample included examples of various agency activities that are consistent with this standard. For instance, Labor officials exempted 6 million small businesses from filing an annual report; telephoned small businesses and other small entities to assist them in completing a questionnaire; reduced the number of small businesses surveyed; and scheduled fewer compliance evaluations on small contractors. For four of our case studies, however, complete information that would support certification of this part of the standard was not available. Seven of the 12 case studies involved collections that were reported to impact businesses or other for-profit entities, but for 4 of the 7, the files did not explain either
● why small businesses were not affected, or
● where such businesses were affected, whether burden could or could not be reduced.
Referring to methods used to minimize burden on small business, the files included statements such as “not applicable.” These statements do not tell the reviewer whether any effort was made to reduce burden on small entities. When we asked agencies about these four cases, they indicated that the collections did, in fact, affect small business.

OMB’s instructions to agencies on this part of the certification require agencies to describe any methods used to reduce burden only if the collection of information has a “significant economic impact on a substantial number of small entities.” This does not appropriately reflect the act’s requirements concerning small business: the act requires that the CIO certify that the information collection reduces burden on small entities in general, to the extent practical and appropriate, and provides no thresholds for the level of economic impact or the number of small entities affected. OMB officials acknowledged that their instruction is an “artifact” from a previous form and more properly focuses on rulemaking rather than the information collection process.
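As a rough guide to how the small-entity techniques cited above translate into numbers, the sketch below applies the arithmetic conventionally used for burden-hour estimates (respondents, times responses per year, times hours per response). Every quantity is hypothetical; the point is only that each technique shrinks one factor of the product.

```python
# Illustrative burden-hour arithmetic (hypothetical numbers throughout):
# burden hours = respondents * responses per year * hours per response.
# Each small-entity technique cited above shrinks one factor of the product.

def burden_hours(respondents, responses_per_year, hours_per_response):
    return respondents * responses_per_year * hours_per_response

baseline        = burden_hours(10_000, 4, 2.00)  # quarterly, 2-hour form
fewer_questions = burden_hours(10_000, 4, 1.25)  # shorter form
smaller_sample  = burden_hours(2_500,  4, 2.00)  # sample rather than census
less_frequent   = burden_hours(10_000, 1, 2.00)  # annual rather than quarterly

for label, hours in [("baseline", baseline),
                     ("fewer questions", fewer_questions),
                     ("smaller sample", smaller_sample),
                     ("less frequent", less_frequent)]:
    print(f"{label:>16}: {hours:>7,.0f} hours")
# ->         baseline:  80,000 hours
# ->  fewer questions:  50,000 hours
# ->   smaller sample:  20,000 hours
# ->    less frequent:  20,000 hours
```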
The lack of support for these certifications appears to be influenced by a variety of factors. In some cases, as described above, OMB guidance and instructions are not comprehensive or entirely accurate. In the case of the duplication standard specifically, IRS officials said that the agency does not need to further justify that its collections are not duplicative because (1) tax data are not collected by other agencies, so there is no need for the agency to contact them about proposed collections, and (2) IRS has an effective internal process for coordinating proposed forms among the agency’s various organizations that may have similar information. Nonetheless, the law and instructions require support for these certifications, which was not provided. In addition, agency reviewers told us that management assigns a relatively low priority and few resources to reviewing information collections. Further, program offices have little knowledge of and appreciation for the requirements of the PRA. As a result of these conditions and a lack of detailed program knowledge, reviewers often have insufficient leverage with program offices to encourage them to improve their justifications. When support for the PRA certifications is missing or inadequate, OMB, the agency, and the public have reduced assurance that the standards in the act, such as those on avoiding duplication and minimizing burden, have been consistently met.

IRS and EPA have supplemented the standard PRA review process with additional processes aimed at reducing burden while maximizing utility. These agencies’ missions require them both to deal extensively with information collections, and their management has made reduction of burden a priority.

In January 2002, the IRS Commissioner established an Office of Taxpayer Burden Reduction, which includes both permanently assigned staff and staff temporarily detailed from program offices that are responsible for particular information collections. This office chooses a few forms each year that are judged to have the greatest potential for burden reduction (these forms have already been reviewed and approved through the CIO process). The office evaluates and prioritizes burden reduction initiatives by
● determining the number of taxpayers impacted;
● quantifying the total time and out-of-pocket savings for taxpayers;
● evaluating any adverse impact on IRS’s voluntary compliance efforts;
● assessing the feasibility of the initiative, given IRS resource constraints; and
● tying the initiative into IRS objectives.
(A simple, illustrative scoring sketch combining such criteria appears below.)

Once the forms are chosen, the office performs highly detailed, in-depth analyses, including extensive outreach to the public affected, the users of the information within and outside the agency, and other stakeholders. This analysis includes an examination of the need for each data element requested. In addition, the office thoroughly reviews form design. The office’s Director heads a Taxpayer Burden Reduction Council, which serves as a forum for achieving taxpayer burden reduction throughout IRS. IRS reports that as many as 100 staff can be involved in burden reduction initiatives, drawn from across IRS and from other organizations, including other federal agencies, state agencies, tax practitioner groups, taxpayer advocacy panels, and groups representing the small business community.
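This testimony does not describe how the office weighs these criteria against one another, so the following sketch is purely illustrative: the scoring function, weights, field names, and data values are all assumptions rather than IRS practice. It shows only how criteria like those listed above could be combined to rank candidate initiatives.

```python
# Purely illustrative ranking of candidate burden reduction initiatives.
# The criteria mirror those listed above, but the scoring function, weights,
# field names, and data values are all hypothetical assumptions; the source
# does not describe how IRS actually weighs its criteria.

from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    taxpayers_impacted: int           # criterion 1
    hours_saved_per_taxpayer: float   # criterion 2 (time and cost savings)
    compliance_risk: float            # criterion 3: 0 (none) to 1 (severe)
    feasibility: float                # criterion 4: 0 (infeasible) to 1 (easy)
    mission_alignment: float          # criterion 5: 0 to 1

def score(i: Initiative) -> float:
    total_hours = i.taxpayers_impacted * i.hours_saved_per_taxpayer
    # Reward total hours saved; discount for compliance risk; scale by
    # feasibility and mission alignment. The weighting is invented here.
    return total_hours * (1 - i.compliance_risk) * i.feasibility * i.mission_alignment

candidates = [
    Initiative("Raise schedule reporting threshold", 9_000_000, 10.0, 0.2, 0.8, 0.9),
    Initiative("Simplify extension request form", 15_000_000, 0.5, 0.0, 0.9, 0.7),
]
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: score {score(c):,.0f}")
```

In practice, any such screening would only be a starting point; as described above, the office pairs prioritization with detailed analyses and extensive stakeholder outreach.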
The council directs its efforts in five major areas:
● simplifying forms and publications;
● streamlining internal policies, processes, and procedures;
● promoting consideration of burden reductions in rulings, regulations, and laws;
● assisting in the development of burden reduction measurement tools; and
● partnering with internal and external stakeholders to identify areas of potential burden reduction.

IRS reports that this targeted, resource-intensive process has achieved significant reductions in burden: over 200 million burden hours since 2002. For example, it reports that about 95 million hours of taxpayer burden were reduced through increases in the income-reporting threshold on various IRS schedules. Another burden reduction initiative includes a review of the forms that 15 million taxpayers use to request an extension of the date for filing their tax returns.

Similarly, EPA officials stated that they have established processes for reviewing information collections that supplement the standard PRA review process. These processes are highly detailed and evaluative, with a focus on burden reduction, avoiding duplication, and ensuring compliance with PRA. According to EPA officials, the impetus for establishing these processes was the high visibility of the agency’s information collections and the recognition, among other things, that the success of EPA’s enforcement mission depended on information collections being properly justified and approved: in the words of one official, information collections are the “life blood” of the agency.

According to these officials, the CIO staff are not generally closely involved in burden reduction initiatives, because they do not have sufficient technical program expertise and cannot devote the extensive time required. Instead, these officials said that the CIO staff’s focus is on fostering high awareness within the agency of the requirements associated with information collections; educating and training the program office staff on the need to minimize burden and the impact on respondents; providing an agencywide perspective on information collections to help avoid duplication; managing the clearance process for agency information collections; and acting as liaison between program offices and OMB during the clearance process. To help program offices consider PRA requirements such as burden reduction and avoiding duplication as they are developing new information collections or working on reauthorizing existing collections, the CIO staff also developed a handbook to help program staff understand what they need to do to comply with PRA and gain OMB approval.

In addition, program offices at EPA have taken on burden reduction initiatives that are highly detailed and lengthy (sometimes lasting years) and that involve extensive consultation with stakeholders (including entities that supply the information, citizens groups, information users and technical experts in the agency and elsewhere, and state and local governments). For example, EPA reports that it amended its regulations to reduce the paperwork burden imposed under the Resource Conservation and Recovery Act. One burden reduction method EPA used was to establish higher thresholds for small businesses to report information required under the act. EPA estimates that the initiative will reduce burden by 350,000 hours and save $22 million annually. Another EPA program office reports that it is proposing a significant reduction in burden for its Toxic Release Inventory program.
Both the EPA and IRS programs involve extensive outreach to stakeholders, including the public. This outreach is particularly significant in view of the relatively low levels of public consultation that occur under the standard review process. As we reported in May 2005, public consultation on information collections is often limited to publication of notices in the Federal Register. As a means of public consultation, however, these notices are not effective, as they elicit few responses. An estimated 7 percent of the 60-day notices of collections in the Federal Register received one or more comments. According to our sample of all collections at the four agencies reviewed, the percentage of notices receiving at least one comment ranged from an estimated 15 percent at Labor to an estimated 6 percent at IRS. In contrast, according to EPA and IRS, their efforts at public consultation are key to their burden reduction efforts and an important factor in their success.

Overall, EPA and IRS reported that their targeted processes produced significant reductions in burden by making a commitment to this goal and dedicating resources to it. In contrast, for the 12 information collections we examined, the CIO review process resulted in no reduction in burden. Further, the Department of Labor reported that its PRA reviews of 175 proposed collections over nearly 2 years did not reduce burden. Similarly, both IRS and EPA addressed information collections that had undergone CIO review and received OMB approval and nonetheless found significant opportunities to reduce burden.

In our 2005 report, we concluded that the CIO review process was not working as Congress intended: It did not result in a rigorous examination of the burden imposed by information collections, and it did not lead to reductions in burden. In light of these findings, we recommended (among other things) that agencies strengthen the support provided for CIO certifications and that OMB update its guidance to clarify and emphasize this requirement. Since our report was issued, the four agencies have reported taking steps to strengthen their support for CIO certifications:
● According to the HUD CIO, the department established a senior-level PRA compliance officer in each major program office, and it has revised its certification process to require that before collections are submitted for review, they be approved at a higher management level within program offices.
● The Treasury CIO established an Information Management Sub-Council under the Treasury CIO Council and added resources to the review process.
● According to the VA’s 2007 budget submission, the department obtained additional resources to help review and analyze its information collection requests.
● According to the Office of the CIO at the Department of Labor, the department intends to provide guidance to components regarding the need to provide strong support for clearance requests and has met with component staff to discuss these issues.

OMB reported that its guidance to agencies will be updated through a planned automated system, which is expected to be operational by the end of this year. According to the acting head of OMB’s Office of Information and Regulatory Affairs, the new system will permit agencies to submit clearance requests electronically, and the instructions will provide clear guidance on the requirements for these submissions, including the support required.
This official stated that OMB has worked with agency representatives with direct knowledge of the PRA clearance process in order to ensure that the system and its instructions clearly reflect the requirements of the process. If this system is implemented as described and OMB withholds clearance from submissions that lack adequate support, it could lead agencies to strengthen the support provided for their certifications.

In considering PRA reauthorization, the Congress has the opportunity to take into account ideas that were developed by the various experts at the PRA forum that we organized in 2005. These experts noted, as we have here, that the burden reduction goals in the act have not been met, and that in fact burden has been going up. They suggested first that the goal of reducing burden by 5 percent is not realistic, and also that such numerical goals do not appropriately recognize that some burden is necessary. The important point, in their view, is to reduce unnecessary burden while still ensuring maximum utility.

Forum participants also questioned the level of attention that OMB devotes to the process of clearing collections on what they called a “retail” basis, focusing on individual collections rather than looking across numerous collections. In their view, some of this attention would be better devoted to broader oversight questions. In their discussion, participants mentioned that the clearance process informs OMB with respect to its other information resource management functions, but that this had not led to high-level integration and coordination. It was suggested that the volume of collections to be individually reviewed could impede such integration.

Participants made a number of suggestions regarding ways to reduce the volume of collections that OMB reviews, with the goal of freeing OMB resources so that it could address more substantive, wide-ranging paperwork issues. Options that they suggested included limiting OMB review to significant and selected collections, rather than all collections. This would entail shifting more responsibility for review to the agencies, which they stated was one of the avowed purposes of the 1995 amendments: to increase agencies’ attention to properly clearing information collection requests. One way to shift this responsibility, the forum suggested, would be for OMB to be more creative in its use of the delegation authority that the act provides. (Under the act, OMB has the authority to delegate to agencies the authority to approve collections in various circumstances.) Also, participants mentioned the possibility of modifying the clearance process by, for example, extending beyond 3 years the length of time that OMB approvals are valid, particularly for the routine types of collections. This suggestion was paired with the idea that the review process itself should be more rigorous; as the panel put it, “now it’s a rather pro forma process.” They also observed that two Federal Register notices seemed excessive in most cases.

To reduce the number of collections that require OMB review, another possibility suggested was to revise the PRA’s definition of an information collection. For example, the current definition includes all collections that contact 10 or more persons; the panel suggested that this threshold could be raised, pointing out that this low threshold makes it hard for agencies to perform targeted outreach to the public regarding burden and other issues (such as through customer satisfaction questionnaires or focus groups).
However, they had no specific recommendation on what the number should be. Alternatively, they suggested that OMB could be given authority to categorize types of information collections that did not require clearance (for example, OMB could exempt collections for which the response is purely voluntary). Finally, the forum questioned giving agency CIOs the responsibility for reviewing information collections. According to the forum, CIOs have tended to be more associated with information technology issues than with high-level information policy.

Our previous work has not addressed every topic raised by the forum, so we cannot argue for or against all these suggestions. However, the work in our May 2005 report is consistent with the forum’s observations in some areas, including the lack of rigor in the review process and the questionable need for two Federal Register notices.

I would like to turn here, Madam Chairman, to the matters for congressional consideration that we included in that report. We observed that to achieve burden reduction, the targeted approaches used by IRS and EPA were a promising alternative. However, the agencies’ experiences also suggest that making such approaches successful requires top-level executive commitment, extensive involvement of program office staff with appropriate expertise, and aggressive outreach to stakeholders. Indications are that such an approach would also be more resource-intensive than the current process. Moreover, such an approach may not be warranted at all agencies, since not all agencies have the level of paperwork issues that face IRS and similar agencies.

On the basis of the conclusions in our May 2005 report, we suggested that the Congress consider mandating the development of pilot projects to test and review the value of approaches to burden reduction similar to those used by IRS and EPA. OMB would issue guidance to agencies on implementing such pilots, including criteria for assessing collections along the lines of the process currently employed by IRS. According to our suggestion, agencies participating in such pilots would submit to OMB and publish on their Web sites (or through other means) an annual plan on the collections targeted for review, specific burden reduction goals for those collections, and a report on reductions achieved to date. We also suggested that in view of the limited effectiveness of the 60-day notice in the Federal Register in eliciting public comment, this requirement could be eliminated.

Under a pilot project approach, an agency would develop a process to examine its information collections for opportunities to reduce burden. The experiences at IRS and EPA show that targeted burden reduction efforts depend on tapping the expertise of program staff, who are generally closely involved in the effort. That is, finding opportunities to reduce burden requires strong familiarity with the programs involved. Pilot projects would be expected to build on the lessons learned at IRS and EPA. For example, these agencies have used a variety of approaches to reducing burden, such as
● sharing information—for example, by facilitating cross-agency data sharing;
● standardizing data for multiple uses (“collect once—use multiple times”);
● integrating data to avoid duplication; and
● re-engineering work flows.

Pilot projects would be most appropriate for agencies for which information collections are a significant aspect of the mission.
As the results and lessons from the pilots become available, OMB may choose to apply them at other agencies by approving further pilots. Lessons learned from the mandated pilots could thus be applied more broadly.

In developing processes to involve program offices in burden reduction, agencies would not have to impose a particular organizational structure for the burden reduction effort. For instance, the burden reduction effort might not necessarily be performed by the CIO. For example, at IRS, the Office of Burden Reduction is not connected to the CIO, whereas at EPA, CIO staff are involved in promoting burden reduction through staff education and outreach. However, the EPA CIO depends on program offices to undertake specific initiatives. Under a mandate for pilot projects, agencies would be encouraged to determine the approach that works best in their own situations.

Finally, both IRS and EPA engaged in extensive outreach to the public and stakeholders. In many cases, this outreach involves contacts with professional and industry organizations, which are particularly valuable because they allow the agencies to get feedback without the need to design an information collection for the purpose (which would entail its own review process, burden estimate, and so on). According to agency officials, the need to obtain OMB approval for an information collection if they contact more than nine people often inhibits agencies’ use of questionnaires and similar types of active outreach to the public. Agencies are free, however, to collect comments on information posted on Web sites. OMB could also choose to delegate to pilot project agencies the authority to approve collections that are undertaken as part of public outreach for burden reduction projects.

The work we reported in May and June 2005 strongly suggested that despite the importance of public consultation to burden reduction, the current approach is often ineffective. Federal Register notices elicit such low response that we questioned the need for two such notices (the 60-day notice during the agency review and the 30-day notice during the OMB review). Eliminating the first notice, in our view, is thus not likely to decrease public consultation in any significant way. Instead, our intent was for agencies, through pilot projects, to explore ways to perform outreach to information collection stakeholders, including the public, that will be more effective in eliciting useful comments and achieving real reductions in burden.

In summary, Madam Chairman, the information collection review process appeared to have little effect on paperwork burden. As our review showed, the CIO review process, as currently implemented, tended to lack rigor, allowing agencies to focus on clearing an administrative hurdle rather than on performing substantive analysis. Going further, the expert forum characterized the whole clearance process as “pro forma.” The forum also made various suggestions for improving the clearance process; many of these were aimed at finding ways to reduce its absorption of OMB resources, such as by changing the definition of an information collection. Both we and the forum suggested removing one of the current administrative hurdles (the 60-day Federal Register notice). Although these suggestions refer to specific process improvements, the main point is not just to tweak the process.
Instead, the intent is to remove administrative impediments, with the ultimate aim of refocusing agency and OMB attention away from the current concentration on administrative procedures and toward the goals of the act—minimizing burden while maximizing utility. To that end, we suggested that the Congress mandate pilot projects that are specifically targeted at reducing burden. Such projects could help to move toward the outcomes that the Congress intended in enacting PRA.

Madam Chairman, this completes my prepared statement. I would be pleased to answer any questions.

For further information regarding this testimony, please contact Linda Koontz, Director, Information Management, at (202) 512-6420, or [email protected]. Other individuals who made key contributions to this testimony were Timothy Bober, Barbara Collier, David Plocher, Elizabeth Powell, J. Michael Resser, and Alan Stapleton.

Forum participants included individuals with the following affiliations:
● Associate Executive Director, Association of Research Libraries
● Princeton University; formerly National Opinion Research Center
● Founder and Executive Director, OMB Watch
● Administrative Law & Information Policy; formerly OMB and IRS Privacy Officer
● Vice President, SRA International; formerly OMB
● Director, Center for Technology in Government, University at Albany, State University of New York
● Partner, Guerra, Kiviat, Flyzik and Associates; formerly Department of Treasury and Secret Service
● Privacy & Information Policy Consultant; formerly counsel to the House of Representatives’ Subcommittee on Information, Justice, Transportation and Agriculture
● Editor of Access Reports, FOIA Journal
● Independent Consultant; formerly OMB
● Professor and Independent Consultant; formerly OMB
● Manager, Regulatory Policy, National Federation of Independent Business
● Director, Public Policy, Software & Information Industry Association
● Senior Scientist, Computer Science and Telecommunications Board, The National Academies
● Fellow in Law and Government, American University, Washington College of Law; formerly Administrative Conference of the U.S.

At the forum, others attending included GAO staff and a number of other observers:
● Deputy Administrator, Office of Information and Regulatory Affairs
● Minority Senior Legislative Counsel, House Committee on Government Reform
● Minority Counsel, House Committee on Government Reform
● Policy Analyst, Office of Information and Regulatory Affairs
● Director of Energy and Environment, U.S. Chamber of Commerce
● Senior Policy Analyst, Office of Management and Budget
● Policy Analyst, Office of Management and Budget
● Minority Professional Staff Member, House Committee on Government Reform
● Counsel, House Committee on Government Reform
● Policy Analyst, Office of Management and Budget
● Office of Information and Regulatory Affairs
● Senior Professional Staff Member, House Committee on Government Reform
● House Committee on Government Reform
● Branch Chief, Statistics Branch, Office of Information and Regulatory Affairs

In addition, staff of the National Academies’ National Research Council, Computer Science and Telecommunications Board, helped to develop and facilitate the forum: Charles Brownstein, Director; Kristen Batch, Research Associate; and Margaret Huynh, Senior Program Assistant.

Paperwork Reduction Act: Subcommittee Questions Concerning the Act’s Information Collection Provisions. GAO-05-909R. Washington, D.C.: July 19, 2005.
Paperwork Reduction Act: Burden Reduction May Require a New Approach. GAO-05-778T. Washington, D.C.: June 14, 2005.
Paperwork Reduction Act: New Approach May Be Needed to Reduce Government Burden on Public. GAO-05-424. Washington, D.C.: May 20, 2005.
Paperwork Reduction Act: Agencies’ Paperwork Burden Estimates Due to Federal Actions Continue to Increase. GAO-04-676T. Washington, D.C.: April 20, 2004.
Paperwork Reduction Act: Record Increase in Agencies’ Burden Estimates. GAO-03-619T. Washington, D.C.: April 11, 2003.
Paperwork Reduction Act: Changes Needed to Annual Report. GAO-02-651R. Washington, D.C.: April 29, 2002.
Paperwork Reduction Act: Burden Increases and Violations Persist. GAO-02-598T. Washington, D.C.: April 11, 2002.
Information Resources Management: Comprehensive Strategic Plan Needed to Address Mounting Challenges. GAO-02-292. Washington, D.C.: February 22, 2002.
Paperwork Reduction Act: Burden Estimates Continue to Increase. GAO-01-648T. Washington, D.C.: April 24, 2001.
Electronic Government: Government Paperwork Elimination Act Presents Challenges for Agencies. GAO/AIMD-00-282. Washington, D.C.: September 15, 2000.
Tax Administration: IRS Is Working to Improve Its Estimates of Compliance Burden. GAO/GGD-00-11. Washington, D.C.: May 22, 2000.
Paperwork Reduction Act: Burden Increases at IRS and Other Agencies. GAO/T-GGD-00-114. Washington, D.C.: April 12, 2000.
EPA Paperwork: Burden Estimate Increasing Despite Reduction Claims. GAO/GGD-00-59. Washington, D.C.: March 16, 2000.
Federal Paperwork: General Purpose Statistics and Research Surveys of Businesses. GAO/GGD-99-169. Washington, D.C.: September 20, 1999.
Paperwork Reduction Act: Burden Increases and Unauthorized Information Collections. GAO/T-GGD-99-78. Washington, D.C.: April 15, 1999.
Paperwork Reduction Act: Implementation at IRS. GAO/GGD-99-4. Washington, D.C.: November 16, 1998.
Regulatory Management: Implementation of Selected OMB Responsibilities Under the Paperwork Reduction Act. GAO/GGD-98-120. Washington, D.C.: July 9, 1998.
Paperwork Reduction: Information on OMB’s and Agencies’ Actions. GAO/GGD-97-143R. Washington, D.C.: June 25, 1997.
Paperwork Reduction: Governmentwide Goals Unlikely to Be Met. GAO/T-GGD-97-114. Washington, D.C.: June 4, 1997.
Paperwork Reduction: Burden Reduction Goal Unlikely to Be Met. GAO/T-GGD/RCED-96-186. Washington, D.C.: June 5, 1996.
Environmental Protection: Assessing EPA’s Progress in Paperwork Reduction. GAO/T-RCED-96-107. Washington, D.C.: March 21, 1996.
Paperwork Reduction: Burden Hour Increases Reflect New Estimates, Not Actual Changes. GAO/PEMD-94-3. Washington, D.C.: December 6, 1993.

| Americans spend billions of hours each year providing information to federal agencies by filling out forms, surveys, or questionnaires. A major aim of the Paperwork Reduction Act (PRA) is to minimize the burden that these information collections impose on the public, while maximizing their public benefit. Under the act, the Office of Management and Budget (OMB) is to approve all such collections. In addition, agency Chief Information Officers (CIO) are to review information collections before they are submitted to OMB for approval and certify that these meet certain standards set forth in the act.
GAO was asked to testify on the implementation of the act's provisions regarding the review and approval of information collections. For its testimony, GAO reviewed previous work in this area, including the results of an expert forum on information resources management and the PRA, which was held in February 2005 under the auspices of the National Research Council. GAO also drew on its earlier study of CIO review processes (GAO-05-424) and alternative processes that two agencies have used to minimize burden. For this study, GAO reviewed a governmentwide sample of collections, reviewed processes and collections at four agencies that account for a large proportion of burden, and performed case studies of 12 approved collections. Among the PRA provisions aimed at helping to achieve the goals of minimizing burden while maximizing utility is the requirement for CIO review and certification of information collections. GAO's review of 12 case studies showed that CIOs provided these certifications despite often missing or inadequate support from the program offices sponsoring the collections. Further, although the law requires that support be provided for certifications, agency files contained little evidence that CIO reviewers had made efforts to get program offices to improve the support they offered. Numerous factors have contributed to these problems, including a lack of management support and weaknesses in OMB guidance. Because these reviews were not rigorous, OMB, the agency, and the public had reduced assurance that the standards in the act--such as minimizing burden--were consistently met. To address the issues raised by its review, GAO made recommendations to the agencies and OMB aimed at strengthening the CIO review process and clarifying guidance. OMB and the agencies report making plans and taking steps to address GAO's recommendations. Beyond the collection review process, the Internal Revenue Service (IRS) and the Environmental Protection Agency (EPA) have set up processes that are specifically focused on reducing burden. These agencies, whose missions involve numerous information collections, have devoted significant resources to targeted burden reduction efforts that involve extensive public outreach. According to the two agencies, these efforts led to significant reductions in burden. For example, each year, IRS subjects a few forms to highly detailed, in-depth analyses, reviewing all data requested, redesigning forms, and involving stakeholders (both the information users and the public affected). IRS reports that this process--performed on forms that have undergone CIO review and received OMB approval--has reduced burden by over 200 million hours since 2002. In contrast, for the 12 case studies, the CIO review process did not reduce burden. When it considers PRA reauthorization, the Congress has the opportunity to promote new approaches, including alternatives suggested by the expert forum and by GAO. Forum participants made a range of suggestions on information collections and their review. For example, they suggested that OMB's focus should be on broad oversight rather than on reviewing each individual collection and observed that the current clearance process appeared to be "pro forma." They also observed that it seemed excessive to require notices of collections to be published twice in the Federal Register, as they are now. GAO similarly observed that publishing two notices in the Federal Register did not seem to be effective, and suggested eliminating one of these notices. 
GAO also suggested that the Congress mandate pilot projects to target some collections for rigorous analysis along the lines of the IRS and EPA approaches. Such projects would permit agencies to build on the lessons learned by the IRS and EPA and potentially contribute to true burden reduction. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The five major Army inventory control points manage secondary items and repair parts valued at $17 billion. These items are used to support Army track and wheeled vehicles, aircraft, missiles, and communication and electronic systems. The process for identifying the items and the quantity to stock begins with developing the budget request—the key to effective inventory management. If too few or the wrong items are available to support the forces, then readiness suffers and the forces may not be able to perform their assigned military missions. On the other hand, if too many items are acquired, then limited resources are wasted and unnecessary costs are incurred to manage and maintain the items. The Army uses different processes for determining its spare and repair parts budget requests and for determining which parts to buy or repair. The process for determining spare and repair parts budget requests is based on data from the budget stratification reports, which show the dollar value of requirements and inventory available to meet the requirements. When an item’s available inventory is not sufficient to meet the requirements, it is considered to be in a deficit position. The aggregate value of items in a deficit position then becomes the Army’s basis for determining its spare and repair parts needs. As these needs are formulated into a budget request, the end result (budget request) is normally less than the aggregate value of items in a deficit position. This makes it even more important that the true needs be based on accurate data. Otherwise, funds may be allocated to procuring spare and repair parts that should be spent on other priority needs. Using accurate data in the requirements determination process avoids such misallocation of funds. We have previously issued reports pointing out data inaccuracy problems in the Army’s requirements determination process and the effect of these inaccuracies on inventory decisions. See appendix IV. The process for determining which items to buy or repair is based on information in the item’s supply control study, which is automatically prepared when an item reaches the point at which insufficient assets are available or due in to meet requirements. When a study is prepared, the item manager validates the requirements and asset information in the study. Based on the results of the validated data, the item manager will decide whether to buy, repair, or not buy the quantity recommended by the study. We reviewed 258 items from the universe of 8,526 items that were in a deficit inventory position as of September 30, 1994. The selected items represented 3 percent of the items in a deficit position but accounted for $519 million, or 69 percent, of the $750 million deficit inventory value. We found that 94 of the 258 items, with a reported deficit inventory value of $211 million, had data errors that affected the items’ requirements or inventory available to satisfy the requirements. Table 1 shows the results of our review for the Army’s inventory control points. Overstated requirements and understated inventory levels were the major reasons items were erroneously reported in a deficit position. In addition, some items were incorrectly included in the process for determining funding requirements. If the items’ inventory position had been correctly reported, the true deficit value for the 94 items would have been about $23 million rather than $211 million. Table 2 shows the major reasons why items were incorrectly classified as deficit. 
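The deficit test and the high-value sampling rule described above reduce to a few lines of logic. The sketch below is illustrative only: the field names and exclusion flags are hypothetical, since the report does not describe the stratification database layout, but the arithmetic (deficit equals requirements minus on-hand and due-in assets, sampled at a $500,000 threshold) follows the text.

```python
from dataclasses import dataclass

@dataclass
class Item:
    nsn: str              # national stock number
    requirement: float    # dollar value of total requirements
    on_hand: float        # dollar value of assets on hand
    due_in: float         # dollar value of assets due in
    army_secondary: bool = True   # hypothetical flag: Army-managed secondary item
    reimbursed: bool = False      # hypothetical flag: funded by another service,
                                  # a foreign military sale, or another appropriation

def deficit_value(item: Item) -> float:
    """An item is 'deficit' when on-hand plus due-in assets fall short of
    requirements; the shortfall is valued in dollars."""
    return max(0.0, item.requirement - (item.on_hand + item.due_in))

def stratify(items: list[Item]) -> list[tuple[str, float]]:
    """Items that do not belong in the stratification (e.g., reimbursed
    buys) should be filtered out before deficits are aggregated."""
    return [(i.nsn, deficit_value(i))
            for i in items
            if i.army_secondary and not i.reimbursed and deficit_value(i) > 0]

def sample_high_value(deficits: list[tuple[str, float]],
                      threshold: float = 500_000) -> list[tuple[str, float]]:
    """GAO's sample: every item with a reported deficit of $500,000 or more."""
    return [(nsn, value) for nsn, value in deficits if value >= threshold]
```

Applied to the September 30, 1994, universe, the $500,000 threshold selected 258 of the 8,526 deficit items, about 3 percent of the items but $519 million (69 percent) of the $750 million total reported deficit value.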
When insufficient inventory is on hand and due in to meet an item’s requirements, the budget stratification process will report the item as being deficit. If the item’s deficit position is caused by overstated requirements, this means that resources could be wasted buying unneeded items. As shown in table 2, overstated requirements caused 53 items to be erroneously reported as being in a deficit position. The overstated requirements resulted from inaccurate demand data, inaccurate leadtime data, and lower-than-expected requirements. Table 3 shows the number of instances where these reasons caused the items’ requirements to be overstated. The following examples illustrate the types of inaccurate data that caused overstated requirements: The item manager for an aircraft floor item used on the CH-47 Chinook helicopter said that the database still included demands from Operations Desert Shield and Desert Storm. Including these demands in the requirements determination caused the budget stratification process to erroneously classify the item as having a deficit inventory position of about $500,000. If the outdated demands had been purged from the system, the item would not have been in a deficit position. According to the item manager for the front lens assembly item used on the AN/PVS-7B Night Vision Goggles, the item requirements shown in the budget stratification report did not materialize. She said that the report showed the item as having a deficit inventory position of $2.4 million. However, when it came time to procure the item, the project leader reduced the planned procurement quantity because the field units indicated they did not like the item. The item’s actual deficit position should have been only $18,000. According to the item manager, an angle drive unit used on the M2/M3, M2A1/M3A1 Bradley Fighting Vehicle system had an inflated safety level requirement in the budget stratification report. The report showed a safety level of 6,887 units instead of the correct safety level of 355. As a result, a deficit inventory position of $6.6 million was reported. When a prime stock number has authorized substitute items, the requirements and inventory for the prime and substitute items are supposed to be added and shown as one requirement and one inventory level under the prime number. This did not happen. The requirements for both types of items were shown as one requirement but the inventory was not. As a result, the inventory to meet the overall requirement was understated, and the item was placed in a deficit position. For example, according to the item manager for a night window assembly used on the TOW subsystem for the M2/M3 Bradley Fighting Vehicle, the budget stratification report showed a deficit supply position of $800,000 for the item. This occurred because inventory belonging to a substitute item was not counted toward the prime item’s requirements. The item manager said the true deficit for the assembly was $65,000. There were also requirements problems for items being repaired at maintenance facilities. The requirements system did not accurately track stock in transit between overhaul facilities and the depots. According to item managers at several inventory control points, they had problems either tracking the physical movement of inventory between the depots and repair facilities, or ensuring that all records were processed so the database accurately accounted all applicable assets. These problems could cause items to be erroneously reported as being in a deficit position. 
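The substitute-item failure is concrete enough to sketch. Under the stated roll-up rule, a prime item’s requirements and inventory are both supposed to be combined with those of its authorized substitutes under the prime stock number; in the failure observed, requirements were combined but inventory was not. The balances below are hypothetical, chosen only so the outputs line up with the night window assembly example (an $800,000 reported deficit versus a $65,000 true deficit); the report gives the deficits, not the underlying balances.

```python
def rolled_up_deficit(prime_req, prime_inv, sub_req, sub_inv,
                      count_sub_inventory=True):
    """The correct roll-up adds both requirements AND both inventories under
    the prime stock number; the observed error added requirements only."""
    total_req = prime_req + sub_req
    total_inv = prime_inv + (sub_inv if count_sub_inventory else 0)
    return max(0, total_req - total_inv)

# Hypothetical balances: a $1.0 million combined requirement, $200,000 of
# prime-item stock, and $735,000 of substitute stock that goes uncounted.
true_deficit = rolled_up_deficit(1_000_000, 200_000, 0, 735_000)
reported = rolled_up_deficit(1_000_000, 200_000, 0, 735_000,
                             count_sub_inventory=False)
print(true_deficit)   #  65000 -> comparable to the $65,000 true deficit
print(reported)       # 800000 -> comparable to the $800,000 reported deficit
```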
Table 4 shows how often these reasons resulted in understated inventory levels. Our review of selected items identified nine items that should have been excluded from the budget stratification process. By including these items, the budget stratification process identified funding needs for the items when, in fact, the funds to procure the items were being provided by another service, a foreign country under a foreign military sales agreement, or another appropriation. Table 5 shows the number of items that were incorrectly included in the budget stratification process. The following examples illustrate the effect of including “excluded” items in the budget stratification process: According to the item manager for a fire control electronic unit used on the M1A2 main battle tank, the Army issued a contract in August 1993 to procure items to meet the Army’s requirements as well as foreign military sales. Because the Army is reimbursed for foreign military sales items, these items should have been excluded from the budget stratification process. However, the items were included in the stratification process and were reported as having a deficit inventory position of $2.3 million. The inventory control point procured a gas-particulate filter unit used in producing modular collective protective equipment. According to the item manager, procurement appropriation funds, provided by the program manager’s office, were used to buy the items. Because the stratification process is only supposed to deal with items procured by the Defense Business Operations Fund, the item should not have been included in the stratification process and a deficit inventory position of about $800,000 should not have been reported. According to the item manager, the Air Force manages and makes all procurements for a panel clock item. The Army’s budget stratification report showed this item had a deficit inventory position of $700,000. However, because the Air Force managed this item, the panel clock should not have been coded as an Army secondary item for inclusion in the budget stratification report. The item manager for an electronic component item said that the item should have been coded as an inventory control point asset rather than a project manager’s office asset. Because project manager items are not available for general issue, these items were not counted against the item’s requirements in the budget stratification report. If these items had been properly coded, the item would not have been reported as having a $700,000 deficit inventory position. According to the item manager, an electronic component item should have been coded as a major end item rather than a secondary item and not included in the budget stratification process. The item was reported as having a deficit inventory position of $500,000. The Army is aware of many of the processing, policy, and data problems affecting the accuracy of the requirements data. Furthermore, the Army has identified 32 change requests to correct problems with the requirements determination and supply management system. According to Army officials, the cost to implement the 32 change requests would be about $660,000, and in their opinion, the benefits would greatly outweigh the added costs. The officials said these changes would correct many of the problems, including some of the ones we identified during our review. 
Nevertheless, not all of these change requests have been approved for funding because the Department of Defense is developing a standard requirements system as part of its Corporate Information Management initiative and does not want to spend resources to upgrade existing systems. As a result, it has limited the changes that the services can make to their existing systems. Army officials said that the standard system is not expected to be implemented for at least 4 years. Furthermore, major parts of the existing system will probably be integrated into the standard system. Therefore, unless the data problems are corrected, they will be integrated into the standard system and the Army will still not have reliable data. Army officials also cited examples where additional system change requests are needed to correct other data problems in the requirements determination system. For example, the depots do not always confirm material release orders/disposal release orders received from the inventory control points. As a result, the inventory control points do not know if the depots actually received the orders. The officials identified numerous instances where the depots put the release orders in suspense because of higher priority workloads. As a result, release orders were processed late, processed out of sequence, or lost and never processed. Because the inventory control points could not adequately track the release orders, they might have reissued them. Such reissuance could have caused duplicate issues or disposals, imbalances in the records between the inventory control points and the depots, and poor supply response to the requesting Army units. A system change request was initiated in November 1994 to address this problem, but the request has not yet received funding approval. Although Army officials could not provide a cost estimate to implement the change request, it could save about $1 million in reduced workload for the inventory control points and depots. According to Army officials, one programming application in the requirements determination system uses reverse logic to calculate the supply positions of serviceable and unserviceable assets. It compares the supply position of all serviceable assets to the funded approved acquisition objective (current operating and war reserve requirements). However, for the same item, the program compares the supply position of all unserviceable assets to the total of the current operating and war reserve requirements, the economic retention quantity, and contingency quantity. The effect of this is that serviceable inventory can be sent to disposal while unserviceable inventory is being returned to the depots. According to Aviation and Troop Command records, the Command disposed of $43.5 million of serviceable assets at the same time that $8.5 million of unserviceable assets, of the same kind, were returned to the depots between March and September 1994. By September 1995, the Command had disposed of $62 million of serviceable assets. Command officials said that a system change request was initiated in November 1994 to correct the programming logic problem. However, the request did not receive funding approval because it violated Department of Army policy, even though the estimated cost to implement the change request would be less than $20,000. 
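The reverse logic is easier to see as code. A minimal sketch with hypothetical names and quantities: serviceable stock is tested for excess against only the funded approved acquisition objective (AAO), while unserviceable stock of the same item is tested against the AAO plus economic retention and contingency quantities, so serviceable units can be declared excess and disposed of even as unserviceable units are retained and returned for costly overhaul.

```python
def serviceable_excess(serviceable_qty, funded_aao):
    # Serviceable assets are compared only to the funded approved
    # acquisition objective (current operating plus war reserve needs).
    return max(0, serviceable_qty - funded_aao)

def unserviceable_excess(unserviceable_qty, funded_aao,
                         econ_retention, contingency):
    # Unserviceable assets of the SAME item are compared to a larger
    # total, so more of them are retained than serviceable assets.
    return max(0, unserviceable_qty - (funded_aao + econ_retention + contingency))

# Hypothetical quantities: 100 serviceable and 100 unserviceable units,
# a funded AAO of 80, economic retention of 15, and contingency of 10.
print(serviceable_excess(100, 80))              # 20 -> units sent to disposal
print(unserviceable_excess(100, 80, 15, 10))    #  0 -> all 100 units retained
```

The November 1994 change request would presumably remove this asymmetry so that serviceable units are retained in preference to unserviceable ones of the same item.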
Although this change will not reduce the reported deficit quantities, it will allow the commands to keep more serviceable items in lieu of unserviceables, and it will reduce overhaul costs. Furthermore, according to Command records, this policy is causing the disposal of high-dollar, force modernization items that could result in re-procurement and adversely affect stock availability to field units. We recommend that the Secretary of Defense direct the Secretary of the Army to proceed with the pending system change requests to correct the data problems. Doing so could correct many of the problems identified in our report. Furthermore, the corrective actions would improve the overall reliability and usability of information for determining spare and repair parts requirements. The Department of Defense agreed with the report findings and partially agreed with the recommendation. It said that instead of the Secretary of Defense directing the Army to proceed with the system change request, the Army will be requested to present a request for funding for the system changes to the Corporate Configuration Control Board at the Joint Logistics Systems Center. The Board, as part of the Corporate Information Management initiative, was established to consider and resolve funding matters related to changes to existing systems. In our opinion, the action proposed by the Department of Defense achieves the intent of our recommendation, which was for the Army to seek funds to correct the data problems in its requirements determination system. Defense’s comments are presented in their entirety in appendix II. We are sending copies of this report to the Secretary of the Army; the Director, Office of Management and Budget; and the Chairmen, House Committee on Government Reform and Oversight, Senate Committee on Governmental Affairs, the House and Senate Committees on Appropriations, House Committee on National Security, and Senate Committee on Armed Services. Please contact me at (202) 512-5140 if you have any questions concerning this report. Major contributors to this report are listed in appendix III. We held discussions with responsible officials and reviewed Army regulations to determine the process used by the Army to identify its spare and repair parts needs for its budget development process. We focused on the process used to identify items in a deficit position. As part of these discussions, we also studied the budget stratification process, which is the major database input used in the budget development process. To identify the items in a deficit position, we obtained the September 30, 1994, budget stratification data tapes for the five Army inventory control points: Army Armament, Munitions and Chemical Command, Aviation and Troop Command, Communications-Electronics Command, Missile Command, and Tank-Automotive Command. From the total universe of 8,526 secondary items with a deficit inventory position valued at $750 million, we selected all items that had a deficit position of $500,000 or more. This resulted in a sample of 258 items with a total inventory deficit position of $519 million, or 69 percent of the total deficit. For each of the 258 selected items, we obtained information from the responsible item manager to determine whether the item was actually in a deficit position as of September 30, 1994. For those items that the budget stratification process had erroneously placed in a deficit position, we determined the reason for the misclassification. 
We obtained this information by reviewing item manager files and discussing the items with responsible item management personnel. We categorized the reasons for the erroneous classifications to determine the frequency distribution for each type of reason. We then determined through discussions with item management officials and review of system change requests what actions were taken or planned to correct the identified problems. We performed our review from October 1994 to July 1995 in accordance with generally accepted government auditing standards. Army Inventory: Growth in Inventories That Exceed Requirements (GAO/NSIAD-90-68, Mar. 22, 1990). Defense Inventory: Shortcomings in Requirements Determination Processes (GAO/NSIAD-91-176, May 10, 1991). Army Inventory: Need to Improve Process for Establishing Economic Retention Requirements (GAO/NSIAD-92-84, Feb. 27, 1992). Army Inventory: More Effective Review of Proposed Inventory Buys Could Reduce Unneeded Procurement (GAO/NSIAD-94-130, June 2, 1994). Defense Inventory: Shortages Are Recurring, But Not a Problem (GAO/NSIAD-95-137, Aug. 7, 1995). | GAO reviewed the: (1) accuracy of the databases used to determine Army spare and repair parts requirements and inventory levels for Defense Business Operations Fund budget requests; and (2) actions taken to correct data problems that could affect the reliability of these budget estimates. 
GAO found that: (1) the Army's 1994 budget report contained numerous inventory data inaccuracies which led to erroneous reports of deficit inventory positions for several items; (2) overstated requirements and understated inventory levels were the major cause of most of the false deficit position reports; (3) the actual deficit position value for 94 items was about ten-fold less than what was reported; (4) some items should have been excluded from the budget stratification process; (5) although the Army is aware of many requirements data problems and has identified several change requests to correct these problems, the Army has not been able to correct these problems because the Department of Defense (DOD) is developing a standard requirements determination system for all the services and has limited how much the services can spend to change their existing systems; (6) the new DOD standard system will not be implemented for 4 years and most of its existing data will be integrated into that system; and (7) the Army cannot ensure that its budget requests represent its actual funding needs for spare and repair parts, that the new system will receive accurate data when it is implemented, or that expensive usable items will not be discarded and reprocured. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Most of the funding in DOD’s fiscal year 1997 aircraft investment strategy is for the procurement of new aircraft such as the F/A-18E/F, F-22, and Joint Strike Fighter (JSF), while some is for the retrofit or remanufacture of existing aircraft, such as the AV-8B and the Longbow Apache. Table 1 describes the 17 aircraft programs and their estimated procurement funding requirements and appendix I provides details on these programs. DOD is pursuing these aircraft programs at a time when the federal government is likely to be faced with significant budgetary pressure for the foreseeable future. This pressure comes from efforts to balance the budget, coupled with funding demands for such programs as Social Security, Medicare, and Medicaid. Consequently, there are likely to be limitations on all discretionary spending, including defense spending, for the long term. This report addresses the availability of funding to support DOD’s aircraft investment strategy as planned prior to the Quadrennial Defense Review, but does not address specific aircraft requirements. Our previous reports have questioned the need for and timing of a number of DOD’s aircraft procurements. (A listing of prior reports is provided at the end of this report.) DOD asserts that its aircraft modernization programs are affordable as planned. On June 27, 1996, DOD officials testified before House Subcommittees that DOD’s overall aircraft investment plans were within historical norms and affordable within other service priorities. The officials further explained that the historical norms referred to were based on the aircraft funding experience of the early 1980s. Our review indicated that using the early to mid-1980s, the peak Cold War defense spending years, as a historical norm for future aircraft investments is not realistic in today’s budgetary and force structure environment. As shown in figure 1, DOD’s overall appropriations, expressed in fiscal year 1997 dollars, have decreased significantly from their high point in fiscal year 1985, and the amounts appropriated in recent years are at, or near, the lowest point over the past 24 years. As shown in figure 1, our review of aircraft procurement funding data from fiscal years 1973 through 1996 showed that funding for DOD’s aircraft purchases as a percentage of DOD’s overall budget fluctuated in relation to the changes in DOD’s overall budget. Funding for aircraft purchases increased significantly as DOD’s overall funding increased in the early 1980s and decreased sharply as the defense budget decreased in the late 1980s and early 1990s. In contrast, DOD’s planned aircraft investment strategy does not follow this pattern and calls for significantly increased funding for aircraft purchases during a period when DOD’s overall funding is expected to remain stable in real terms. Funding for DOD’s aircraft purchases was at its highest point, both in dollar terms and as a percentage of the overall DOD budget, during the early to mid-1980s. Figure 2 shows the 24-year funding history for DOD’s aircraft purchases from fiscal years 1973 through 1996. During that period, DOD spending on aircraft purchases fluctuated somewhat but averaged about 4.8 percent of the overall DOD budget. From fiscal years 1982 through 1986, DOD used from 6.0 percent to 7.7 percent of its overall annual funding on aircraft purchases. 
In contrast, since fiscal year 1973, the next highest level of annual aircraft funding was 5.5 percent in fiscal year 1989 and, in 12 other years, the funding was less than 4.5 percent of the overall DOD funding. Therefore, a long-term average would be more appropriate than early 1980’s historical norms as a benchmark for an analysis of funding patterns, and its use would even out the high aircraft procurement funding of the early 1980s and the lower funding of the post-Vietnam and post-Cold War eras. However, such a benchmark should not be used as a threshold for spending on aircraft purchases because it may not reflect the changed nature of the defense requirements and U.S. strategy that occurred with the end of the Cold War. If DOD’s aircraft investment strategy is implemented as planned and the defense budget stabilizes at DOD’s currently projected fiscal year 2003 level (about $247 billion in constant fiscal year 1997 dollars), DOD’s projected funding for aircraft purchases will exceed the historical average percentage of the defense budget for aircraft purchases in all but 1 year between fiscal year 2000 and 2015. For several years, it will approach the highest historical percentages of the defense budget for aircraft purchases. Those high percentages were attained during the peak Cold War spending of the early to mid-1980s. In fiscal year 1996, DOD spent $6.8 billion, or 2.6 percent of its overall budget, on aircraft purchases. To implement its aircraft investment strategy, DOD expects to increase its annual spending on aircraft purchases significantly from current levels and to sustain those higher levels for the indefinite future. For example, as shown in figure 4, DOD’s annual spending on aircraft purchases is projected to increase about 94 percent from the fiscal year 1996 level to $13.2 billion by fiscal year 2002. Also, for 15 of the next 20 fiscal years beginning in fiscal year 1997, DOD’s projected spending for aircraft purchases is expected to equal or exceed $11.9 billion annually. For 3 years during this period, DOD’s projected annual spending on aircraft purchases will exceed $16 billion (6.5 percent of the budget) and for 1 of those years, it will exceed $18 billion (7.3 percent of the budget). In the current security and force structure environment, the need for that level of additional funding has not been made clear by DOD. Furthermore, other than stating that overall procurement funding in general will be increased, DOD has not identified specific reductions elsewhere within the procurement account or within the other major accounts to offset the significant proposed increases in aircraft procurement funding. Because the overall level of defense funding is expected to be stable, at best, any proposed increase in spending for a particular account or for a project will have to be offset elsewhere within the budget. Historically, acquisition programs almost always cost more than originally projected. Figure 4 is a conservative projection of DOD’s aircraft funding requirements because no cost growth beyond current estimates is considered. Research has shown that unanticipated cost growth has averaged at least 20 percent over the life of aircraft programs. For at least one current program, it appears the historical patterns will be repeated. In January 1997, DOD reported that the procurement cost of the F-22 was expected to increase by over 20 percent and devised significant initiatives to offset that growth. 
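The budget-share figures in this comparison are simple ratios and can be reproduced directly from the dollar amounts cited above. A quick check (all figures in billions of constant fiscal year 1997 dollars, taken from this report):

```python
budget = 247.0          # defense budget assumed to stabilize at the FY 2003 level
historical_avg = 4.8    # average share of the budget spent on aircraft, FY 1973-96

print(round(budget * historical_avg / 100, 1))   # 11.9 -> the $11.9B benchmark
print(round(100 * 16.0 / budget, 1))             # 6.5  -> share at $16B per year
print(round(100 * 18.0 / budget, 1))             # 7.3  -> share at $18B per year
print(round(100 * (13.2 - 6.8) / 6.8))           # 94   -> growth, FY 1996 to 2002
```

In other words, sustained spending of $11.9 billion a year is the historical 4.8 percent average applied to a $247 billion budget, and the $16 billion and $18 billion peaks correspond to the 6.5 and 7.3 percent shares cited above.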
We reported on this F-22 cost growth in June 1997 and concluded that the initiatives to offset the cost growth were optimistic. In addition, the projected funding requirements shown in figures 3 and 4 may be understated because they do not include any projected funding for other aircraft programs that have not been approved for procurement. For example, potential requirements exist to replace the KC-135, C-5A, F-15E, F-117, EA-6B, S-3B, and other aircraft. Adding any of these requirements to DOD’s aircraft investment strategy would further complicate the funding problems. The amount of funding likely to be available for national defense in the near term has been projected by both the President and the Congress. Both have essentially agreed that the total national defense budget will not increase measurably in real terms through fiscal year 2002. While the Congress has not expressed its sentiments regarding the defense budget beyond fiscal year 2002, last year DOD’s long-term planning for its aircraft investment strategy assumed a real annual growth factor of 1 percent. Accordingly, procurement funding to accomplish the aircraft modernization programs was partially dependent on some level of real growth in the defense budget. However, because of commitments to balance the federal budget by both the President and the Congress, it appears likely that the defense budget will stabilize at current levels or decrease further, rather than increase as DOD’s aircraft investment plans have assumed. According to DOD officials, the long-term planning now assumes no real growth in the defense budget. The impact of this change on DOD’s aircraft programs is not yet clear. DOD plans to increase overall funding for procurement programs over the next few years, and the aircraft programs are expected to be a prime beneficiary of that increased funding. DOD expects to increase procurement spending to a level of approximately $61.2 billion per year, from the current level of about $44.3 billion per year, while keeping overall defense spending at current levels, at least through fiscal year 2002. Of the $39.0 billion cumulative increase in procurement spending that is expected through fiscal year 2002, about $17.7 billion is projected to be used for DOD’s aircraft investment strategy. To increase procurement funding while keeping overall defense spending at current levels, DOD anticipates major savings will be generated from infrastructure reductions and acquisition reform initiatives, as well as increased purchasing power through significantly lower inflation projections. We found, however, that there are unlikely to be sufficient savings available to offset DOD’s projected procurement increases. DOD’s planned procurement funding increase was partially predicated on base closure savings of $17.8 billion (then-year dollars) through fiscal year 2001, a component of infrastructure, and shifting this money to pay for additional procurement. In 1996, however, we found no significant net infrastructure savings between fiscal years 1996 and 2001 because the proportion of infrastructure in the DOD budgets was projected to remain relatively constant through fiscal year 2001. Therefore, through fiscal year 2001, DOD will have less funding available than expected for procurement from its infrastructure reform initiatives. 
In addition, our ongoing evaluation of acquisition reform savings on major weapon systems suggests that the amount of such savings that will be available to increase procurement spending is uncertain. Our work shows that the savings from acquisition reform have been used by the very programs generating the savings to fund other needs. This raises concern as to whether the latest acquisition reform initiatives will provide savings to realize modernization objectives for other weapons systems within the time frames envisioned. Without the level of savings expected from infrastructure reductions and acquisition reform, DOD will face difficult choices in funding its modernization plans. Finally, based on changes in future inflation factors, DOD calculated in its 1997 future years defense plan (FYDP) that its purchases of goods and services from fiscal years 1997 through 2002 would cost about $34.7 billion (then-year dollars) less than it had planned in its 1996 FYDP. The “inflation dividend” allowed DOD to include about $19.5 billion in additional programs in fiscal years 1997-2001 and permitted the executive branch to reduce DOD’s projected funding by $15.2 billion over the same time period. However, using different inflation estimates, CBO calculated the cost reduction at only $10.3 billion, or $24.4 billion less than DOD’s estimate. Because DOD’s projected funding was reduced by $15.2 billion, CBO’s estimate indicates that DOD’s real purchasing power, rather than increasing, may be reduced by about $5 billion. If true, then DOD may have to make adjustments in its programs. We recently raised an issue on the Air Force’s F-22 air superiority fighter that further complicates the situation. In estimating the cost to produce the F-22, the Air Force used an inflation rate of about 2.2 percent per year for all years after 1996. However, in agreeing to restructure the F-22 program to address the recently acknowledged $15 billion (then-year dollars) program cost increase, the Air Force and its contractors used an inflation rate of 3.2 percent per year. Increasing the inflation rate by 1 percent added billions of dollars to the F-22 program’s estimated cost. We are concerned that the higher inflation rates could have a significant budgetary impact for other DOD acquisition programs. Similar increases on other major weapon programs would add billions of dollars to the amounts needed and further jeopardize DOD’s ability to fund its modernization plans. The basis for DOD’s projections of total annual procurement funding is the cumulative annual funding needs of multiple weapons programs, each of which has typically been based on optimistic assumptions about procurement quantities and rates. Accordingly, DOD’s projections of total annual procurement funding have been consistently optimistic. DOD’s traditional approach to managing affordability problems is to reduce procurement quantities and extend production schedules without eliminating programs. Such actions normally result in significantly increased system procurement costs and delayed deliveries to operational units. We recently reported that the costs for 17 of 22 full-rate production systems we reviewed increased by $10 billion (fiscal year 1996 dollars) beyond original estimates through fiscal year 1996 due to stretching out the completion of the weapons’ production. 
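Both inflation issues above are straightforward arithmetic. The first block below reproduces the inflation-dividend figures from this report; the second is a hypothetical compounding sketch (a constant $2 billion annual cost over 15 program years) showing how a 1-point difference in assumed inflation, such as the F-22’s 2.2 versus 3.2 percent, adds billions of then-year dollars.

```python
# Inflation dividend (then-year dollars, billions), figures from this report.
dod_dividend, cbo_dividend, funding_cut = 34.7, 10.3, 15.2
print(round(dod_dividend - cbo_dividend, 1))   # 24.4 -> gap between the estimates
print(round(cbo_dividend - funding_cut, 1))    # -4.9 -> ~$5B purchasing-power loss

# Sensitivity to the assumed inflation rate (hypothetical cost stream).
def then_year_total(annual_cost, rate, years=15):
    """Escalate a constant-dollar annual cost into cumulative then-year dollars."""
    return sum(annual_cost * (1 + rate) ** t for t in range(years))

low = then_year_total(2.0, 0.022)
high = then_year_total(2.0, 0.032)
print(round(high - low, 1))   # ~2.7 -> billions added by a 1-point rate change
```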
In the report just cited, we found that DOD had inappropriately placed a high priority on buying large numbers of untested weapons during low-rate initial production to ensure commitment to new programs and thus had to cut by more than half its planned full-rate production for many weapons that had already been tested. We also found that actual production rates were, on average, less than half of originally planned rates. Primarily because of funding limitations, DOD has reduced the annual full-rate production for 17 of the 22 proven weapons reviewed, stretching out the completion of the weapons’ production an average of 8 years (or 170 percent) longer than planned. Our work showed that DOD develops weapon system acquisition strategies that are based on optimistic projections of funding that are rarely achieved. As a result, a significant number of DOD’s weapon systems are not being procured at planned production rates, leading to program stretchouts and billions of dollars of increased costs. If DOD bought weapons at minimum rates during low-rate initial production, more funds would be available to buy proven weapons in full-rate production at more efficient rates and at lower costs. If DOD’s assumptions regarding future spending for its aircraft programs do not materialize, DOD may need to (1) reduce funding for some or all of the aircraft programs; (2) reduce funding for other procurement programs; (3) implement changes in infrastructure, operations, or other areas; or (4) increase overall defense funding. In other words, the likelihood of program stretchouts and significantly increased costs is very real. As the Nation proceeds into the 21st century faced with the prospect of a constrained budget, we believe DOD needs to take action now to address looming affordability problems with its aircraft investment strategy. Action needs to be taken now because, if major commitments are made to the initial procurement of all the planned aircraft programs (such as the F/A-18E/F, F-22, JSF, and the V-22) over the next several years, a significant imbalance is likely to result between funding requirements and available funding. Such imbalances have historically led to program stretchouts, higher unit costs, and delayed deliveries to operational units. Further, this imbalance may be long-term in nature, restricting DOD’s ability to respond to other funding requirements. DOD needs to reorient its aircraft investment strategy to recognize the reality of a constrained overall defense budget for the foreseeable future. Accordingly, instead of continuing to start aircraft procurement programs that are based on optimistic assumptions about available funds, DOD should determine how much procurement funding can realistically be expected and structure its aircraft investment strategy within those levels. DOD also needs to provide more concrete and lasting assurance that its aircraft procurement programs are not only militarily justified in the current security environment but clearly affordable as planned throughout their entire procurement. The key to ensuring the efficient production of systems is program stability. Understated cost estimates and overly optimistic funding assumptions result in too many programs chasing too few dollars. We believe that bringing realism to DOD’s acquisition plans will require very difficult decisions because programs will have to be terminated. 
While all involved may agree that there are too many programs chasing too few dollars, and could probably agree on the need to bring stability and executability to those programs that are pursued, it will be much more difficult to agree on which programs to cut. Nevertheless, the likelihood of continuing fiscal constraints and reduced national security threats should provide additional incentives for real progress in changing the structure and dominant culture of DOD’s weapon system acquisition process. Therefore, we recommend that the Secretary of Defense, in close consultation with the defense and budget committees of the Congress, define realistic, long-term projections of overall defense funding and, within those amounts, the portion of the annual procurement funding that can be expected to be made available to purchase new or significantly improved aircraft. In developing the projections, the Secretary should consider whether the historical average percentage of the total budget for aircraft purchases is appropriate in today’s security and budgetary environment. We also recommend that the Secretary reassess and report to the Congress on the overall affordability of DOD’s aircraft investment strategy in light of the funding that is expected to be available. The Secretary should clearly identify the amount of funding required by source, including (1) any projected savings from infrastructure and acquisition reform initiatives and (2) any reductions elsewhere within the procurement account or within the other major accounts. We further recommend that the Secretary fully consider the availability of long-term funding for any aircraft program before approving the procurement planned for that system. In commenting on a draft of this report, DOD partially concurred with our recommendations and stated that it is fully aware of the investment challenge highlighted in this report. DOD stated that its recent Quadrennial Defense Review addressed the affordability of the modernization programs that it believes are needed to meet the requirements of the defense strategy. The Quadrennial Defense Review recommended reductions in aircraft procurement plans. However, even to modernize the slightly smaller force that will result from the Quadrennial Defense Review, DOD believes that procurement funding must also rise to about $60 billion annually by fiscal year 2001, from about $44 billion in fiscal year 1997. Recognizing that overall defense budgets are not likely to increase substantially for the foreseeable future, DOD indicated that the additional procurement funds would be created by continuing efforts to reduce the costs of defense infrastructure and to fundamentally reengineer its business practices. Our recent reviews of DOD’s previous initiatives to reduce the costs of defense infrastructure and reengineer business practices indicate that the amount and availability of savings from such initiatives may be substantially less than DOD has estimated. If the projected savings do not materialize as planned, or if estimates of the procurement costs of weapon systems prove to be too optimistic, DOD will need to rebalance the procurement plans to match the available resources. This action would likely result in further program adjustments and extensions. 
Concerning aircraft procurement projections, we continue to believe that a clearer understanding of DOD’s long-term budgetary assumptions—including specific, realistic projections of funding availability and planned aircraft procurement spending—is necessary to determine the overall affordability of DOD’s aircraft investment strategy. Without this information, neither DOD nor the Congress will have reasonable assurances that the long-term affordability of near-term procurement decisions has been adequately considered. We gathered, assembled, and analyzed historical data on the overall defense budget, the services’ budget shares, the procurement budgets, and the aircraft procurement budgets. Much of this data was derived from DOD’s historical FYDP databases. We did not establish the reliability of this data because the FYDP is the most comprehensive and continuous source of current and historical defense resource data. The FYDP is used extensively for analytical purposes and for making programming and budgeting decisions at all DOD management levels. In addition, we reviewed historical information and studies—ours, CBO, and others—on program financing and affordability. We also gathered, assembled, and analyzed DOD-generated data on its aircraft programs and supplemented that, where necessary, with data from CBO. We reviewed DOD’s detailed positions on the affordability of its aircraft modernization programs, as presented to the Congress in a June 1996 hearing. We followed up with DOD and service officials on key aspects of that position. Our analysis included tactical aircraft, bombers, transports, helicopters, other aircraft purchases and major aircraft modification programs. This approach removes any cyclical effects on the investment in aircraft by allowing us to view the overall amount invested, as well as the major subcomponents of that investment. We focused on procurement figures and excluded research and development costs because we could not forecast what development programs DOD will undertake over the course of the next 20 to 30 years. We used DOD’s projections for the costs of these aircraft programs (except for the JSF costs, which are CBO projections based on DOD unit cost goals) and did not project cost increases, even though cost increases have occurred in almost all previous aircraft procurement programs. All dollar figures are in constant 1997 dollars, unless otherwise noted. The National Defense Authorization Act for Fiscal Year 1997 required DOD to conduct a Quadrennial Defense Review. As part of the review, DOD assessed a wide range of issues, including the defense strategy of the United States and the force structure required. As a result, DOD may reduce the quantities procured of some weapons programs. The details of how DOD plans to implement the recommendations of the Quadrennial Defense Review will not be available until the fiscal year 1999 budget is submitted to the Congress. Our analysis, therefore, does not take into account the potential effect of implementing the recommendations of the Quadrennial Defense Review. We performed our work from March 1996 to July 1997 in accordance with generally accepted government auditing standards. As agreed with your offices, we plan no further distribution of this report until 30 days from its issue date unless you publicly announce its contents earlier. 
At that time, we will send copies to other congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix III. Marine Corps aircraft. A single-piloted, light-attack, vertical/short take-off and landing aircraft used primarily for responsive close air support. This is a remanufacture program that converts older versions to the most recent production version and provides night fighting capability. Air Force aircraft. A new production aircraft that modernizes the airlift fleet. It will augment the C-5, C-141, and C-130 aircraft; carry outsize cargo into austere airfields; and introduce a direct deployment capability. Army helicopter. A new production, 24-hour, all-weather, survivable aerial reconnaissance helicopter to replace the AH-1, OH-6, and OH-58A/C helicopters and complement the AH-64 Apache. A little more than one-third of the total production aircraft will be equipped with Longbow capability. Air Force aircraft. A new production, medium-range, tactical airlift aircraft designed primarily for transport of cargo and personnel within a theater of operations. This model uses latest technology to reduce life-cycle costs and has more modern displays, digital avionics, computerized aircraft functions, fewer crew members, and improved cargo handling and delivery systems. Navy aircraft. A new production, all-weather, carrier-based airborne Combat Information Center providing tactical early warning, surveillance, intercept, search and rescue, communications relay, and strike and air traffic control. Air Force aircraft. A major modification to provide the Air Combat Command with new and improved capabilities for the AWACS radar. It involves both hardware and software changes to the AWACS. Air Force aircraft. A new production, next-generation stealthy air superiority fighter with first-look, first-kill capability against multiple targets. It will replace the F-15C aircraft in the air superiority role. Navy aircraft. A new-production, major model upgrade to the F/A-18C/D multimission tactical aircraft for Navy fighter escort, interdiction, fleet air defense, and close-air support mission requirements. Planned enhancements over the F/A-18C/D include increased range, improved survivability, and improved carrier suitability. It will replace F/A-18C/D models, A-6, and F-14 aircraft. Marine Corps helicopter. An upgrade to the Marine Corps AH-1W attack and UH-1N utility versions of this helicopter to convert both versions from 2-bladed to 4-bladed rotor systems and provide the attack version with fully integrated cockpits. The attack version provides close air support, anti-armor, armed escort, armed/visual reconnaissance and fire support coordination under day/night and adverse weather conditions. The utility version provides day/night and adverse weather command and control, combat assault support, and aeromedical evacuation. Air Force and Army aircraft. (Joint Surveillance Target Attack Radar System) A new production joint surveillance, battle management and targeting radar system on a modified E-8 aircraft that performs real time detection and tracking of enemy ground targets. Air Force and Navy aircraft. 
A new production, next-generation, multimission strike fighter. It will replace the Air Force’s F-16 and A-10, the Marine Corps’ AV-8B and F-18A/C/Ds, and be a “first-day survivable complement” to the Navy’s F-18 C/D and E/F aircraft. Air Force and Navy aircraft. (Joint Primary Aircraft Training System) A new production joint training aircraft and ground based training system, including simulators, that replaces the Air Force T-37B trainer aircraft, Navy T-34C trainer aircraft, and their associated ground systems. Army helicopter. A modification program to develop and provide weapons enhancements to the AH-64 Apache attack helicopter. The Longbow program will provide a fire-and-forget Hellfire missile capability to the AH-64 Apache helicopter that can operate in night, all-weather, and countermeasures environments. Navy helicopter. A Block II weapon systems upgrade of the Navy version of the Army Black Hawk to enhance mission area performance. It is a twin-engine medium lift, utility or assault helicopter performing anti-submarine warfare, search and rescue, anti-ship warfare, cargo lift, and special operations. Navy aircraft. A strike pilot training system to replace the T-2C and TA-4J for strike and E2 and C2 pilots. It includes the T-45A aircraft, simulators, and training equipment and materials. Army helicopter. A new production, twin-engine air assault, air cavalry, and aeromedical evacuation helicopter that transports up to 14 troops and equipment into battle. It continues to replace the UH-1H Iroquois helicopter. Navy, Marine Corps, and Air Force aircraft. A new production, tilt-rotor, vertical take-off, and landing aircraft designed to provide amphibious and vertical assault capability to the Marine Corps and replace or supplement troop carrier and cargo helicopters in the Marines, the Air Force, and the Navy. The following are our comments on the Department of Defense’s (DOD) letter dated June 8, 1997. 1. Although the Quadrennial Defense Review report recommended that adjustments be made to the number of aircraft to be procured and the rates at which they are to be procured, the report projected that additional procurement funding would be made available through base closures and other initiatives to reduce defense infrastructure and reengineer business practices. The details of these initiatives are not expected to be available until the fiscal year 1999 budget is submitted to the Congress. At this time, the availability of savings from planned initiatives is not clearly evident. 2. The Quadrennial Defense Review does not provide sufficiently detailed projections to judge the affordability of DOD’s new aircraft procurement plans by comparing the long-term funding expected to be available with the funding needed to fully implement those plans. We continue to believe that this type of long-term projection is needed by both DOD and the Congress to ensure that DOD’s aircraft procurement programs are clearly affordable as planned through the span of procurement. 3. We continue to believe that the $17 billion increased cost of procuring F/A-18E/F aircraft compared to F/A-18C/Ds is not warranted by the limited increases in performance that would be obtained. We recognize that, while the F/A-18E/F will provide some improvements over the F/A-18C/D, most notably in range, the F/A-18C/D’s current capabilities are adequate to accomplish its assigned missions. 
Our rebuttals to DOD’s specific comment are contained in our report, Naval Aviation: F/A-18E/F Will Provide Marginal Operational Improvement at High Cost (GAO/NSIAD-96-98, June 18, 1996). 4. Although procurement rates for F-22s during the planned low-rate initial production period were to be lowered in accordance with the Quadrennial Defense Review report, we continue to believe that the degree of overlap between development and production of the F-22 is high and that procurement of F-22s should be minimized until the aircraft demonstrates that it can successfully meet the established performance requirements during operational testing and evaluation. There has also been congressional concern about the cost and progress of the F-22 program. The Senate has initiated legislation to require us to review the F-22 development program annually. 5. We clarified the language in the report to more explicitly recommend that long-term projections of the availability of funds should be used as a guide to assess the likely availability of funds to carry out a program at the time of the procurement approval decision. The Quadrennial Defense Review recognized that more procurement dollars were being planned to be spent than were likely to be available over the long term. Our intent in making this recommendation is to recognize the difficulty DOD and the Congress face and to suggest some solid analysis that would aid in evaluating the long-term commitments that are inherent in nearer term decisions to procure weapon systems. A better understanding of the long-term budgetary assumptions underlying near-term decisions would clearly aid both DOD and the Congress in ensuring that needed weapon systems are affordable in both the near and long term. Combat Air Power: Joint Assessment of Air Superiority Can Be Improved (GAO/NSIAD-97-77, Feb. 26, 1997). B-2 Bomber: Status of Efforts to Acquire 21 Operational Aircraft (GAO/NSIAD-97-11, Oct. 22, 1996). Air Force Bombers: Options to Retire or Restructure the Force Would Reduce Planned Spending (GAO/NSIAD-96-192, Sept. 30, 1996). U.S. Combat Air Power: Aging Refueling Aircraft Are Costly to Maintain and Operate (GAO/NSIAD-96-160, Aug. 8, 1996). Combat Air Power: Assessment of Joint Close Support Requirements and Capabilities Is Needed (GAO/NSIAD-96-45, June 28, 1996). U.S. Combat Air Power: Reassessing Plans to Modernize Interdiction Capabilities Could Save Billions (GAO/NSIAD-96-72, May 13, 1996). Combat Air Power: Funding Priority for Suppression of Enemy Air Defenses May Be Too Low (GAO/NSIAD-96-128, Apr. 10, 1996). Navy Aviation: AV-8B Harrier Remanufacture Strategy Is Not the Most Cost-Effective Option (GAO/NSIAD-96-49, Feb. 27, 1996). Future Years Defense Program: 1996 Program Is Considerably Different From the 1995 Program (GAO/NSIAD-95-213, Sept. 15, 1995). Aircraft Requirements: Air Force and Navy Need to Establish Realistic Criteria for Backup Aircraft (GAO/NSIAD-95-180, Sept. 29, 1995). Longbow Apache Helicopter: System Procurement Issues Need to Be Resolved (GAO/NSIAD-95-159, Aug. 24 1995). Comanche Helicopter: Testing Needs to Be Completed Prior to Production Decisions (GAO/NSIAD-95-112, May 18, 1995). Cruise Missiles: Proven Capability Should Affect Aircraft and Force Structure Requirements (GAO/NSIAD-95-116, Apr. 20, 1995). Army Aviation: Modernization Strategy Needs to Be Reassessed (GAO/NSIAD-95-9, Nov. 21, 1994). Future Years Defense Program: Optimistic Estimates Lead to Billions in Overprogramming (GAO/NSIAD-94-210, July 29, 1994). 
| GAO reviewed the Department of Defense's (DOD) aircraft acquisition investment strategy, focusing on: (1) DOD's and the Congressional Budget Office's estimates of the annual funding needed for aircraft programs, as a percentage of the overall DOD budget, and a comparison of that percentage to a long-term historical average percentage of the defense budget; (2) the potential long-term availability of funding for DOD's planned aircraft procurements; and (3) DOD's traditional approach to resolving funding shortfalls. GAO noted that: (1) to meet its future aircraft inventory and modernization needs, DOD's current aircraft investment strategy involves the purchase or significant modification of at least 8,499 aircraft in 17 aircraft programs, at a total procurement cost of $334.8 billion (fiscal year 1997 dollars) through their planned completions; (2) DOD has maintained that its investment plans for aircraft modernization are affordable within expected future defense budgets; (3) DOD had stated earlier that sufficient funds would be available for its aircraft programs based on its assumptions that: (a) overall defense funding would begin to increase in real terms after fiscal year (FY) 2002; and (b) large savings would be generated from initiatives to downsize defense infrastructure and reform the acquisition process; (4) DOD's aircraft investment strategy may be unrealistic in view of current and projected budget constraints; (5) recent statements by DOD officials, as well as congressional projections, suggest that overall defense funding will be stable, at best, for the foreseeable future; (6) DOD's planned funding for the 17 aircraft programs in all but one year between FY 2000 and 2015 exceeds the long-term historical average percentage of the budget devoted to aircraft purchases and, for several of those years, approaches the percentages of the defense budget reached during the peak Cold War spending era of the early-to-mid-1980s; (7) the amount and availability of savings from infrastructure reductions and acquisition reform, two main claimed sources for increasing procurement funding, are not clearly evident today; (8) GAO's recent reviews of these initiatives indicate there are unlikely to be sufficient savings available to offset projected procurement increases; (9) to deal with a potential imbalance between procurement funding requirements and the available resources, DOD may need to: (a) reduce planned aircraft funding and procurement rates; (b) reduce funding for other 
procurement programs; (c) implement changes in force structure, operations, or other areas; or (d) increase total defense funding; (10) DOD has historically made long-term commitments to acquire weapon systems based on optimistic procurement profiles and then significantly altered those profiles because of insufficient funding; and (11) to avoid or minimize affordability problems, DOD needs to bring its aircraft investment strategy into line with more realistic, long-term projections of overall defense funding, as well as the amount of procurement funding expected to be available for aircraft purchases. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Under NCLBA, SES primarily includes tutoring provided outside of the regular school day that is designed to increase the academic achievement of economically disadvantaged students in low-performing Title I schools. These services must consist of high-quality, research-based instruction that aligns with state educational standards and district curriculum. Title I of ESEA, as amended and reauthorized by NCLBA, authorizes federal funds to help elementary and secondary schools establish and maintain programs that will improve the educational opportunities of economically disadvantaged children. Title I is the largest federal program supporting education in kindergarten through 12th grade, supplying $12.7 billion in federal funds in fiscal year 2006. According to Education, during the 2005-06 school year, nearly all U.S. school districts and approximately half of public schools received some Title I funding. In addition, the latest national data available from Education counted 16.5 million students as Title I participants in the 2002-2003 school year. Title I funds are distributed by formula to state education agencies, which retain a share for administration and school improvement activities before passing most of the funds on to school districts. Districts are required to distribute Title I funds first to schools with poverty rates over 75 percent, with any remaining funds distributed at their discretion to schools in rank order of poverty either districtwide or within grade spans. A school’s Title I status can change from year to year because school enrollment numbers and demographics may vary over time. Enactment of NCLBA strengthened accountability by requiring states and schools to improve the academic performance of their students so that all students are proficient in reading and math by 2014. Under NCLBA, each state creates its own content standards, academic achievement tests, and proficiency levels. In 2005-2006, states were required to test all children for reading and mathematics achievement annually in grades 3-8 and once in high school to determine whether schools are making adequate yearly progress (AYP). In addition to meeting the state’s performance goals by grade, subject, and overall student population, schools are responsible for meeting those goals for designated groups. These groups are students who (1) are economically disadvantaged, (2) are part of a racial or ethnic group that represents a significant proportion of a school’s student population, (3) have disabilities, or (4) have limited English proficiency. To make AYP, each school must also show that each of these groups met the state proficiency goals for both reading and math. In addition, schools must show that at least 95 percent of students in grades required to take the test have done so. Schools must also demonstrate that they have met state targets for at least one other academic indicator, including graduation rate in high schools and a state-selected measure in elementary or middle schools. For Title I schools that do not meet state AYP goals, NCLBA requires the implementation of specific interventions, and these interventions must continue until the school has met AYP for 2 consecutive years. Table 1 outlines the interventions applied after each year a Title I school misses state performance goals. At their discretion, states may also implement interventions for public schools that do not receive Title I funds and do not make AYP. 
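The AYP determination just described is, at bottom, a conjunction of simple tests. The sketch below is purely illustrative and not drawn from any state's actual system: the data structure, field names, and goal values are hypothetical, each state sets its own standards, and the 95-percent testing rule is simplified here to apply per reported group.

```python
# Illustrative sketch of the AYP decision rule described above.
# All names and thresholds are hypothetical; each state defines its own
# content standards, assessments, and proficiency goals.
from dataclasses import dataclass

@dataclass
class GroupResult:
    reading_proficient: float  # share proficient in reading (0-1)
    math_proficient: float     # share proficient in math (0-1)
    tested_share: float        # share of students tested (0-1)

def makes_ayp(groups: dict[str, GroupResult],
              reading_goal: float, math_goal: float,
              met_other_indicator: bool) -> bool:
    """A school makes AYP only if every reported group meets the state
    goals in both reading and math, at least 95 percent of students were
    tested, and the school met its other academic indicator (e.g.,
    graduation rate in high schools)."""
    for g in groups.values():
        if (g.reading_proficient < reading_goal
                or g.math_proficient < math_goal
                or g.tested_share < 0.95):
            return False
    return met_other_indicator

# One designated group missing the math goal is enough to miss AYP.
school = {
    "all_students": GroupResult(0.72, 0.68, 0.97),
    "econ_disadvantaged": GroupResult(0.61, 0.52, 0.96),
}
print(makes_ayp(school, reading_goal=0.60, math_goal=0.55,
                met_other_indicator=True))  # False: 0.52 < 0.55
```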
Although districts are not required to offer SES until a Title I school has missed performance goals for 3 years, because some schools had not met state goals set under ESEA before the enactment of NCLBA, some Title I schools were first required to offer SES in 2002-2003, the first year of NCLBA implementation. States are also required to establish and implement AYP standards for school districts based on the performance of all of the schools in the district. If districts fail to meet these standards for 2 consecutive years, states may classify districts as needing improvement. A district identified for improvement must develop and implement an improvement plan and remain in this status until it meets AYP standards for 2 consecutive years. If a district remains in improvement status for 2 or more years, it may be identified for corrective action as deemed necessary by the state. Students are eligible for SES if they attend Title I schools that have missed AYP for 3 consecutive years and are from low-income families. School districts must determine family income on the same basis they use to make allocations to schools under Title I, for which many have historically used National School Lunch Program (NSLP) data. The NSLP is a federally funded program that annually collects family income data from students’ parents to determine student eligibility for free and reduced-price lunch. A student’s state assessment scores, grades, and other academic achievement information are generally not considered when determining SES eligibility. However, if sufficient funds are not available to provide SES to all eligible children, school districts must give priority to the lowest-achieving eligible students. SES providers may include nonprofit entities, for-profit entities, school districts, public schools, public charter schools, private schools, public or private institutions of higher education, educational service agencies, and faith-based organizations. Under the Title I regulations that govern SES, a district identified as in need of improvement or corrective action may not be an SES provider, though its schools that are not identified as needing improvement may. In addition, individual teachers who work in a school or district identified as in need of improvement may be hired by any state-approved provider to serve as a tutor in its program. A district must set aside an amount equal to 20 percent of its Title I allocation to fund both SES and transportation for students who elect to attend other schools under school choice. This set-aside cannot be spent on administrative costs for these activities, and the district may reallocate any unused set-aside funds to other Title I activities after ensuring all eligible students have had adequate time to opt to transfer to another school or apply for SES. Funding available for SES is, therefore, somewhat dependent on costs for choice-related transportation, though as we found in our 2004 report on NCLBA’s school choice provisions, few students are participating in the school choice option. If a district does not incur any choice-related transportation costs, it must use the full 20 percent set-aside amount to pay for SES if sufficient demand for services exists. In addition, if the Title I set-aside is not sufficient to fund SES for interested students, both states and districts may direct other funds for these services at their discretion. 
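The eligibility and priority rules above reduce to a filter followed by a ranking: keep low-income students attending Title I schools in at least their third year of missing AYP, and when funds fall short, serve the lowest-achieving first. A minimal sketch, with hypothetical field names and a hypothetical achievement score standing in for whatever measure a district actually uses:

```python
# Hypothetical sketch of SES eligibility and priority; field names are
# illustrative, not from any actual district data system.
def select_ses_students(students, funded_slots):
    """Keep students who are low income and attend a Title I school in
    its third (or later) consecutive year of missing AYP; if funds
    cannot cover everyone, the lowest-achieving students come first."""
    eligible = [s for s in students
                if s["school_missed_ayp_years"] >= 3 and s["low_income"]]
    if len(eligible) <= funded_slots:
        return eligible
    eligible.sort(key=lambda s: s["achievement_score"])  # lower = needier
    return eligible[:funded_slots]

roster = [
    {"name": "A", "school_missed_ayp_years": 3, "low_income": True,  "achievement_score": 31},
    {"name": "B", "school_missed_ayp_years": 3, "low_income": True,  "achievement_score": 55},
    {"name": "C", "school_missed_ayp_years": 1, "low_income": True,  "achievement_score": 20},
    {"name": "D", "school_missed_ayp_years": 3, "low_income": False, "achievement_score": 25},
]
print([s["name"] for s in select_ses_students(roster, funded_slots=1)])  # ['A']
```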
For each student receiving SES, a district must spend an amount equal to its Title I per-pupil allocation or the actual cost of provider services, whichever is less. Education oversees SES implementation by monitoring states and providing technical assistance and support. The Office of Innovation and Improvement (OII) leads SES policy development and coordinates the publication of SES guidance, and the Office of Elementary and Secondary Education (OESE) oversees and monitors Title I, including SES. NCLBA and the Title I regulations and SES guidance outline the roles and responsibilities states, school districts, parents, and service providers have in ensuring that eligible students receive additional academic assistance through SES (see table 2). During the 2005-2006 school year, Education announced the implementation of two pilot programs intended to increase the number of eligible students receiving SES and generate additional information about the effectiveness of SES on students’ academic achievement. In the first, Education permitted four districts in Virginia to offer SES instead of school choice in schools that are in their first year of needs improvement. In the second, Education entered into flexibility agreements with the Boston and Chicago school districts, enabling them to act as SES providers while in improvement status. OII and OESE coordinated implementation of the pilots for the department. Both pilots were subject to review at the end of the 2005-2006 school year, at which time Education planned to evaluate their effect on student academic achievement. SES participation increased between 2003-2004 and 2004-2005, and most students receiving services were among the lower achieving students in school. Districts have taken multiple actions to encourage participation, such as offering services on or near the school campus or at various times. Despite these efforts, challenges to increasing participation remain, including notifying parents in a timely and effective manner, ensuring there are providers to serve certain areas and students, and encouraging student attendance. Nationally, the participation rate increased substantially from 12 percent of eligible students receiving SES in 2003-2004 to 19 percent in 2004-2005. In addition, the number of students receiving services almost quadrupled between 2002-2003 and 2004-2005 from approximately 117,000 to 430,000 students nationwide, based on the best available national data (see fig. 1). This increase may be due in part to the increase in the number of schools required to offer SES over that time period. Specifically, between 2004-2005 and 2005-2006, the number of schools required to offer SES increased from an estimated 4,509 to 6,584. Although nationally SES participation is increasing, some districts required to offer SES have no students receiving services. Specifically, we estimate that no students received services in about 20 percent of the approximately 1,000 districts required to offer SES in 2004-2005. A majority of these districts were rural or had a total enrollment of fewer than 2,500 students. Our survey did not provide sufficient information to explain why these districts had no students receiving services in 2004-2005; therefore, it is unclear whether their lack of participation was related to district SES implementation or other issues. Nationwide, we estimate that districts required to offer SES spent the equivalent of 5 percent of their total Title I funds for SES in 2004-2005, excluding administrative expenditures. 
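The funding rules in the preceding paragraphs come down to two pieces of arithmetic: a 20 percent set-aside shared with choice-related transportation, and a per-student payment equal to the lesser of the district's Title I per-pupil allocation and the provider's actual cost. A worked sketch with hypothetical dollar amounts:

```python
# Worked sketch of the SES funding arithmetic described above; all
# dollar figures are hypothetical.
def ses_budget(title1_allocation, transportation_costs):
    """Funds left for SES after choice-related transportation is paid
    from the 20 percent Title I set-aside."""
    return max(0.20 * title1_allocation - transportation_costs, 0.0)

def per_student_payment(per_pupil_allocation, provider_cost):
    """The district pays the lesser of the two amounts per student."""
    return min(per_pupil_allocation, provider_cost)

# A $10 million allocation with $200,000 in transportation costs leaves
# $1.8 million; at a $1,300 per-pupil allocation and a $1,500 provider
# charge, the district pays $1,300 per student -- about 1,384 students.
available = ses_budget(10_000_000, 200_000)
payment = per_student_payment(1_300, 1_500)
print(available, payment, int(available // payment))
```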
Districts set aside an amount equal to 20 percent of their Title I funds for SES and choice-related transportation at the beginning of the school year, and the proportion of the set-aside spent on SES varied by district. Specifically, in 2004-2005, about 40 percent of districts spent 20 percent or less of the set-aside to provide SES and almost one-fifth of districts spent over 80 percent. Nationwide, of the total amount districts set aside for SES, we estimate they spent 42 percent on SES, excluding administrative expenditures. Further, an estimated 16 percent of districts reported that the required Title I set-aside was not sufficient to fund SES for all eligible students whose parents requested services. For example, during our site visit to Newark, N.J., district officials reported budgeting the entire 20 percent Title I set-aside to fund SES in 2004-2005, but with this amount of funding, the district was only able to fund SES for 17 percent of the students eligible for services. In addition, according to Chicago, Ill., district officials, the district budgeted the entire 20 percent Title I set-aside to fund SES in 2005-2006, and because parents’ demand for services significantly exceeded the amount of funding available, the district also allocated $5 million in local funds to provide SES. While approximately 1,000 of the over 14,000 districts nationwide were required to offer SES in 2004-2005, SES recipients are concentrated in a small group of large districts, as 56 percent of recipients attended school in the 21 districts required to offer SES with more than 100,000 total enrolled students (see fig. 2). Further, states ranged from having 0 districts to 257 districts required to offer SES in 2004-2005, with most states having fewer than 10 districts required to offer SES. State differences in the number of districts required to offer SES may have resulted from differences in performance or differences in state proficiency standards and methods used to measure adequate yearly progress. Students receiving SES in 2004-2005 shared certain characteristics, as districts reported that most students receiving services were among the lower achieving students in school. Specifically, an estimated 91 percent of the districts that reviewed the academic records of students receiving SES classified most or all of the students receiving SES as academically low achieving. For example, Hamilton County, Tenn., school officials said that students receiving SES are frequently behind grade level in their skills and require special attention to increase their academic achievement. Further, we estimate that over half of SES recipients were elementary school students in the majority of districts and about 60 percent of schools required to offer SES in 2004-2005 were elementary schools. Districts varied in the percentage of students with limited English proficiency receiving services. In about one-third of districts, less than 5 percent of SES recipients were students with limited English proficiency; however, in about one-fifth of districts, over half of SES recipients were students with limited English proficiency. Students with disabilities made up less than 20 percent of students receiving services in about two-thirds of districts. Finally, in some districts, the majority of SES recipients were African-American or Hispanic. In about 40 percent of districts, over half of SES recipients were African-American, and in about 30 percent of districts, over half of SES recipients were Hispanic. 
Because we were unable to obtain comparable data on the characteristics of Title I students enrolled in these districts in 2004-2005, we were unable to determine whether certain groups of students were underserved. We estimate that about 2,800 providers delivered services to students nationwide in 2004-2005, and more providers were available to deliver services in the districts with the largest student enrollments. Specifically, about 80 percent of districts had between 1 and 5 providers delivering services in 2004-2005. However, the number of providers delivering services in the 21 districts with more than 100,000 total enrolled students ranged from 4 to 45, and averaged 15 providers per district in 2004-2005. Districts have taken multiple actions to encourage participation, as shown in table 3. In line with the federal statutory requirement that districts notify parents in an understandable format of the availability of SES, over 90 percent of districts provided written information in English, held individual meetings with parents, and encouraged school staff to talk with parents about SES. Some districts collaborated with providers to notify parents. For example, during our site visit, Illinois state officials described a provider and district sharing administrative resources to increase participation, which involved the provider printing promotional materials and the district addressing and mailing the materials to parents. In addition, we estimate that over 70 percent of districts lengthened the period of time for parents to turn in SES applications, held informational events for parents to learn about providers, and provided written information to parents in languages other than English. During our site visit to Woodburn, Ore., district officials reported extending the time parents had to sign up their children for SES and hosting an event where providers presented their programs to parents in English and Spanish. Further, Newark, N.J., district officials told us during our site visit that the district provided transportation for parents to attend informational events to increase participation. Also to encourage participation, an estimated 90 percent of districts offered services at locations easily accessible to students, such as on or near the school campus, and almost 80 percent of districts offered services at a variety of times, such as before and after school or on weekends. For example, Hamilton County, Tenn., worked with providers to offer an early morning tutoring program located at the school site in addition to providing services after school. Providers also reported delivering SES on school campuses and at various times. Specifically, over three-fourths of the 22 providers we interviewed reported delivering services at the school site, although providers also offered services off-site, such as in the home, online, or at the provider’s facility. In addition, providers generally delivered SES after school and some also offered SES at alternative times, such as before school, on weekends, or during the summer. Finally, about one-third of districts provided or arranged for transportation for participating students or worked with a local community partner to raise awareness of the services. For example, in Newark, N.J., the district worked with a local community organization to inform parents and students living in public housing and homeless shelters about SES. States also reported taking actions to increase participation in 2005-2006, as shown in table 4. 
Regarding parent notification, all states encouraged district staff to communicate with parents about SES. In addition, almost 90 percent of states provided guidance to districts on the use of school campuses for service delivery to encourage participation. Despite some districts’ promising approaches to encourage participation, notifying parents in a timely manner remains a challenge for some districts. An estimated 58 percent of districts did not notify parents that their children may be eligible to receive SES before the beginning of the 2005-2006 school year, which may be due in part to delays in states reporting which schools were identified for improvement. Specifically, about half of districts that did not notify parents before the beginning of the 2005-2006 school year did not receive notification from the state of the schools identified for improvement by that time. Moreover, district officials in three of the states we visited experienced delays in receiving school improvement information, and state officials agreed that providing timely information about whether schools have met state performance goals has been a challenge. Almost all of the districts that did not notify parents before the beginning of the 2005-2006 school year did so within the first 2 months of the year. Effectively notifying parents is also a challenge for some districts. For example, officials in all four districts we visited reported difficulties contacting parents to inform them about SES in part because some families frequently move and do not always update their mailing address with districts. In addition, some providers we interviewed indicated that confusing parental notification letters do not effectively encourage SES participation. For example, some of the providers we interviewed said some districts use confusing and poorly written letters to inform parents of SES or send letters to parents of eligible children but conduct no further outreach to encourage participation in SES. Four of the providers we interviewed also indicated that complicated district enrollment processes can discourage participation. For example, one provider said certain districts send parents multiple documents to complete in order for their child to receive SES, such as an enrollment form to select an SES provider and a separate contract and learning plan. Another challenge to increasing SES participation is attracting more SES providers for certain areas. Some rural districts surveyed indicated that no students received services last year because of a lack of providers in the area. We estimate that the availability of transportation for students attending supplemental services was a moderate, great or very great challenge for about half of rural districts. For example, one rural district commented in our survey that there are no approved providers within 200 miles of its schools. A few other rural districts commented in our survey that it was difficult to attract providers to their area because there were few students to serve or providers had trouble finding staff to serve as tutors. In addition, ensuring there are providers to serve students with limited English proficiency or disabilities has been a challenge for some districts. There were not enough providers to meet the needs of students with limited English proficiency in an estimated one-third of districts, and not enough providers to meet the needs of students with disabilities in an estimated one-quarter of districts. 
Many states also indicated that the number of providers available to serve these groups of students was inadequate. While over half of the providers we interviewed reported serving students with limited English proficiency or disabilities, some providers served these students on a limited basis and reported difficulties meeting their needs. For example, one provider reported serving few students with limited English proficiency and disabilities because the amount of funding the provider receives for SES was not sufficient to pay for specialized tutors to serve these students. Another provider said it was difficult to find tutors to meet the needs of students with limited English proficiency and its program was not designed for students with disabilities. Another provider said that it was difficult to serve students with disabilities because it required significantly modifying the tutoring lessons to meet their needs. Encouraging student attendance has also been a challenge, in part because students may participate in other afterschool activities, such as sports or work. Low parent and student demand for SES has been a challenge in about two-thirds of districts. For example, about one-quarter of districts reported that both competition from other afterschool programs and the availability of services that are engaging to students were challenges to implementing SES. In addition, providers, district and school officials in all four districts we visited told us that SES is competing for students with extracurricular and other activities. For example, a Chicago, Ill., high school official indicated that student attendance at SES sessions declined significantly as the school year progressed. To address this problem, providers sometimes offer students incentives for participation. For example, while 2 of the 22 providers we interviewed offered incentives for students to sign up for services, 19 providers used incentives to encourage student attendance, such as school supplies and gift certificates. To promote improved student academic achievement, providers aligned their curriculum with district instruction primarily by hiring district teachers and communicating with the teachers of participating students. Providers reported communicating with teachers and parents in person as well as mailing information and progress reports to them; however, districts indicated the extent of provider efforts varied, as some did not contact teachers and parents in 2004-2005. In addition, both providers and districts experienced contracting and coordination difficulties. In part because SES is often delivered in school facilities, providers and officials in the districts and schools we visited reported that involvement of school administrators and teachers can improve SES delivery and coordination. In order to increase student academic achievement, providers took steps to align their curriculum with school instruction by hiring and communicating with teachers, though the extent of their efforts varied. A majority of the 22 providers we interviewed hired certified teachers in the district as tutors. Some providers said hiring district teachers promoted curriculum alignment, in part because district teachers were apt to draw on district curriculum during tutoring sessions. School officials in three of our site visits also said providers’ use of district teachers as tutors helped to align the tutoring program with what the student learned in the classroom. 
In addition, some providers reported aligning curriculum by communicating with the teachers of participating students to identify student needs and discuss progress. The frequency of contact between tutors and teachers ranged from mailing teachers information once prior to the beginning of the program to contacting teachers at least weekly, according to the providers we interviewed. A few providers also used other methods to align their curriculum with district instruction, such as using the same tests to evaluate student progress and allowing principals to choose components of the tutoring curriculum for students receiving SES in their schools. However, not all providers worked with schools to align curriculum, as we estimate that some, most, or all providers did not contact teachers to align curriculum with school instruction in almost 40 percent of districts in 2004-2005. For example, Woodburn, Ore., district and school officials indicated during our site visit that instead of aligning their services with the district curriculum, certain providers openly questioned the district’s curriculum and teaching methods, which caused confusion among some parents and students. Providers reported mailing information as well as meeting with parents over the phone and in person to communicate about student needs and progress; however, the frequency of communication with parents varied by provider. A majority of the providers we interviewed communicated with parents about student progress repeatedly, sometimes by sending home progress reports on a monthly basis or holding parent-tutor conferences. The frequency of contact between tutors and parents reported by the 22 providers we interviewed ranged from meeting with parents twice during the tutoring program to giving parents a weekly progress report. A few providers also reported communicating with parents by holding workshops for parents to learn about the SES program and in some cases having the parents sign their students’ learning plans. For example, one provider conducted workshops where parents received reading materials to share with their children and a parent guide in English and Spanish that explained the program and how to use the materials to enhance student learning. Some providers also reported hiring staff dedicated in part to maintaining communication with parents. However, some providers faced difficulties when communicating with parents about SES, such as language barriers or incorrect contact information. Districts confirmed that the degree to which providers communicated with parents varied, as we estimate that some, most, or all providers did not contact parents to discuss student needs and progress in about 30 percent of districts in 2004-2005. Despite these challenges, most districts had positive relationships with providers. Specifically, an estimated 90 percent of districts indicated that their working relationships with providers during 2004-2005 were good, very good, or excellent. In addition, many of the providers we interviewed during our site visits also reported having positive working relationships with district officials. Although other studies have found that districts reported certain difficulties working with providers, relatively few districts reported that their providers signed up ineligible students or billed for services not performed in 2004-2005, as shown in figure 3. Generally, states did not hear about these provider issues very often. 
Almost half of states said the issue of providers not showing up for SES sessions was rarely brought to their attention. Similarly, half of states said the issue of providers billing the district for services not performed was rarely brought to their attention. In addition, about 40 percent of states said the issue of providers using excessive incentives was rarely brought to their attention. Further, about 40 percent of states said the issue of providers signing up ineligible students rarely arose. Almost one-third of states heard about each of these issues sometimes, while few states had these issues brought to their attention very often. For example, during our site visits, state officials provided examples of issues that had been brought to their attention regarding provider practices, but these issues were often isolated incidents particular to one or a few providers in certain districts. While providers have taken steps to deliver quality services, both providers and districts reported experiencing difficulties during the contracting process. For example, some of the providers we interviewed said certain districts imposed burdensome contract requirements, such as requiring substantial documentation to be submitted with invoices, limiting the marketing they could do to parents and students, or restricting the use of school facilities to deliver services. Specifically, 7 of the 22 providers we interviewed experienced difficulties with districts restricting provider access to school facilities, by, for example, not allowing providers to deliver services in school buildings or by charging providers substantial fees to do so. A few providers also said contracting with districts was a resource-intensive process, in part because contract requirements vary by district and state. Some of the multi-state providers we interviewed reported dedicating a team of staff to help them finalize and manage contracts with districts. These providers commented that, while they have the administrative capacity to manage this process, smaller providers may not have such capacity. In addition, one provider that delivered services exclusively online commented that contracting with districts across the country was a challenge, particularly because some states and districts require provider representatives to attend meetings in person and be fingerprinted in their states. Contracting with providers was also a challenge for some districts. Specifically, negotiating contracts with providers was a moderate, great, or very great challenge in about 40 percent of districts nationwide. For example, Woodburn, Ore., district officials described having contractual discussions with providers around whether the district would charge fees for the use of school facilities, the types of incentives providers could use to encourage students to sign up, and whether the district would pay for services when students did not attend SES sessions. While states may review and define program design parameters as part of the provider approval process, district officials in three of our site visits expressed concern about their lack of authority to set parameters in provider contracts around costs and program design, such as tutor-to-student ratios and total hours of instruction. 
For example, during our site visit, a Hamilton County, Tenn., district official expressed frustration with providers that charged the maximum per-pupil amount but varied in the level of services provided, such as the number of instructional hours and tutor-to-student ratio. Chicago, Ill., district officials also expressed concern about the variation among providers in the hours of instruction and cost of services because the district does not have sufficient funds to serve all eligible students and officials would like to maximize the number of students they can serve. In part to help address district concerns, in 2005-2006, Illinois required approved providers to submit information on the cost of providing services in each of the districts they served and made this information available to districts and the public in order to improve transparency and accountability. While Tennessee state officials told us they were hesitant to set restrictions on providers and would like more federal direction on this issue, other states have set restrictions on the cost and design of SES programs. For example, Georgia set a maximum tutor-to-student ratio of 1:8 for non-computer-based instruction and 1:10 for computer-based instruction, and New Mexico adopted a sliding fee scale based on the educational level of tutors. Coordination of service delivery has also been a challenge for providers, districts, and schools. About 70 percent of states reported that the level of coordination between providers, districts, and schools implementing SES was a moderate to very great challenge. Sometimes these coordination difficulties have resulted in service delays. For example, services were delayed or withdrawn in three of the districts we visited because not enough students signed up to meet the providers’ enrollment targets and districts were not aware of these targets. In one district we visited, services were delayed because school teachers hired to be tutors did not provide evidence of their background checks and teaching certificates to providers in a timely manner. Coordination difficulties also occurred during the enrollment process. Though districts are responsible for arranging SES for eligible students, in two districts we visited, both the district and providers sent parents enrollment forms, which caused confusion among parents as well as additional work for the district staff processing the forms. In addition, a few providers told us they do not know how many students they will serve until enrollment forms are returned to district officials, which hinders planning and may delay services since they do not know how many tutors they will need to hire and train to deliver SES in each district. In part because SES can be delivered in school facilities, providers and officials in the districts and schools we visited reported that involvement of school administrators and other staff improves SES implementation. Although schools do not have federally defined responsibilities for administering SES, many officials said SES implementation is hindered when school officials are not involved. Some providers we interviewed said that a lack of involvement of school principals can make it difficult for them to coordinate with schools to encourage student participation. In addition, a few providers said certain districts contributed to this problem by restricting communication with school officials or not defining a role for schools in SES implementation. 
Officials in one of the districts we visited also told us that encouraging participation and administering the program was more difficult when they did not designate school staff to assist the district with SES implementation. School officials from all four of our site visits also said the lack of a clear role for school officials, including principals, in administering SES has been a challenge. For example, Illinois and Oregon school principals told us they found it difficult to manage afterschool activities because they did not have sufficient authority to oversee SES tutors operating in their buildings at that time. Further, problems can arise when school officials are not fully informed about the SES program. For example, Woodburn, Ore., school officials told us that although the school was not provided SES tutoring schedules for students, parents and students have come to school officials for help when they were unclear about the schedule or when tutors failed to show up. A majority of the providers we interviewed told us that involvement of school principals can improve participation and make it easier to deliver services, in part because principals are familiar with the students and manage school staff. For example, certain providers reported providing principals with information about the tutoring program so that school staff can assist with the enrollment process, involving principals in selecting the curriculum used in their schools, and sending principals student progress reports. In addition, all four districts we visited had school site coordinators to assist with SES, such as helping with the enrollment process and assisting with the day-to-day administration of the SES program in the schools. For example, Woodburn, Ore., district officials told us implementation improved when the district designated school site coordinators to assist with parental notification and events where providers present their programs, and meet with parents and providers to help them complete individual student learning plans. A few providers we interviewed also assigned a staff person at the school site to communicate with teachers and parents. While helping to administer the SES program places an additional administrative burden on schools, school officials in all four of the districts we visited said they welcomed a stronger or more clearly defined role. While state monitoring of SES had been limited, more states reported taking steps to monitor both district and provider efforts to implement SES in 2005-2006. In addition, districts have taken a direct role in monitoring providers, and their monitoring activities similarly increased during this time. Although states are required to withdraw approval from providers that fail to increase student academic achievement for 2 years, many states reported challenges evaluating SES providers. In addition, the few states that have completed an evaluation have not yet produced reports that provided a conclusive assessment of SES providers’ effect on student academic achievement. State monitoring of district SES implementation, which is sometimes performed as part of state Title I monitoring, had been limited prior to 2005-2006, though more states reported conducting on-site reviews of districts in that year. While less than one-third of states conducted on-site reviews of districts to monitor their implementation of SES in 2004-2005, almost three-fourths reported conducting such reviews in 2005-2006. 
This increase reflects both those states that had already begun monitoring district SES implementation for 2005-2006 at the time of our survey and those states planning to conduct monitoring activities before the end of that school year. Because our data were collected during the middle of the 2005-2006 school year, we do not know whether all of the states that planned to complete these activities before the end of the year did so. In both years, a majority of the states that conducted on-site reviews visited few or some of their districts. Therefore, while more states reported conducting such reviews in 2005-2006 than in 2004-2005, the number of districts per state receiving reviews does not appear to have increased. In addition to on-site reviews, many states also reported reviewing information collected from several other sources to assess district SES implementation in 2005-2006. The most common source used by states was district officials, as almost all states reported reviewing or planning to review information collected from district officials to monitor their implementation of SES in 2005-2006. Further, many states were also collecting information from school principals, parents, and providers to monitor districts, with between 67 and 81 percent of states reviewing or planning to review information collected from these sources in 2005-2006. States also reported reviewing or planning to review information related to several aspects of district SES implementation in 2005-2006. For example, almost all states reported reviewing district notification of parents and SES expenditures, as shown in figure 4. States we visited reported that some districts have had difficulties implementing SES, in part because of district staff capacity limitations and the complexities of administering SES at the local level. When states find that a district is having difficulty implementing SES, most hold a meeting with the district and provide or arrange for assistance, including consultations or training. Half of the states also develop an action plan and time line with the district to help improve its efforts. During our site visits, state officials reported that notifying parents, maintaining a fair and competitive environment for providers, ensuring providers understand their SES roles and responsibilities, and determining an appropriate role for schools continue to challenge some districts as they implement SES. Although states and districts reported increasing their efforts to monitor SES providers between 2004-2005 and 2005-2006, over two-thirds of states reported that on-site monitoring of providers has been a challenge. In addition, several districts commented in our survey that more provider monitoring is needed. During all four of our site visits, state and district officials expressed concerns about their capacity to fully administer and oversee all required aspects of SES implementation, including provider monitoring. Officials explained that state and district capacity to implement SES is limited, because there is typically one staff person at each level coordinating all of SES, and sometimes that person may also oversee implementation of additional federal education programs. Further, several states commented in our survey that additional training, technical assistance, and national monitoring protocols from the federal government would assist their efforts to monitor providers. 
During our site visits, state officials noted that while they did not initially have structured plans or procedures in place to monitor SES providers, they took steps to develop and formalize procedures starting with the 2004-2005 and 2005-2006 school years. Nationally, in 2004-2005, states monitored providers primarily by collecting data from district officials, though many states reported using a more active monitoring approach in the next year. For example, while less than one-third of states conducted on-site reviews of providers in 2004-2005, over three-fourths had conducted or planned to conduct such reviews in 2005-2006, as shown in figure 5. In addition, while one-third or fewer states reviewed information collected from school staff, parents, and students in 2004-2005, the percentage that reported reviewing or planning to review information collected from these sources more than doubled the next year. Similar to 2004-2005, many states continued to rely on information collected from district officials to monitor providers in 2005-2006, with almost all states reviewing or planning to review information collected from districts in that year. Federal guidance suggests states may request district assistance in collecting data from providers to assist state monitoring activities. While the state is ultimately responsible for monitoring providers, most states reported that districts have taken a direct role in monitoring providers. Similar to states, although district monitoring of providers was limited in 2004-2005, districts used a more extensive and active approach in the next year, as shown in figure 6. For example, while we estimate that less than half of districts collected information from on-site reviews, school staff, parents, and students to monitor providers in 2004-2005, 70 percent or more were collecting or planning to collect information from these sources in 2005-2006. Although states and districts collected information from similar sources to monitor providers, districts collected information from more providers than states. Specifically, while a majority of the states that conducted on-site reviews observed only some or few providers, we estimate that a majority of districts that conducted on-site reviews observed most or all of their providers in 2004-2005. While states and districts may both have capacity limitations that affect their ability to conduct on-site reviews to monitor providers, conducting such reviews is likely easier for districts because services are often delivered on or near school campuses. States and districts collected information on several aspects of SES programs to monitor providers, as shown in table 5. While federal regulations provide states flexibility to design their own SES monitoring systems, two-thirds or more of states and districts monitored or planned to monitor all program elements listed, including those related to service delivery and use of funds. For example, 94 percent of states monitored or planned to monitor parent or student satisfaction with providers, and 93 percent of districts monitored or planned to monitor billing and payment for services and student attendance records. Many states struggle to develop evaluations to determine whether SES providers are improving student achievement, though states are required to evaluate and withdraw approval from providers that fail to do so after 2 years. 
Specifically, federal law requires states to develop standards and techniques to evaluate the services delivered by approved providers, but it does not require states to use specific evaluation methods or criteria for determining provider effectiveness. Through our survey, states reported several challenges to evaluating SES providers. Specifically, over three-fourths of states reported that determining sufficient academic progress of students, having the time and knowledge to analyze SES data, and developing data systems to track SES information have been challenges. Further, during our site visits to Illinois and New Jersey, state officials noted they were currently in the process of improving their data collection systems to more effectively capture and analyze data for SES evaluations. In addition, several state officials reported that while they have collected some information to assess provider effectiveness, they have done little with that data. Others noted that they have not received sufficient federal guidance on effective models for SES provider evaluations, and because developing and implementing such evaluations can be both time-consuming and costly, additional assistance from Education would improve their efforts. At the time of our survey in early 2006, only a few states had drafted or completed an evaluation report addressing individual SES providers’ effects on student academic achievement, and we found that no state had produced a report that provided a conclusive assessment of this effect. New Mexico and Tennessee were the only two states that had completed final or draft SES evaluation reports that attempted to assess the impact of all SES providers serving students in their states in previous years. To measure student academic achievement, New Mexico’s report analyzed students’ grades as well as their scores on state assessments and provider assessments, which often differ by provider and are administered both before SES sessions begin and at the end of SES sessions each year. However, the report noted that these assessments produced different results related to student academic achievement gains. While Tennessee also planned to review students’ state assessment scores, the draft available at the time of our analysis did not include this information. In addition, both reports drew on information obtained through other sources, such as teacher surveys, to assess provider effectiveness. Due to their limitations, neither evaluation provided a conclusive assessment of SES providers’ effect on student academic achievement. In addition, at the time of our survey, over half of the states reported that they were in the process of conducting an evaluation of SES providers in order to meet the federal requirement of assessing each provider’s effect on student academic achievement. Similar to the state evaluations already undertaken, officials reported using different methods and criteria to evaluate SES providers. For example, some states were collecting each provider’s pre- and post-SES assessments of students while others were collecting student achievement data from districts or students’ state assessment scores. Further, while one state defined adequate student progress as 80 percent of a provider’s students making one grade level of improvement after a year of SES, another state defined adequate student progress as 50 percent or more of a provider’s students having any positive academic achievement gain after a year of SES. 
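Those two state criteria show how differently “adequate student progress” can be operationalized. Expressed as predicates over hypothetical per-student gains (measured in grade levels after a year of SES), the same provider can pass one state’s test and fail the other’s; the thresholds below mirror the two examples in the text, while actual state methods are more involved:

```python
# Two state-defined adequacy criteria from the text, applied to
# hypothetical per-student gains (in grade levels after a year of SES).
def meets_criterion_a(gains):
    """At least 80 percent of students gained a full grade level."""
    return sum(g >= 1.0 for g in gains) / len(gains) >= 0.80

def meets_criterion_b(gains):
    """At least 50 percent of students showed any positive gain."""
    return sum(g > 0.0 for g in gains) / len(gains) >= 0.50

provider_gains = [1.2, 0.4, 0.0, 0.9, 1.1, 0.3]
print(meets_criterion_a(provider_gains))  # False: only 2 of 6 gained >= 1.0
print(meets_criterion_b(provider_gains))  # True: 5 of 6 had positive gains
```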
While these states have yet to produce final results from their SES provider evaluations, it is unclear whether any of these efforts will produce a conclusive assessment of SES providers’ effect on student academic achievement. Likely because states are struggling to complete evaluations to assess SES providers’ effect on student academic achievement, states did not report that they have withdrawn approval from providers because their programs were determined to be ineffective at achieving this goal. Rather, though over 40 percent of states reported that they had withdrawn approval from some providers, they most frequently reported withdrawing provider approval because the provider was a school or district that had entered needs improvement status, the provider asked to be removed from the state-approved provider list, or because of provider financial impropriety. Several offices within Education monitor various aspects of SES activity across the country and provide support, but states and districts reported needing additional assistance and flexibility with program implementation. Education conducts SES monitoring in part through policy oversight and compliance reviews of states and districts, and provides SES support through guidance, grants, research, and technical assistance. However, many states and districts reported needing additional assistance and guidance regarding evaluation and administration of SES. Further, some states and districts voiced interest in expansion of Education’s pilot programs that increase SES flexibility, including the pilot that allows certain low-achieving districts to serve as SES providers. OII and OESE are primarily responsible for monitoring and supporting state and district SES implementation, and other Education offices also contribute to these efforts (see fig. 7). OII, which leads SES policy development and provides strategic direction, monitors SES policy issues primarily through what it calls “desk monitoring.” This monitoring is performed at its federal office and includes the review of SES-related research and media reports. OII also conducts more intensive monitoring of specific SES implementation challenges when states, districts, and providers bring them to Education’s attention and keeps a log documenting these issues. For example, during 2004-2005, Illinois and New Jersey officials contacted OII to clarify guidance regarding providers affiliated with districts in need of improvement, and OII staff provided assistance and clarification. In addition, several providers we interviewed also mentioned that they have contacted OII directly to discuss implementation challenges associated with enrollment, district contracts, and provider access to school facilities. OESE, which oversees and supports NCLBA implementation, is also involved in monitoring SES implementation through its overall monitoring of state compliance with Title I and NCLBA. To monitor Title I, OESE staff visit state departments of education and selected districts within each state to interview officials and review relevant documents. Once the review is complete, OESE issues a report to the state outlining any instances of Title I non-compliance, including those related to SES, and actions needed to comply with regulations. As of June 2006, OESE had visited and issued reports to over three-fourths of the states. 
In addition to its Title I monitoring, OESE also oversees the collection of state NCLBA data, including data on SES, through the annual Consolidated State Performance Report (CSPR). For the CSPR, each state is required to report the number of schools with students receiving SES, the number of students eligible for services, and the number that received services. However, although almost all states reported that they are collecting information on district SES expenditures as part of their oversight, Education does not require states to submit information on the percent or amount of Title I funds districts spent for SES through the CSPR or other means. Therefore, while Education tracks the extent to which states are providing SES to eligible students, the department does not collect data on the relative costs of serving them. Further, under NCLBA, Education is required to present an annual summary of the CSPR data to Congress. As of June 2006, the most recent report to Congress was for the 2002-2003 school year, though Education officials indicated they expect to submit reports for 2003-2004 and 2004-2005 in the near future. While OII and OESE monitoring of SES has been either limited to desk monitoring or combined with general Title I monitoring, the Office of Inspector General (OIG) has conducted audits specifically focused on SES. During 2003-2004 and 2004-2005, the office performed a series of SES implementation audits in six states, which involved site visits to state offices and selected districts within each state. Also during 2004-2005, OIG performed audits of five California districts and one SES provider within each district. These audits included an examination of district SES contracts with providers, provider services, and provider compliance with state SES regulations. Several Education officials reported coordinating internally to share information, including SES monitoring results. To facilitate coordination, OII leads an internal group composed of staff members from other Education offices, who meet bi-weekly to exchange information. OESE shares its state Title I monitoring results and CSPR data with other Education offices. In addition, OIG makes recommendations to both OII and OESE in its state and district SES auditing reports and disseminates the reports throughout Education and on the department’s Web site. Since 2002, OII has coordinated the publication of four versions of non-regulatory SES guidance, each updated to address ongoing challenges with SES implementation. The latest and most comprehensive version of non-regulatory SES guidance was published in June 2005. In May 2006, Education issued a separate supplement to the guidance containing additional information on private school participation in providing SES and a policy letter clarifying the definition of a district-affiliated provider. In addition to its monitoring efforts, OII also provides SES implementation assistance, in part through presentations at conferences, and through grants to external organizations that assist states and districts. For example, OII staff have presented information on SES policy and promising practices at national meetings attended by SES coordinators and others involved in SES implementation. In addition, the office has issued grants to the Black Alliance for Educational Options, the Hispanic Council for Reform and Educational Options, and through the Star Schools Program to promote SES to minority students and those in rural areas. 
Further, OII funded the Supplemental Educational Services Quality Center (SESQC), which offered SES technical assistance through tool-kits, issue briefs, and a Web site containing SES information for state and district officials, schools, parents, and providers. SESQC also periodically convened representatives of organizations working on education issues to discuss SES national coordination, challenges, and promising practices. However, those meetings and all SESQC activities were discontinued in December 2005 when SESQC's grant period ended.

Other Education offices also provide SES support through various means. For example, OESE funded the Comprehensive Centers Program through grants that established technical assistance centers across the country to help low-performing schools and districts close achievement gaps and meet the goals of NCLBA. Of these, the Center on Innovation and Improvement provides support to regional centers that assist states with Education's programs, including SES. In addition, Education's Policy and Program Studies Service, within the Office of Planning, Evaluation and Policy Development, oversees several research studies that examine SES, either in whole or in part. These reports, such as the National Assessment of Title I: Interim Report and Case Studies of Supplemental Services under the No Child Left Behind Act, provide states and districts with information on SES implementation, challenges, and promising practices. Further, Education's Center for Faith-Based and Community Initiatives offers technical assistance to faith- and community-based organizations interested in becoming state-approved SES providers.

Given the technical assistance and support Education has already provided to states and districts for implementation of SES and school choice, and the department's view that implementation of these provisions has been uneven throughout the country, in May 2006, Education issued a policy letter announcing the department's plans to take significant enforcement action. Specifically, Education plans to use the data collected through its monitoring and evaluation efforts to take enforcement actions such as placing conditions on state Title I grants, withholding federal funds, or entering into compliance agreements. In the letter, the department noted that its various monitoring activities have identified several areas of noncompliance with SES requirements. For example, the OIG's audits found that each of the six states reviewed failed to adequately monitor their districts for compliance. Consequently, nearly all of the parental notification letters reviewed failed to include the required key components, and several districts failed to budget sufficient funding for services. Through our own analysis of Education's monitoring reports, we also found that some of the states reviewed were found to have inadequate or incomplete processes for monitoring and evaluating SES providers.

Despite Education's efforts, many states and districts reported needing more information and assistance to better comply with certain aspects of SES implementation, including SES evaluation (see table 6). Specifically, 85 percent of states and an estimated 70 percent of districts needed additional assistance with methods for evaluating SES, and over 60 percent also needed assistance with developing data systems. Many districts also needed more information on provider quality and effectiveness.
Although OESE and OIG monitoring results have also continually indicated that states and districts struggle with SES evaluation, Education has yet to provide comprehensive assistance in this area, and during our site visits, officials mentioned that they have been relying on other states, organizations, or individuals for evaluation assistance.

States and districts also indicated a need for more support and technical assistance to help them administer SES. Specifically, approximately three-fourths of states and two-thirds of districts reported needing funding to increase their capacity to implement SES. Many states also reported needing tool kits and model/sample documents, as well as training from Education, and a majority of districts needed effective parent outreach strategies. Further, most states reported a need for conferences or meetings where they could share lessons learned and promising practices with other states. A few Tennessee officials mentioned that conferences hosted by national organizations have been an effective means of allowing SES officials to gather and share knowledge.

While three-fourths of states reported that the most recent version of Education's SES guidance, published in June 2005, has been very or extremely useful, several states commented through our survey that they needed additional or clearer guidance on certain SES provisions. This included guidance on managing SES costs and fees, implementing SES in rural areas, and handling provider complaints. During three of our site visits, officials also expressed some concern about the lack of clarity in the SES guidance with regard to student eligibility requirements and how to craft a parental SES notification letter that is both complete and easy for parents to understand.

Regarding parental notification letters, though both OESE and OIG found many states and districts to be non-compliant with the federal requirement that district SES parental notification letters include several specific elements, Education's SES guidance, which is coordinated by OII, provides a sample that does not clearly specify all of the key elements required by SES law and regulations. For example, the sample letter does not include information on provider services, qualifications, and effectiveness. Furthermore, a few state and district officials commented that, when followed, the Title I regulations governing SES yield a letter that is unreasonably long and complex, which may be difficult for parents to understand.

Many states and districts expressed interest in the flexibility offered through two pilot programs that Education implemented during 2005-2006. The department designed these pilots to increase the number of eligible students receiving SES and to generate additional information about the effect of SES on student academic achievement. For example, several state and district SES coordinators expressed interest in Education's pilot program that allowed two districts in needs improvement status to act as SES providers in exchange for their expansion of student access to SES providers and collection of achievement data to determine SES program effectiveness. During three of our four site visits, state and district officials expressed concern that districts identified for needs improvement are excluded from delivering SES, and one state official noted that removing districts from the state-approved provider list may result in lower SES participation.
Further, in our surveys, three state SES coordinators and 17 district SES coordinators wrote in comments that permitting districts in needs improvement status to provide services would assist their efforts. Through both our surveys and site visits, officials suggested that allowing districts to act as providers may ease student access to SES for rural districts that do not have providers located nearby, allow more students to participate in SES because district costs to provide services are sometimes lower than other providers' costs, and enable districts to continue their existing tutoring programs that they feel are effective and meet the same goals as SES. Overall, we estimate that 15 percent of districts were state-approved providers in 2004-2005. However, another national survey recently found the percentage of urban and suburban districts that are state-approved SES providers is declining.

The other SES pilot allowed four districts in Virginia to offer SES instead of school choice in schools that have missed state performance goals for 2 years and are in their first year of needs improvement. During our site visits and through our surveys, many states and districts expressed interest in adjusting the order of the SES and school choice interventions. Specifically, half of states and over 60 percent of districts suggested that SES should be made available before school choice (see table 7). Further, approximately three-quarters of both states and districts indicated that SES should be offered either before or simultaneously with choice. As we found through our previous work on school choice, few students are opting to transfer schools in the first year of needs improvement, and therefore this change would provide students with another option to receive additional academic support in that year. Further, in a recent national study, district and school officials noted that parents and students are often not interested in changing schools, in part because of potential long commutes and satisfaction with their current schools, which suggests that parents and students would likely prefer to receive SES in their own schools and neighborhoods rather than school choice. In line with interest in increased flexibility with these interventions, in May 2006, Education announced that due to the positive results in Virginia districts under the pilot, the department plans to extend and expand this pilot in 2006-2007.

Over the last few years, almost all states and approximately 1,000 districts have been required to implement SES across the country and, if current trends continue, more schools will be required to offer services in the future. Although some states and districts are beginning to gain experience in implementing SES and use promising approaches to increase SES participation, many students are still not receiving services, in part because providers are sometimes not available to serve certain areas and groups. In addition, some districts are unsure how to involve school officials in facilitating local coordination of SES implementation and providing effective parental notification. While Education has provided support to states and districts through guidance and technical assistance, many report needing additional assistance to address these challenges.
Further, the lack of clarity between policy guidance issued by OII and criteria used by OESE in their compliance reviews of states' implementation efforts creates additional challenges in meeting the federal requirements for parental notification letters. Providing states with clear guidance that has been coordinated across Education offices is especially important now that the department has announced plans to take significant enforcement actions to ensure states comply with federal SES requirements.

While some districts do not have any students receiving services and, therefore, are not spending any Title I funds for SES, other districts are spending more than their entire set-aside and still have students on waiting lists to receive services. Two districts have been able to participate in Education's pilot program waiving federal regulations that preclude districts in need of improvement from providing SES, which may help them provide services to more students at a lower cost. However, extending this flexibility to more districts depends on the evaluation of the quality of these services to determine if SES is having a positive impact on student academic achievement. In addition, the absence of strategies that states can use to set parameters around program design and costs further hinders their ability to stretch available funding to serve more students.

Federal and state oversight of district efficiency in using federal funds to provide SES is hindered by incomplete reporting requirements, which direct states to report on the number of eligible children receiving SES but not the data they collect on the amount of Title I funding used to serve them. This information gap limits Education's ability to track state and district compliance in spending funds for SES. Further, Education's ability to ensure that federal dollars are effectively spent to improve student academic achievement is limited until states increase their capacity to monitor and evaluate provider performance. In the absence of additional federal technical assistance and access to information about state and district promising practices, some states may continue to struggle with implementation and evaluation of SES.

To help states and districts implement SES more effectively, we recommend that the Secretary of Education use the department's Web site and the Center on Innovation and Improvement, as well as other means of communication, to:

- Provide federal guidance on SES parental notification letters that is clear and has been coordinated internally between OII and OESE to provide additional assistance to states and districts to help them comply with federal requirements and ensure that letters are easy for parents to understand. Education might consider providing several samples of actual district notification letters that meet these criteria.

- Collect and disseminate information on promising practices used by states to attract more providers for certain areas and groups and promising practices used by districts to improve parental notification of SES services and providers' ability to serve specific groups of students and to encourage student attendance.

- Provide examples of how districts can involve schools and school officials to facilitate local coordination with providers.
To improve states’ and districts’ ability to make the most of funding for SES and provide services to the maximum number of students, we recommend that the Secretary of Education: Consider expanding the 2005-2006 pilot that allowed two districts in need of improvement to enter into flexibility agreements to serve as SES providers if evaluation results show that these districts can provide quality SES services. Clarify what states can do through the provider approval process to set parameters around program design and costs. For example, Education could issue guidance to states that clarifies their authority to set parameters on issues such as minimum hours of SES per student, minimum tutor qualifications, and cost ranges. In addition, Education could suggest to states that they coordinate these discussions with districts to address their concerns about program design and costs. To improve federal and state monitoring of SES, we recommend that the Secretary of Education: Require states to report information necessary to determine the amount of funds spent by districts to provide SES and the percentage of their Title I allocations that this amount represents. Because almost all states reported that they are planning to monitor district SES expenditures, Education could require the states to submit these data through the annual NCLBA Consolidated State Performance Reports. Provide states with technical assistance and guidance on how to evaluate the effect of SES on student academic achievement. For example, Education might require the Center on Innovation and Improvement to focus its SES assistance on providing states with suggested evaluation methods, measures to assess the impact of SES on achievement, and criteria for using this information to monitor and withdraw state approval from providers. Further, lessons learned and promising practices on evaluation could also be shared with states on the Center on Innovation and Improvement’s Web site or during national or regional meetings, trainings, or conferences. We provided a draft of this report to Education for review and comment. Educations’ written comments are reprinted in appendix II, and the department's technical comments were incorporated into the report as appropriate. In its written comments, Education expressed appreciation for the report’s recommendations and cited actions the department has already initiated or plans to take in addressing them. Specifically, Education noted several projects under development that might assist in carrying out our recommendations to provide more assistance to states on notifying parents, attracting providers for certain areas and groups, and involving schools. The department also said that is currently considering expanding the pilot allowing districts in need of improvement to apply to become SES providers, per our recommendation. Regarding our recommendation that Education clarify what states can do to set parameters around program design and costs, Education said it would consider addressing this issue further in the next set of revisions to the SES non-regulatory guidance. In addition, Education said it would address our recommendations to improve federal and state monitoring of SES by proposing that districts report on their SES spending and by providing more SES evaluation assistance to states through an updated issue brief as well as technical assistance provided by the Comprehensive Center on Innovation and Improvement and at a conference this fall. 
We are sending copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. Copies will also be made available upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about the report, please contact me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

To obtain nationally representative information on supplemental educational services (SES) participation, state and local implementation, and federal oversight, we conducted a Web-based survey of state SES coordinators and a mail survey of district SES coordinators from a nationally representative sample of districts with schools required to offer SES. We also conducted site visits during which we interviewed state, district, and school officials representing four school districts, and we conducted interviews with 22 SES providers both during the site visits and separately. In addition, we spoke with staff at Education involved in SES oversight and implementation and reviewed Education's data on SES. We conducted our work from August 2005 through July 2006 in accordance with generally accepted government auditing standards.

To better understand state SES implementation, particularly how states are monitoring and evaluating SES, we designed and administered a Web-based survey of state SES coordinators in all 50 states, the District of Columbia, and Puerto Rico. The survey was conducted between January and March 2006 with 100 percent of state SES coordinators responding. The survey included questions about student participation in SES, actions taken to increase participation, SES funding and expenditures, methods used to monitor and evaluate implementation, implementation challenges, and assistance received from Education.

Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pre-testing draft instruments and using a Web-based administration system. Specifically, during survey development, we pre-tested draft instruments with officials in Oregon, Maryland, and Washington between October and November 2005. In the pre-tests, we were generally interested in the clarity of the questions and the flow and layout of the survey. For example, we wanted to ensure definitions used in the surveys were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section was appropriate. On the basis of the pre-tests, the Web instrument underwent some slight revisions. A second step we took to minimize nonsampling errors was using a Web-based survey. By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the need for and the errors (and costs) associated with a manual data entry process.
To further minimize errors, programs used to analyze the survey data and make estimations were independently verified to ensure the accuracy of this work. While we did not fully validate specific information that states reported through our survey, we took several steps to ensure that the information was sufficiently reliable for the purposes of this report. For example, after the survey was closed, we made comparisons between select items from our survey data and other national-level data sets. We found our survey data were reasonably consistent with the other data sets. On the basis of our checks, we believe our survey data are sufficient for the purposes of our work.

To obtain national-level information on district implementation of SES, we administered a mail survey to a nationally representative sample of districts that had schools required to offer SES in school year 2004-2005. The survey was conducted between January and March 2006. To obtain the maximum number of responses to our survey, we sent a reminder postcard to nonrespondents approximately 1 week after the initial mailing of the survey instrument, a follow-up mailing with the full survey instrument to nonrespondents approximately 3 weeks after the initial mailing, and a second follow-up mailing with the full survey instrument approximately 4 weeks later. The survey included questions about student participation in SES, characteristics of students receiving SES, actions taken to increase participation, SES funding and expenditures, methods used to monitor and evaluate implementation, implementation challenges, and assistance received and still needed.

The target population of 1,095 districts consisted of public school districts with at least one school in each of their jurisdictions required to provide SES in the 2004-2005 school year. To define our population, we collected school improvement information from state education agency Web sites and the NCLBA Consolidated State Performance Reports: Part I for Reporting on School Year 2003-2004 that states submitted to Education. When available, we checked both sources for school improvement information and used the source that provided the most recently updated data, because states often update these data over the course of the school year. After constructing our population of districts, we used Education's Common Core of Data Local Education Agency (School District) preliminary file and the Public Elementary/Secondary School preliminary file for the 2003-2004 school year to further define the characteristics of our population. On the basis of our review of these data, we determined these sources to be adequate for the purposes of our work.

The sample design for the mail survey was a stratified random sample of districts with one certainty stratum containing all of the districts with 100,000 students or more and one stratum containing all other districts in the universe. We included the 21 districts with 100,000 or more students with certainty in the sample to ensure we gathered information from the largest districts required to offer SES. We selected a simple random sample of districts in the non-certainty stratum and calculated the sample size to achieve a precision of plus or minus 7 percent at the 95 percent confidence level for an expected proportion of 50 percent. To ensure the sample sizes were adequate, we increased the sample size assuming we would obtain a 70 percent response rate. The total sample size for this stratum was 237 districts.
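The sample-size arithmetic described above can be reproduced with a standard formula for estimating a proportion. The sketch below is illustrative only: it assumes a non-certainty universe of 1,074 districts (the 1,095-district population minus the 21 certainty districts) and a normal-approximation formula with a finite population correction; the report does not state the exact formula GAO used.

```python
import math

def required_sample_size(population, margin=0.07, p=0.50, z=1.96,
                         expected_response_rate=0.70):
    """Sample size for estimating a proportion at the given margin of error,
    with a finite population correction, inflated to allow for nonresponse."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2     # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)          # finite population correction
    return math.ceil(n / expected_response_rate)  # inflate for nonresponse

# Non-certainty stratum: 1,095 districts minus the 21 certainty districts
print(required_sample_size(1_095 - 21))  # -> 237, matching the reported size
```

Under these assumptions the calculation lands exactly on the 237 districts reported, which suggests the figures in the text are internally consistent.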
In the sample, each district in the population had a known, nonzero probability of being selected. Each selected district was subsequently weighted in the analysis to account statistically for all the schools in the population, including those that were not selected. Table 8 provides a description of the universe and sample of districts.

Because we surveyed a sample of districts, our results are estimates of a population of districts and thus are subject to sampling errors that are associated with samples of this size and type. Our confidence in the precision of the results from this sample is expressed in 95 percent confidence intervals, which are expected to include the actual results in 95 percent of the samples of this type. We calculated confidence intervals for this sample based on methods that are appropriate for a stratified random sample. We determined that 10 of the sampled districts were out of scope because they did not have any schools required to provide SES in the 2004-2005 school year. All estimates produced from the sample and presented in this report are for the estimated target population of 1,034 districts with at least one school required to provide SES in the 2004-2005 school year. All percentage estimates included in this report have margins of error of plus or minus 8 percentage points or less, except for those shown in table 9. All other numerical estimates, such as the total number of schools required to offer SES in 2004-2005, included in this report have margins of error of plus or minus 18 percent or less.

We took steps to minimize nonsampling errors that are not accounted for through statistical tests, like sampling errors. In developing the mail survey, we conducted several pre-tests of draft instruments. We pre-tested the survey instrument with district officials in Woodburn, Ore.; Tacoma, Wash.; Baltimore, Md.; and Alexandria, Va., between October and November 2005. These pre-tests were similar to the state Web survey pre-tests in design and content. On the basis of the pre-tests, the draft survey instrument underwent some slight revisions.

While we did not fully validate specific information that districts reported through our survey, we took several steps to ensure that the information was sufficiently reliable for the purposes of this report. For example, data from the surveys were double-keyed to ensure data entry accuracy, and the information was analyzed using statistical software. After the survey was closed, we also made comparisons between select items from our survey data and other national-level data sets. We found our survey data were reasonably consistent with the external sources. On the basis of our checks, we believe our survey data are sufficient for the purposes of our work.

We received survey responses from 73 percent of the 258 district Title I/SES coordinators in our sample. The response rate, adjusted for the known and estimated districts that were out of scope, was 77 percent. After the survey was closed, we analyzed the survey respondents to determine if there were any differences between the responding districts, the nonresponding districts, and the population. We performed this analysis for three characteristics—total number of students enrolled, total number of special education students, and total number of English language learner students. We determined whether sample-based estimates of these characteristics compared favorably with the known population values.
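To illustrate how estimates from a design like this are weighted and bounded, the sketch below computes a stratified proportion estimate with a 95 percent confidence interval using standard stratified-sampling formulas. The stratum sizes mirror the design described above, but the "hit" counts are hypothetical, and this is not the software GAO used.

```python
def stratified_proportion(strata):
    """strata: list of (N_h, n_h, hits_h) = population size, sample size, and
    number of sampled units with the characteristic, for each stratum."""
    N = sum(N_h for N_h, _, _ in strata)
    est = sum((N_h / N) * (hits / n_h) for N_h, n_h, hits in strata)
    # Stratified variance with finite population correction; a certainty
    # stratum (n_h == N_h) contributes no sampling variance at all.
    var = sum(((N_h / N) ** 2) * (1 - n_h / N_h)
              * (hits / n_h) * (1 - hits / n_h) / (n_h - 1)
              for N_h, n_h, hits in strata)
    half_width = 1.96 * var ** 0.5
    return est, (est - half_width, est + half_width)

# Hypothetical example: certainty stratum of 21 large districts (all sampled)
# plus a random sample of 237 of the remaining 1,074 districts.
estimate, ci = stratified_proportion([(21, 21, 12), (1_074, 237, 110)])
print(f"{estimate:.1%}, 95% CI {ci[0]:.1%} to {ci[1]:.1%}")
```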
The population value for all of the characteristics we examined fell within the 95 percent confidence intervals for the estimates from the survey respondents. On the basis of the 77 percent response rate and this analysis, we chose to include the survey results in our report and produce sample-based estimates to the population of districts required to provide SES in the 2004-2005 school year.

To understand SES implementation at the local level, we conducted site visits to four districts between October 17, 2005, and February 16, 2006. The districts visited included Woodburn School District (Woodburn, Ore.), Hamilton County Schools (Chattanooga, Tenn.), Newark Public Schools (Newark, N.J.), and Chicago Public Schools (Chicago, Ill.). The four districts visited were selected because they had experience implementing SES in their schools and were recommended by stakeholders as having promising parent outreach and/or monitoring practices. When viewed as a group, the districts also provided variation across characteristics such as geographic location, district size, student ethnicity, and the percentage of students with limited English proficiency or disabilities.

During the site visits, we interviewed state officials, including the state SES coordinator, and district officials, including the superintendent and SES coordinator. We also interviewed officials representing 12 schools, including principals and other school staff involved with SES. In total, we visited several schools of each level, from elementary to high, and though district officials selected the schools we visited, all of the schools had experience implementing SES. Through our interviews with state, district, and school officials, we collected information on district efforts to notify parents and fulfill implementation responsibilities, student participation, providers, local implementation challenges, and implementation assistance received and needed. During the visits, we also interviewed providers and observed tutoring sessions in order to better understand implementation. During our visit to Woodburn, Ore., we also observed a provider fair.

In addition to our site visits to four districts, we also visited the Rhode Island Department of Elementary and Secondary Education in March 2006 to gather additional information on state efforts to monitor and evaluate SES. Rhode Island invited us to attend two meetings the state held with districts implementing SES and providers serving students in Rhode Island, during which SES challenges, ways to improve implementation, and state efforts to evaluate providers were discussed.

In total, we conducted interviews with 22 providers, including 15 providers during the site visits and 7 providers operating in multiple states. The Education Industry Association assisted our efforts to contact multi-state providers, and most of the multi-state providers we interviewed were association members. Multi-state provider interviews were conducted between November and December 2005. Through all of our provider interviews, we collected information on provider efforts to increase participation in SES, align services with state standards and district curriculum, and communicate with parents and schools to ensure students are receiving needed services. We also collected information on students served, tutor and program characteristics, and provider challenges to SES implementation.
While the providers we interviewed reflect some variety in provider characteristics, our selections were not intended to be representative. Thus, the findings from our interviews cannot be used to make inferences about all providers.

We analyzed state data submitted to Education through the NCLBA Consolidated State Performance Reports (CSPR) for school years 2002-2003, 2003-2004, and 2004-2005. State reports from all 3 years included the number of students receiving SES and the number of schools those students attended, and state reports from 2003-2004 and 2004-2005 also included the number of students eligible for SES. Data from the 2003-2004 CSPRs were used to assist our analysis of SES participation. To assess the reliability of the 2003-2004 data, we performed a series of tests, which included checking to ensure that data were consistent, that subtotals added to overall totals, and that data provided for 1 year bore a reasonable relationship to the next year's data and to data reported elsewhere, including state education reports. We also spoke with Education officials about their follow-up efforts to verify the data. At the time of our review, Education was in the process of completing efforts to verify the 2003-2004 data. While we compared the 2004-2005 CSPR data to data obtained through our state and district surveys to further verify our data, we generally did not use the 2004-2005 CSPR data for our analysis. During this comparison analysis, where we found discrepancies or sought clarification, we followed up with state officials. In several states, officials revised the numbers that they had initially reported to us or to Education. On the basis of our review of these data, we determined these sources to be adequate for the purposes of our work.

We also considered SES-related findings from Education studies, including the Evaluation of Title I Accountability Systems and School Improvement Efforts: Findings From 2002-03 (2005) and the National Assessment of Title I: Interim Report (2006). To ensure the findings from these studies were generally reliable, we reviewed each study's methodology, including data sources and analyses, limitations, and conclusions. In addition, in designing our state and district surveys, we reviewed SES-related survey questions used by Education in these studies.

Cindy Ayers, Assistant Director, and Rachel Frisk, Analyst-in-Charge, managed this assignment and made significant contributions to all aspects of this report. Cathy Roark, Ted Burik, and David Perkins also made significant contributions. Kevin Jackson, Jean McSween, Jim Ashley, and Jerry Sandau provided methodological expertise and assistance; Rachael Valliere assisted with message and report development; and Rasheeda Curry made contributions during study design. In addition, Jessica Botsford assisted in the legal analysis.

No Child Left Behind Act: Assistance from Education Could Help States Better Measure Progress of Students with Limited English Proficiency. GAO-06-815. Washington, D.C.: July 26, 2006.
No Child Left Behind Act: States Face Challenges Measuring Academic Growth that Education's Initiatives May Help Address. GAO-06-661. Washington, D.C.: July 17, 2006.
No Child Left Behind Act: Most Students with Disabilities Participated in Statewide Assessments, but Inclusion Options Could Be Improved. GAO-05-618. Washington, D.C.: July 20, 2005.
No Child Left Behind Act: Education Needs to Provide Additional Technical Assistance and Conduct Implementation Studies for School Choice Provision. GAO-05-7. Washington, D.C.: December 10, 2004.
No Child Left Behind Act: Improvements Needed in Education's Process for Tracking States' Implementation of Key Provisions. GAO-04-734. Washington, D.C.: September 30, 2004.
No Child Left Behind Act: Additional Assistance and Research on Effective Strategies Would Help Small Rural Districts. GAO-04-909. Washington, D.C.: September 23, 2004.
Disadvantaged Students: Fiscal Oversight of Title I Could Be Improved. GAO-03-377. Washington, D.C.: February 28, 2003.
Title I Funding: Poor Children Benefit Though Funding Per Poor Child Differs. GAO-02-242. Washington, D.C.: January 31, 2002.

| The No Child Left Behind Act of 2001 (NCLBA) requires districts with schools that have not met state performance goals for 3 consecutive years to offer their low-income students supplemental educational services (SES), such as tutoring, if these schools receive Title I funds. SES are provided outside of the regular school day by a state-approved provider, with responsibility for implementation shared by states and districts. GAO examined (1) how SES participation changed between school years 2003-2004 and 2004-2005; (2) how SES providers are working with districts to deliver SES; (3) how states are monitoring and evaluating SES; and (4) how the Department of Education (Education) monitors and supports state implementation of SES. To collect data on SES, GAO surveyed all states and a nationally representative sample of districts with schools required to offer SES. We also visited 4 school districts, interviewed 22 SES providers, reviewed SES-related research, and interviewed Education staff.

SES participation among eligible students increased from 12 to 19 percent between school years 2003-2004 and 2004-2005, and the number of recipients also increased, due in part to a rise in the number of schools required to offer services. Districts have used some promising practices to inform parents and encourage participation, such as offering services on school campuses and at various times. However, challenges remain, including timely and effective notification of parents and attracting providers to serve certain areas and students, such as rural districts or students with disabilities.

To promote improved student academic achievement, SES providers took steps to align their curriculum with district instruction and communicate with teachers and parents, though the extent of their efforts varied. A majority of the 22 providers we interviewed worked to align SES and district curriculum by hiring teachers familiar with the district curriculum as tutors. However, at least some providers did not have any contact with teachers in about 40 percent of districts. Both providers and district officials experienced challenges related to contracting and coordination of service delivery. Providers, districts, and schools reported that greater involvement of schools would improve SES delivery and coordination, as it has in some places where this is occurring.

While state monitoring of district and provider efforts to implement SES has been limited in past years, more states reported conducting on-site reviews and other monitoring activities during 2005-2006. In addition, districts have taken a direct role in monitoring providers, and their monitoring efforts have similarly increased. Although states are required to withdraw approval from providers that fail to increase student academic achievement for 2 years, many states struggle to develop meaningful SES evaluations.
While a few states have completed evaluations, none provides a conclusive assessment of SES providers' effect on student academic achievement.

Several Education offices monitor SES activity across the country and provide SES support to states and districts through written guidance, grants, and technical assistance. However, states and districts reported needing additional SES evaluation support and technical assistance. For example, 85 percent of states reported needing assistance with methods for evaluating SES. Many also voiced interest in Education's pilot programs that increase SES flexibility, including the one that allowed certain districts identified as in need of improvement to act as providers.
Hepatitis C was first recognized as a unique disease in 1989. It is the most common chronic blood-borne infection in the United States and is a leading cause of chronic liver disease. The virus causes a chronic infection in 85 percent of cases. Hepatitis C, which is the leading indication for liver transplantation, can lead to liver cancer, cirrhosis (scarring of the liver), or end-stage liver disease. Most people infected with hepatitis C are relatively free of physical symptoms. While hepatitis C antibodies generally appear in the blood within 3 months of infection, it can take 15 years or longer for the infection to develop into cirrhosis.

Blood tests to detect the hepatitis C antibody, which became available in 1992, have helped to virtually eliminate the risk of infection through blood transfusions and have helped curb the spread of the virus. Many individuals were already infected, however, and because many of them have no symptoms, they are unaware of their infection. Hepatitis C continues to be spread through blood exposure, such as inadvertent needle-stick injuries in health care workers and through the sharing of needles by intravenous drug abusers. Early detection of hepatitis C is important because undiagnosed persons miss opportunities to safeguard their health by unknowingly behaving in ways that could speed the progression of the disease. For example, alcohol use can hasten the onset of cirrhosis and liver failure in those infected with the hepatitis C virus. In addition, persons carrying the virus pose a public health threat because they can infect others.

The Centers for Disease Control and Prevention estimates that nearly 4 million Americans are infected with the hepatitis C virus. Approximately 30,000 new infections occur annually. The prevalence of hepatitis C infection among veterans is unknown, but limited survey data suggest that hepatitis C has a higher prevalence among veterans who are currently using VA's health care system than among the general population because of veterans' higher frequency of risk factors. A 6-year study (1992-1998) of veterans who received health care at the VA Palo Alto Health Care System in Northern California reported that hepatitis C infection was much more common among veterans within a very narrow age distribution (41 to 60 years of age), and intravenous drug use was the major risk factor. VA began a national study of the prevalence of hepatitis C in the veteran population in October 2001. Data collection for the study has been completed, but results have not been approved for release.

The prevalence of hepatitis C among veterans could have a significant impact on current and future VA health care resources: hepatitis C accounts for over half of the liver transplants needed by VA patients, costing as much as $140,000 per transplant, and the drug therapy to treat hepatitis C is costly, about $13,000 for a 48-week treatment regimen.

In the last few years, considerable research has been done concerning hepatitis C. The National Institutes of Health (NIH) held a consensus development conference on hepatitis C in 1997 to assess the methods used to diagnose, treat, and manage hepatitis C infections. In June 2002, NIH convened a second hepatitis C consensus development conference to review developments in management and treatment of the disease and identify directions for future research. This second panel concluded that substantial advances had been made in the effectiveness of drug therapy for chronic hepatitis C infection.
VA’s Public Health Strategic Healthcare Group is responsible for VA’s hepatitis C program, which mandates universal screening of veterans to identify at-risk veterans when they visit VA facilities for routine medical care and testing of those with identified risk factors, or those who simply want to be tested. VA has developed guidelines intended to assist health care providers who screen, test, and counsel veterans for hepatitis C. Providers are to educate veterans about their risk of acquiring hepatitis C, notify veterans of hepatitis C test results, counsel those infected with the virus, help facilitate behavior changes to reduce veterans’ risk of transmitting hepatitis C, and recommend a course of action. In January 2003, we reported that VA medical facilities varied considerably in the time that veterans must wait before physician specialists evaluate their medical conditions concerning hepatitis C treatment recommendations. To assess the effectiveness of VA’s implementation of its universal screening and testing policy, VA included performance measures in the fiscal year 2002 network performance plan. Network performance measures are used by VA to hold managers accountable for the quality of health care provided to veterans. For fiscal year 2002, the national goal for testing veterans identified as at risk for hepatitis C was established at 55 percent based on preliminary performance results obtained by VA. To measure compliance with the hepatitis C performance measures, VA uses data collected monthly through its External Peer Review Program, a performance measurement process under which medical record reviewers collect data from a sample of veterans’ computerized medical records. Development of VA’s computerized medical record began in the mid-1990s when VA integrated a set of clinical applications that work together to provide clinicians with comprehensive medical information about the veterans they treat. Clinical information is readily accessible to health care providers at the point of care because the veteran’s medical record is always available in VA’s computer system. All VA medical facilities have computerized medical record systems. Clinical reminders are electronic alerts in veterans’ computerized medical records that remind providers to address specific health issues. For example, a clinical reminder would alert the provider that a veteran needs to be screened for certain types of cancer or other disease risk factors, such as hepatitis C. In July 2000, VA required the installation of hepatitis C clinical reminder software in the computerized medical record at all facilities. This reminder alerted providers when they opened a veteran’s computerized medical record that the veteran needed to be screened for hepatitis C. In fiscal year 2002, VA required medical facilities to install an enhanced version of the July 2000 clinical reminder. The enhanced version alerts the provider to at-risk veterans who need hepatitis C testing, is linked directly to the entry of laboratory orders for the test, and is satisfied once the hepatitis C test is ordered. Even though VA’s fiscal year 2002 performance measurement results show that it tested 62 percent of veterans identified to be at risk for hepatitis C, exceeding its national goal of 55 percent, thousands of veterans in the sample who were identified as at risk were not tested. Moreover, the percentage of veterans identified as at risk who were tested varied widely among VA’s 21 health care networks. 
Specifically, we found that VA identified in its performance measurement sample 8,501 veterans nationwide who had hepatitis C risk factors out of a sample of 40,489 veterans visiting VA medical facilities during fiscal year 2002. VA determined that tests were completed, in fiscal year 2002 or earlier, for 62 percent of the 8,501 veterans based on a review of each veteran's medical record through its performance measurement process. For the remaining 38 percent (3,269 veterans), VA did not complete hepatitis C tests when the veterans visited VA facilities.

The percentage of identified at-risk veterans tested for hepatitis C ranged, as table 1 shows, from 45 to 80 percent for individual networks. Fourteen of VA's 21 health care networks exceeded VA's national testing performance goal of 55 percent, with 7 networks exceeding VA's national testing performance level of 62 percent. The remaining 7 networks that did not meet VA's national performance goal tested from 45 percent to 54 percent of at-risk veterans.

VA's fiscal year 2002 testing rate for veterans identified as at risk for hepatitis C reflects tests performed in fiscal year 2002 and in prior fiscal years. Thus, a veteran who was identified as at risk and tested for hepatitis C in fiscal year 1998 and whose medical record was reviewed as part of the fiscal year 2002 sample would be counted as tested in VA's fiscal year 2002 performance measurement result. As a result of using this cumulative measurement, VA's fiscal year 2002 performance result for testing at-risk veterans who visited VA facilities in fiscal year 2002 and needed hepatitis C tests is unknown. To determine if the testing rate is improving for veterans needing hepatitis C tests when they were seen at VA in fiscal year 2002, VA would also need to look at a subset of the sample of veterans currently included in its performance measure. For example, when we excluded veterans from the sample who were tested for hepatitis C prior to fiscal year 2002, and included in the performance measurement sample only those veterans who were seen by VA in fiscal year 2002 and needed to be tested for hepatitis C, we found Network 5 tested 38 percent of these veterans, as compared to Network 5's cumulative performance measurement result of 60 percent.
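The distinction between the cumulative measure and a current-year subset can be made concrete with a short calculation. The sketch below is illustrative only: the record field is hypothetical, and the code simply applies the two denominators described above (all at-risk veterans versus only those not already tested before the current fiscal year).

```python
def testing_rates(at_risk_records, current_fy=2002):
    """Each record holds 'test_fy': the fiscal year a hepatitis C test was
    completed, or None if the veteran was never tested."""
    # Cumulative measure: a test completed in any year counts as tested.
    cumulative_rate = (sum(1 for r in at_risk_records if r["test_fy"] is not None)
                       / len(at_risk_records))

    # Current-year subset: drop veterans already tested before current_fy,
    # then ask what share of those still needing a test received one this year.
    still_needed = [r for r in at_risk_records
                    if r["test_fy"] is None or r["test_fy"] >= current_fy]
    current_rate = (sum(1 for r in still_needed if r["test_fy"] == current_fy)
                    / len(still_needed))
    return cumulative_rate, current_rate

# Hypothetical 10-veteran sample: 4 tested before FY2002, 2 tested in FY2002,
# 4 never tested -> cumulative 60 percent, current-year 2/6 = 33 percent.
sample = ([{"test_fy": 1998}] * 4 + [{"test_fy": 2002}] * 2
          + [{"test_fy": None}] * 4)
print(testing_rates(sample))  # (0.6, 0.333...)
```

As the toy numbers show, a network can post a healthy cumulative rate while testing only a minority of the veterans who actually needed a test in the current year, which is the gap the Network 5 subset analysis exposed.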
We identified three factors that impeded the process used by our case study network, VA's Network 5 (Baltimore), for testing veterans identified as at risk for hepatitis C. The factors were tests not being ordered by the provider, ordered tests not being completed, and providers being unaware that needed tests had not been ordered or completed. More than two-thirds of the time, veterans identified as at risk were not tested because providers did not order the test, a crucial step in the process. The remainder of these untested veterans had tests ordered by providers, but the actual laboratory testing process was not completed. Moreover, veterans in need of hepatitis C testing had not been tested because providers did not always recognize during subsequent clinic visits that the hepatitis C testing process had not been completed. These factors are similar to those we identified and reported in our testimony in June 2001.

Primary care providers and clinicians in Network 5's three facilities offered two reasons that hepatitis C tests were not ordered for over two-thirds of the veterans identified as at risk but not tested for hepatitis C in the Network 5 fiscal year 2002 performance measurement sample. First, facilities lacked a method for clear communication between nurses who identified veterans' risk factors and providers who ordered hepatitis C tests. For example, in two facilities, nurses identified veterans' need for testing but providers were not alerted through a reminder in the computerized medical record to order a hepatitis C test. In one of these facilities, because nursing staff were at times delayed in entering a note in the computerized medical record after screening a veteran for hepatitis C risk factors, the provider was unaware of the need to order a test for a veteran identified as at risk. The three network facilities have changed their practices for ordering tests, and as of late 2002, nursing staff in each of the facilities are ordering hepatitis C tests for at-risk veterans.

The second reason for tests not being ordered, which was offered by a clinician in another one of the three Network 5 facilities, was that nursing staff did not properly complete the ordering procedure in the computer. Although nurses identified at-risk veterans using the hepatitis C screening clinical reminder in the medical record, they sometimes overlooked the chance the reminder gave them to place a test order. To correct this, nursing staff were retrained on the proper use of the reminder.

For the remaining 30 percent of untested veterans in Network 5, tests were not completed for veterans who visited laboratories to have blood drawn after hepatitis C tests were ordered. One reason that laboratory staff did not obtain blood samples for tests was that more than two-thirds of the veterans' test orders had expired by the time they visited the laboratory. VA medical facilities consider an ordered test to be expired or inactive if the veteran's visit to the laboratory falls outside the number of days designated by the facility. For example, at two Network 5 facilities, laboratory staff considered a test order to be expired or inactive if the date of the order was more than 30 days before or after the veteran visited the laboratory. If the veteran's hepatitis C test was ordered and the veteran visited the laboratory to have the test completed 31 days later, the test would not be completed because the order would have exceeded the 30-day period and would have expired. Providers can also select future dates as effective dates. If the provider had designated a future date for the order and the veteran visited the laboratory within 30 days of that future date, the order would be considered active.

Another reason for incomplete tests was that laboratory staff overlooked some active test orders when veterans visited the laboratory. VA facility officials told us that laboratory staff could miss test orders, given the many test orders some veterans have in their computerized medical records. The computer package used by laboratory staff to identify active test orders differs from the computer package used by providers to order tests. The laboratory package does not allow staff to easily identify all active test orders for a specific veteran by creating a summary of active test orders. According to a laboratory supervisor at one facility, the process for identifying active test orders is cumbersome because staff must scroll back and forth through a list of orders to find active laboratory test orders. Further complicating the identification of active orders for laboratory staff, veterans may have multiple laboratory test orders submitted on different dates from several providers. As a result, when the veteran visits the laboratory to have tests completed, instead of having a summary of active test orders, staff must scroll through a daily list of ordered tests (in two facilities, up to 60 days of orders) to identify the laboratory tests that need to be completed. Network and facility officials are aware of, but have not successfully addressed, this problem. VA plans to upgrade the computer package used by laboratory staff during fiscal year 2005.
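The 30-day order window described above amounts to a simple date comparison, and the missing capability is essentially a per-veteran summary of active orders. The sketch below is illustrative only, with hypothetical field names; it is not VA's laboratory software.

```python
from datetime import date

def order_is_active(effective_date, visit_date, window_days=30):
    # Rule used at two Network 5 facilities: an order is active only if the
    # lab visit falls within 30 days before or after its effective date.
    return abs((visit_date - effective_date).days) <= window_days

def active_orders(orders, visit_date):
    """Per-veteran summary of active orders -- the view laboratory staff could
    not easily generate by scrolling through daily order lists."""
    return [o for o in orders if order_is_active(o["effective"], visit_date)]

visit = date(2002, 5, 15)
orders = [
    {"test": "hepatitis C antibody", "effective": date(2002, 4, 10)},  # 35 days: expired
    {"test": "lipid panel",          "effective": date(2002, 5, 1)},   # 14 days: active
]
print(active_orders(orders, visit))  # only the lipid panel order remains active
```

In this example the hepatitis C order has lapsed by the time of the visit, exactly the failure mode the report describes for veterans who reached the laboratory more than 30 days after their order.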
Hepatitis C tests that were not ordered or completed sometimes went undetected for long periods in Network 5, even though veterans often made multiple visits to primary care providers after their hepatitis C risk factors were identified. Our review of medical records showed that nearly two-thirds of the at-risk veterans in Network 5's performance measurement sample who did not have ordered or completed hepatitis C tests had risk factors identified primarily in fiscal years 2002 and 2001. All veterans identified as at risk but who did not have hepatitis C test orders visited VA primary care providers at least once after having a risk factor identified during a previous primary care visit, including nearly 70 percent who visited more than three times. Further, almost all of the at-risk veterans who had hepatitis C tests ordered but not completed returned for follow-up visits for medical care. Even when the first follow-up visits were made to the same providers who originally identified these veterans as being at risk for hepatitis C, providers did not recognize that hepatitis C tests had not been ordered or completed. Providers did not follow up by checking for hepatitis C test results in the computerized medical records of these veterans. Most of these veterans subsequently visited the laboratory to have blood drawn for other tests and, therefore, could have had the hepatitis C test completed if the providers had recognized that test results were not available and reordered the hepatitis C tests.

Steps intended to improve the testing rate of veterans identified as at risk for hepatitis C have been taken in three of VA's 21 health care networks. VA network and facility officials in the three networks we reviewed—Network 5 (Baltimore), Network 2 (Albany), and Network 9 (Nashville)—identified similar factors that impede hepatitis C testing and most often focused on getting tests ordered immediately following risk factor identification. Officials in two networks modified VA's required hepatitis C testing clinical reminder, which is satisfied when a hepatitis C test is ordered, to continue to alert the provider until a hepatitis C test result is in the medical record. Officials at two facilities, one in Network 5 and the other in Network 9, created a safety net for veterans at risk for hepatitis C who remain untested by developing a method that looks back through computerized medical records to identify these veterans. The method has been adopted in all six facilities in Network 9; the other two facilities in Network 5 have not adopted it.

VA network and facility managers in two networks we reviewed, Networks 2 and 9, instituted networkwide changes intended to improve the ordering of hepatitis C tests for veterans identified as at risk. Facility officials recognized that VA's enhanced clinical reminder that facilities were required to install by the end of fiscal year 2002 only alerted providers to veterans without ordered hepatitis C tests and did not alert providers to veterans with ordered but incomplete tests.
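The difference between the required reminder and the networks' modification is essentially where the alert is "satisfied." The sketch below is a simplified model of the two behaviors described above, with hypothetical record fields; it is not VA's actual reminder software.

```python
def enhanced_reminder_fy2002(record):
    # VA's required version: the alert clears as soon as a test is ordered,
    # even if the laboratory never completes the order.
    return record["at_risk"] and not record["test_ordered"]

def modified_reminder(record):
    # Network 2/9 version: the alert keeps firing until an actual test result
    # is in the medical record, prompting the provider to reorder as needed.
    return record["at_risk"] and record["test_result"] is None

record = {"at_risk": True, "test_ordered": True, "test_result": None}
print(enhanced_reminder_fy2002(record))  # False: order placed, alert satisfied
print(modified_reminder(record))         # True: no result yet, keep alerting
```

Keying the alert to the presence of a result rather than an order is what turns the reminder into a backup system for expired or overlooked orders.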
These two networks independently changed this reminder to improve compliance with the testing of veterans at risk for hepatitis C. In both networks, the clinical reminder was modified to continue to alert the provider, even after a hepatitis C test was ordered. Thus, if the laboratory has not completed the order, the reminder is intended to act as a backup system to alert the provider that a hepatitis C test still needs to be completed. Providers continue to receive alerts until a hepatitis C test result is placed in the medical record, ensuring that providers are aware that a hepatitis C test might need to be reordered. The new clinical reminder was implemented in Network 2 in January 2002, and Network 9 piloted the reminder at one facility and then implemented it in all six network facilities in November 2002.

Officials at two facilities in our review searched all records in their facilities' computerized medical record systems and found several thousand untested veterans identified as at risk for hepatitis C. The process, referred to as a "look back," involves searching all medical records to identify veterans who have risk factors for hepatitis C but have not been tested, either because the providers did not order the tests or because ordered tests were not completed. The look back serves as a safety net for these veterans. The network or facility can perform the look back with any chosen frequency and over any period of time. The population searched in a look back includes all veteran users of the VA facility and is more inclusive than the population that is sampled monthly in VA's performance measurement process.

As a result of a look back, one facility manager in Network 5 identified 2,000 veterans who had hepatitis C risk factors identified since January 2001 but had not been tested as of August 2002. Facility staff began contacting the identified veterans in October 2002 to offer them the opportunity to be tested. Although officials in the other two Network 5 facilities have the technical capability to identify and contact all untested veterans determined to be at risk for hepatitis C, they have not done so. An official at one facility not currently conducting look back searches stated that the facility would need support from those with computer expertise to conduct a look back search.

A facility manager in Network 9 identified, through a look back, more than 1,500 veterans who had identified risk factors for hepatitis C but were not tested from January 2001 to September 2002. The manager in this facility began identifying untested, at-risk veterans in late March 2003, and providers subsequently began contacting these veterans to arrange testing opportunities. Other Network 9 facility managers have also begun to identify untested, at-risk veterans. Given that two facilities in our review have identified over 3,000 at-risk veterans in need of testing through look back searches, it is likely that similar situations exist at other VA facilities.
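A look back of the kind these facilities ran reduces to a single query over the full patient population rather than a monthly sample. The sketch below is a schematic with hypothetical field names, not the facilities' actual code.

```python
from datetime import date

def look_back(all_records, since=date(2001, 1, 1)):
    """Return veterans flagged as at risk on or after `since` who have no
    hepatitis C result on file and did not refuse testing -- the safety-net
    population the Network 5 and Network 9 facilities then contacted."""
    return [r for r in all_records
            if r["risk_identified_on"] is not None
            and r["risk_identified_on"] >= since
            and r["hcv_result"] is None
            and not r["refused_testing"]]

# Example: candidates = look_back(facility_records)
# Staff would then contact each candidate to offer a testing opportunity.
```

Because the query scans every facility record rather than a sample, it catches veterans whom both the reminder and the performance measurement sample can miss.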
Some networks and facilities have upgraded VA’s required hepatitis C clinical reminder to continue to alert providers until a hepatitis C test result is present in the medical record. Such a system appears to have merit, but neither the networks nor VA has evaluated its effectiveness. Network and facility managers would benefit from knowing, in addition to the cumulative results, current fiscal year performance results for hepatitis C testing to determine the effectiveness of actions taken to improve hepatitis C testing rates. Some facilities have compensated for weaknesses in hepatitis C test ordering and completion processes by conducting look backs through computerized medical record systems to identify all at-risk veterans in need of testing. If all facilities were to conduct look back searches, potentially thousands more untested, at-risk veterans would be identified. To improve VA’s testing of veterans identified as at risk of hepatitis C infection, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to (1) determine the effectiveness of actions taken by networks and facilities to improve hepatitis C testing rates for veterans and, where actions have been successful, consider applying these improvements systemwide, and (2) provide local managers with information on current fiscal year performance results, using a subset of the performance measurement sample of veterans, so that they can determine the effectiveness of actions taken to improve hepatitis C testing processes. In commenting on a draft of this report, VA concurred with our recommendations. VA said its agreement with the report’s findings was somewhat qualified because it was based on fiscal year 2002 performance measurement results. VA stated that the use of fiscal year 2002 results does not accurately reflect the significant improvement in VA’s hepatitis C testing performance, up from 62 percent in fiscal year 2002 to 86 percent in fiscal year 2003, results that became available recently. VA, however, did not include its fiscal year 2003 hepatitis C testing performance results by individual network, and as a result, we do not know if the wide variation in network results, which we found in fiscal year 2002, still exists in fiscal year 2003. We incorporated updated performance information provided by VA where appropriate. VA did report that it has, as part of its fiscal year 2003 hepatitis C performance measurement system, provided local facility managers with a tool to assess real-time performance in addition to cumulative performance. Because this tool was not available at the time we conducted our audit work, we were unable to assess its effectiveness. VA’s written comments are reprinted in appendix II. We are sending copies of this report to the Secretary of Veterans Affairs and other interested parties. We also will make copies available to others upon request. In addition, the report is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-7101. Another contact and key contributors are listed in appendix III.
To follow up on the Department of Veterans Affairs’ (VA) implementation of performance measures for hepatitis C, we (1) reviewed VA’s fiscal year 2002 performance measurement results of testing veterans it identified as at risk for hepatitis C, (2) identified factors that impede VA’s efforts to test veterans for hepatitis C in one VA health care network, and (3) identified actions taken by VA networks and medical facilities intended to improve the testing rate of veterans identified as at risk for hepatitis C. We reviewed VA’s fiscal year 2002 hepatitis C testing performance results, the most recently available data at the time we conducted our work, for a sample of 8,501 veterans identified as at risk and compared VA’s national and network results for fiscal year 2002 against VA’s performance goal for hepatitis C testing. The sample of veterans identified as at risk for hepatitis C was selected from VA’s performance measurement process—also referred to as the External Peer Review Process—that is based on data abstracted from medical records by a contractor. In addition, we looked at one VA health care network’s testing rate for at-risk veterans visiting its clinics in fiscal year 2002. To test the reliability of VA’s hepatitis C performance measurement data, we reviewed 288 medical records in Network 5 (Baltimore) and compared the results against the contractor’s results for the same medical records and found that VA’s data were sufficiently reliable for our purposes. To augment our understanding of VA’s performance measurement process for hepatitis C testing, we reviewed VA documents and interviewed officials in VA’s Office of Quality and Performance and Public Health Strategic Health Care Group. To identify the factors that impede VA’s efforts to test veterans for hepatitis C, we conducted a case study of the three medical facilities located in VA’s Network 5: Martinsburg, West Virginia; Washington, D.C.; and the VA Maryland Health Care System. We chose Network 5 for our case study because its hepatitis C testing performance, at 60 percent, was comparable to VA’s national performance of 62 percent. As part of the case study of Network 5, we reviewed medical records for all 288 veterans identified as at risk for hepatitis C who were included in that network’s sample for VA’s fiscal year 2002 performance measurement process. Of the 288 veterans identified as at risk who needed hepatitis C testing, VA’s performance results found that 115 veterans in VA’s Network 5 were untested. We reviewed the medical records for these 115 veterans and found hepatitis C testing results or indications that the veterans refused testing in 21 cases. Eleven veterans had hepatitis C tests performed subsequent to VA’s fiscal year 2002 performance measurement data collection. Hepatitis C test results or test refusals for 10 veterans were overlooked during VA’s data collection. As such, we consider hepatitis C testing opportunities to have been missed for 94 veterans. On the basis of our medical record review, we determined if the provider ordered a hepatitis C test and, if the test was ordered, why the test was not completed. For example, if a hepatitis C test had been ordered but a test result was not available in the computerized medical record, we determined whether the veteran visited the laboratory after the test was ordered. If the veteran had visited the laboratory, we determined if the test order was active at the time of the visit and was overlooked by laboratory staff.
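The record-review tallies above reconcile as the following worked check, using only the figures given in the text:

```python
sampled             = 288  # at-risk veterans in Network 5's FY 2002 sample
reported_untested   = 115  # flagged as untested by VA's performance results

tested_after_cutoff = 11   # tested after VA's FY 2002 data collection ended
overlooked          = 10   # results or refusals missed during data collection
accounted_for       = tested_after_cutoff + overlooked  # the 21 cases found

missed_opportunities = reported_untested - accounted_for
assert accounted_for == 21 and missed_opportunities == 94
```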
Based on interviews with providers, we identified the reason why hepatitis C tests were not ordered. We also analyzed medical records to determine how many times veterans with identified risk factors and no hepatitis C test orders returned for primary care visits. To determine actions taken by networks and medical facilities intended to improve the testing rate of veterans identified as at risk for hepatitis C, we expanded our review beyond Network 5 to include Network 2 and Network 9. We reviewed network and facility documents and conducted interviews with network quality managers and medical facility staff: primary care providers, nurses, quality managers, laboratory chiefs and supervisors, and information management staff. Our review was conducted from April 2002 through November 2003 in accordance with generally accepted government auditing standards. In addition to the contact named above, Carl S. Barden, Irene J. Barnett, Martha A. Fisher, Daniel M. Montinez, and Paul R. Reynolds made key contributions to this report. VA Health Care: Improvements Needed in Hepatitis C Disease Management Practices. GAO-03-136. Washington, D.C.: January 31, 2003. Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 2003. Veterans’ Health Care: Standards and Accountability Could Improve Hepatitis C Screening and Testing Performance. GAO-01-807T. Washington, D.C.: June 14, 2001. Veterans’ Health Care: Observations on VA’s Assessment of Hepatitis C Budgeting and Funding. GAO-01-661T. Washington, D.C.: April 25, 2001. | Hepatitis C is a chronic disease caused by a blood-borne virus that can lead to potentially fatal liver-related conditions. In 2001, GAO reported that the VA missed opportunities to test about 50 percent of veterans identified as at risk for hepatitis C. GAO was asked to (1) review VA's fiscal year 2002 performance measurement results in testing veterans at risk for hepatitis C, (2) identify factors that impede VA's efforts to test veterans for hepatitis C, and (3) identify actions taken by VA networks and medical facilities to improve the testing rate of veterans at risk for hepatitis C. GAO reviewed VA's fiscal year 2002 hepatitis C performance results and compared them against VA's national performance goals, interviewed headquarters and field officials in three networks, and conducted a case study in one network. VA's performance measurement result shows that it tested, in fiscal year 2002 or earlier, 5,232 (62 percent) of the 8,501 veterans identified as at risk for hepatitis C in VA's performance measurement sample, exceeding its fiscal year 2002 national goal of 55 percent. Thousands of veterans (about one-third of those identified as at risk for hepatitis C infection in VA's performance measurement sample) were not tested. VA's hepatitis C testing result is a cumulative measure of performance over time and does not only reflect current fiscal year performance. GAO found Network 5 (Baltimore) tested 38 percent of veterans in fiscal year 2002 as compared to Network 5's cumulative performance result of 60 percent. In its case study of Network 5, which was one of the networks to exceed VA's fiscal year 2002 performance goal, GAO identified several factors that impeded the hepatitis C testing process. These factors were tests not being ordered by the provider, ordered tests not being completed, and providers being unaware that needed tests had not been ordered or completed.
For more than two-thirds of the veterans identified as at risk but not tested for hepatitis C, the testing process failed because hepatitis C tests were not ordered, mostly due to poor communication between clinicians. For the remaining veterans, the testing process was not completed because orders had expired by the time veterans visited the laboratory or test orders were overlooked because laboratory staff had to scroll back and forth through daily lists, a cumbersome process, to identify active orders. Moreover, during subsequent primary care visits by these untested veterans, providers often did not recognize that hepatitis C tests had not been ordered or that their results had not been obtained. Consequently, undiagnosed veterans risk unknowingly transmitting the disease as well as potential complications resulting from delayed treatment. The three networks GAO looked at--5 (Baltimore), 2 (Albany), and 9 (Nashville)--have taken steps intended to improve the testing rate of veterans identified as at risk for hepatitis C. To do this, officials in two networks modified clinical reminders in the computerized medical record to alert providers when results for ordered hepatitis C tests were unavailable. Officials at two facilities developed a "look back" method to search computerized medical records to identify all at-risk veterans who had not yet been tested and identified approximately 3,500 untested veterans. The look back serves as a safety net for veterans identified as at risk for hepatitis C who have not been tested. The modified clinical reminder and look back method of searching medical records appear promising, but neither the networks nor VA has evaluated their effectiveness.
You are an expert at summarizing long articles. Proceed to summarize the following text:
PBGC was created by the Employee Retirement Income Security Act of 1974 (ERISA) to pay benefits to participants in private DB plans in the event that an employer could not. PBGC may pay benefits, up to specified limits, if a plan does not have sufficient assets itself to pay promised benefits and the sponsoring company is in financial distress. PBGC’s single-employer insurance program guarantees benefits up to $4,500 per month for age-65 retirees of plans terminating in 2009, with lower guarantees for those who retire before age 65. Currently, PBGC insurance covers 44 million participants, including retirees, in over 29,000 DB plans. PBGC pays monthly retirement benefits to more than 640,000 retirees in 3,860 pension plans that have ended, and is responsible for the current and future pensions of about 1.3 million people. ERISA also requires PBGC to encourage the continuation and maintenance of voluntary private pension plans. PBGC receives no funds from general tax revenues. Operations are financed by insurance premiums set by Congress and paid by sponsors of DB plans, recoveries from the companies formerly responsible for the plans, and investment income of assets from pension plans taken over, or “trusteed,” by PBGC. Under current law, other than statutory authority to borrow up to $100 million from the Treasury Department, no substantial source of funds is available to PBGC if it runs out of money. In the event that PBGC were to exhaust all of its holdings, benefit payments would have to be drastically cut unless Congress were to take action to provide support. The assets and liabilities that PBGC accumulates from trusteeing plans have increased rapidly over the last 6 years or so. This is largely due to the termination, typically through bankruptcies, of a number of very large, underfunded plan sponsors. In fact, 8 of the top 10 firms presenting claims against PBGC did so from 2003 to 2007. These top 10 claims alone currently account for over 60 percent of all of PBGC’s claims and are concentrated among firms representing the steel and airline industries. Overall, these industries accounted for about three-quarters of PBGC’s total claims and single-employer benefit payments in 2007. In 2003, GAO designated PBGC’s single-employer program as high-risk, meaning that the program needs urgent Congressional attention and agency action. We specifically noted PBGC’s prior-year net deficit, as well as the risk of termination among large, underfunded pension plans, as reasons for the program’s high-risk designation. As part of our monitoring of PBGC as a high-risk agency, we have highlighted additional challenges faced by the single-employer program. Among these concerns were the serious weaknesses that existed with respect to plan funding rules and that PBGC’s premium structure and guarantees needed to be re-examined to better reflect the risk posed by various plans. Additionally, the number of single-employer insured DB plans has been rapidly declining, and, among the plans still in operation, many have frozen benefits to some or all participants. Further, the prevalence of plans that are closed to new participants seems to imply that PBGC is likely to see a decline in insured participants, especially as insured participants seem increasingly likely to be retired (as opposed to active or current) workers. PBGC has remained high-risk with each subsequent report in 2005, 2007, and, most recently, 2009.
In our 2007 high-risk update we noted that major pension legislation had been enacted which addressed many of the concerns articulated in our previous reports and testimonies on PBGC’s financial condition. The Deficit Reduction Act of 2005 (DRA) was signed into law on February 8, 2006 and included provisions to raise flat-rate premiums and create a new, temporary premium for certain terminated single-employer plans. Later that year the Pension Protection Act of 2006 (PPA) was enacted; it included a number of provisions aimed at improving plan funding and PBGC finances. The provisions aimed at improving plan funding included such measures as raising the funding targets DB plans must meet, reducing the period over which sponsors can “smooth” reported plan assets and liabilities, and restricting sponsors’ ability to substitute “credit balances” for cash contributions. Reforms aimed at shoring up PBGC revenues included a termination premium for some bankrupt sponsors, and limiting PBGC’s guarantee to pay certain benefits. However, the overall impact of PPA remains unclear; PPA did not fully close potential plan funding gaps, and provided special relief to plan sponsors in troubled industries. PBGC’s net financial position improved from 2005 to 2006 because some very large plans that were previously classified as probable terminations were reclassified to a reasonably possible designation as a result of the relief granted to troubled industries such as the airlines. While PBGC’s deficit improved for fiscal year 2008, the fiscal year ended just prior to the severe market downturn, and it is likely that its net position looks different today. Since we last reported to Congress on PBGC, PBGC issued its fiscal year 2008 financials and reported that the net deficit for its insurance programs was $11.2 billion. In some ways, this was good news. PBGC’s net deficit reached a peak of $23.5 billion in 2004 largely as a result of a number of realized and probable claims that occurred during that year. However, the lower 2008 deficit may be a product of conditions that no longer exist. For example, PBGC’s net deficit is the difference between its liabilities and its assets. (See figure 1 for the difference between PBGC assets and liabilities for both insurance programs from 1990 to 2008.) As of PBGC’s September 30, 2008 financial statement—even before the severe market downturn in October—PBGC saw an investment return of -6.5 percent over the year, which contributed to diminishing its assets from the prior year by about $5.5 billion. The net deficit improved, despite the performance of its assets, because of the decrease in its liabilities. According to PBGC, the improvement was due largely to successful negotiations in bankruptcy proceedings, a favorable change in interest factors used to value PBGC’s liabilities, and the fact that PBGC saw significant reductions to its liabilities for probable terminations. PBGC has likely seen its net financial condition hurt by increased exposure due to declines in funding levels of many large plans, from the termination of underfunded plans, and by an increase in its liabilities due to a likely decrease in the interest rates used to value its liabilities. The current economic environment has likely increased the exposure PBGC faces from financially distressed sponsors with large, underfunded plans.
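As a rough worked example of that net-position arithmetic, the following uses the $11.2 billion combined deficit above together with the $63 billion in insurance-program assets cited below for the same September 30, 2008 date; the liabilities figure is implied, not separately stated in this testimony:

```python
net_deficit = 11.2  # $ billions: FY 2008 net deficit, both insurance programs
assets      = 63.0  # $ billions: assets held as of September 30, 2008

# A deficit means liabilities exceed assets:
#   net position = assets - liabilities = -net_deficit
implied_liabilities = assets + net_deficit
print(f"Implied liabilities: about ${implied_liabilities:.1f} billion")  # ~74.2
```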
The funding of many large plans has likely eroded as a result of the lowered financial health of many sponsors, thereby potentially increasing PBGC’s exposure to probable terminations, developments that the most recent estimates may not reflect. PBGC’s future claims have always been difficult to predict over the long term due to the significant volatility in plan underfunding and sponsor credit quality over time. However, the current economic environment seems to have put sponsors under particular stress. A wide range of industry sectors has likely been affected by the current economic environment, particularly the automotive sector. For example, the pension plans of Chrysler and General Motors (GM) today pose considerable financial uncertainty to PBGC. In the event that Chrysler or GM cannot continue to maintain their pension plans—such as in the case of liquidation or an asset sale—PBGC may be required to take responsibility for paying the benefits for the plans, which, as of the most current publicly available information, are underfunded by a total of about $29 billion. Although it is impossible to know what the exact claims to PBGC would be if it took over Chrysler’s and GM’s pension plans, doing so would likely strain PBGC’s resources, because the automakers’ plans represent a significant portion of the benefits it insures. Further, from an administrative standpoint, PBGC would be presented with an unprecedented number of assets to manage as well as benefit liabilities to administer. For example, GM’s and Chrysler’s plans include roughly 900,000 participants, both those receiving benefits now and those who have earned benefits payable in the future, which would increase the total number of PBGC’s current or future beneficiaries by nearly 80 percent. Even with Chrysler’s bankruptcy and concern about GM’s viability, it is not certain that PBGC would take over responsibility for either plan. For example, a number of auto parts suppliers in Chapter 11 with collectively bargained pension plans have emerged from reorganization without terminating their pension plans. While the events surrounding the automakers and their pension plans are clearly an area of concern for the PBGC, the recession has likely affected many industry sectors. Although PBGC’s past claims have been concentrated in industries like steel and airlines, there is cause for concern that future claims will come from a much broader array of industries. PBGC’s insurance programs held $63 billion in assets as of September 30, 2008, and the Corporation has stated it has sufficient liquidity to meet its obligations for a number of years. However, to the extent additional claims from vulnerable industries markedly increase PBGC’s accumulated deficit and decrease its long-run liquidity, there could be pressure for the federal government to provide PBGC financial assistance to avoid reductions in guaranteed payments to retirees or unsustainable increases in the premium burden on sponsors of ongoing plans. PBGC’s overall exposure has increased for additional reasons. The Worker, Retiree, and Employer Recovery Act of 2008 (WRERA), passed in December, grants funding relief to certain sponsors and delays the implementation of certain aspects of the PPA. WRERA makes several technical corrections to PPA and contains provisions designed to help pension plans and plan participants weather the current economic downturn.
For a number of sponsors, this legislation may mean lower plan contributions than they would otherwise have had to pay under the phase-in of PPA and may, at least temporarily, increase levels of plan underfunding. As we noted in our 2009 high-risk update on PBGC, this legislation is likely to increase PBGC’s risk exposure, perhaps significantly. Finally, PBGC’s newly adopted investment policy may expose the Corporation to additional risk. The new policy reduces the proportion of PBGC assets allocated to fixed-income investments, such as Treasury and corporate bonds; increases its proportional holdings in international equities; and introduces new asset classes, such as private equity, emerging market debt and equities, high-yield fixed income, and private real estate. While the investment policy adopted in 2008 aimed to reduce PBGC’s deficit by investing in assets with a greater expected return, in a report last summer, we found that the new allocation will likely carry more risk than acknowledged by PBGC’s analysis. Our assessment found that, although returns are indeed likely to grow with the new allocation, the risks are likely higher as well. Although it is important that the PBGC consider ways to optimize its portfolio, including higher return and diversification strategies, the agency faces unique challenges, such as PBGC’s need for access to cash in the short term to pay benefits, which could further increase the risks it faces with any investment strategy that allocates significant portions of the portfolio to volatile or illiquid assets. According to PBGC, the new allocation will be sufficiently diversified to mitigate the expected risks associated with the higher expected return. PBGC also asserted that it should involve less risk than the previous policy. The Congressional Budget Office has also pointed out such risks, saying that “the new strategy…increases the risk that PBGC will not have sufficient assets to cover retirees’ benefit payments when the economy and financial markets are weak.” PBGC has only implemented portions of the policy. PBGC told us that it has begun the process of reducing the percentage of its assets in fixed-income investments, but it has not yet begun to increase its portfolio of certain asset classes, specifically private equity and real estate. PBGC also told us that the process it follows for its current implementation of the investment policy follows industry best practices for large transactions. However, PBGC officials also told us that the intended asset allocation targets set by the current implementation of this policy could easily be derailed if PBGC is required to assume the assets of very large and severely underfunded sponsors. PBGC’s board has limited time and resources to provide policy direction and oversight. PBGC’s three-member board, established by ERISA, includes only the Secretary of Labor, as the Chair of the Board, and the Secretaries of Commerce and Treasury. We noted that the board members have designated officials and staff within their respective agencies to conduct much of the work on their behalf and relied mostly on PBGC’s management to inform these board members’ representatives of pending issues. PBGC’s board members have numerous other responsibilities in their roles as cabinet secretaries and have been unable to dedicate consistent and comprehensive attention to PBGC. Since PBGC’s inception, the board has met infrequently.
In 2003, after several high-profile pension plan terminations, PBGC’s board began meeting twice a year (see figure 2). PBGC officials told us that it is a challenge to find a time when all three cabinet secretaries are able to meet, and in several instances the board members’ representatives officially met in their place. Currently, the PBGC board has not met face-to-face in over one year—since February 2008. While the PBGC board has met more frequently since 2003, very little time is spent on addressing strategic and operational issues. According to corporate governance guidelines, boards should meet regularly and focus principally on broader issues, such as corporate philosophy and mission, broad policy, strategic management, oversight and monitoring of management, and company performance against business plans. However, our review of the board’s recorded minutes found that although some meetings devoted a portion of time to certain strategic and operational issues, such as investment policy, the financial status of PBGC’s insurance programs, and outside audit reviews, the board meetings generally only lasted about an hour. The size and composition of PBGC’s board does not meet corporate governance guidelines. According to corporate governance guidelines published by The Conference Board, corporate boards should be structured so that the composition and skill set of a board is linked to the corporation’s particular challenges and strategic vision, and should include a mix of knowledge and expertise targeted to the needs of the corporation. We did not identify any other government corporations with boards as small as PBGC’s. Government corporations’ boards averaged about 7 members, with one having as many as 15. In addition, PBGC is exposed to challenges as the board, board members’ representatives, and the director have changed with the recent presidential transition, limiting the board’s institutional knowledge of the Corporation. The revision of PBGC’s investment policy provides an example of the need for an active board to help oversee the Corporation’s challenges and strategic vision. We found that the PBGC board’s 2004 and 2006 investment policy was not fully implemented. While the board assigned responsibility to PBGC for reducing equity holdings to a range of 15 to 25 percent of total investments, by 2008 the policy goal had not been met. Although the PBGC director and staff kept the board apprised of investment performance and asset allocation, we found no indication that the board had approved the deviation from its established policy or expected PBGC to continue to meet policy objectives. While PBGC’s board revised the investment policy in February 2008, the board has not held a meeting to discuss the new policy’s implementation even though there has been a serious downturn in investment markets. In May 2009, PBGC officials told us that they have kept the new board members—the Secretary of Labor, along with officials from the Departments of Commerce and Treasury—apprised of the progress in implementing the new investment policy. In our July 2007 report on PBGC’s governance structure, we asked Congress to consider expanding PBGC’s board of directors by appointing additional members who possess knowledge and expertise useful to PBGC’s responsibilities and who can provide needed attention.
Further, dedicating staff that are independent of PBGC’s executive management and have relevant pension and financial expertise to solely support the board’s policy and oversight activities may be warranted. In response to our finding, PBGC contracted with a consulting firm to identify and review governance models and provide a background report to assist the board in its review of alternative corporate governance structures. The consulting firm’s final report describes the advantages and disadvantages of the corporate board structures and governance practices of other government corporations and select private sector companies, and concludes that there are several viable alternatives for PBGC’s governance structure and practices. Although two-thirds of PBGC’s workforce consists of contractor employees, PBGC’s strategic planning generally does not recognize contracting as a major aspect of PBGC activities (see figure 3). Since the mid-1980s, PBGC has had contracts covering a wide range of services, including the administration of terminated plans, payment of benefits, customer communication, legal assistance, document management, and information technology. As PBGC’s workload grew due to the significant number of large pension plan terminations, PBGC relied on contractors to supplement its workforce, acknowledging that it has difficulty anticipating workloads due to unpredictable economic conditions. Last summer we reported that PBGC had begun to improve some of its contracting practices by updating contracting policies and processes, upgrading the skills of Procurement Department staff, and better tracking contracting data. While we reported that PBGC had begun to implement performance-based contracting that offers the potential for better contract outcomes, PBGC officials recently told us that the new field benefit administration contracts will not be performance-based. PBGC lacks a strategic approach to its acquisition and human capital management needs. PBGC’s strategic plan does not document how the acquisition function supports the agency’s missions and goals. Further, although contracting is essential to PBGC’s mission, we found that the Procurement Department is not included in corporate-level strategic planning. Based on these findings, we recommended that PBGC revise its strategic plan to reflect the importance of contracting and to project its vision of future contract use, and ensure that PBGC’s procurement department is included in agency-wide strategic planning. (Appendix I includes selected GAO recommendations on PBGC Governance and Management.) PBGC disagreed with our recommendation to reflect the importance of contracting and incorporate its vision for future contractor use in its strategic planning documents, as it believes its recently issued strategic plan is sufficiently comprehensive. However, PBGC’s strategic plan only briefly mentions performance-based contracting, flexible staffing, and metrics for specific contracts, and therefore we believe that it does not reflect the important role contracting is playing in achieving PBGC’s mission. PBGC also needs a more strategic approach for improving human capital management. We found that PBGC’s draft strategic human capital plan does not provide detailed plans for obtaining contract support or managing workload fluctuations.
While PBGC has made progress in its human capital management approach by taking steps to improve its human capital planning and practices—such as drafting a succession management plan—the Corporation lacked a formal, comprehensive human capital strategy, articulated in a formal human capital plan that includes human capital policies, programs, and practices. PBGC is generally able to hire staff in its key occupations—such as accountants, actuaries, and attorneys—and retain them at rates similar to those of the rest of the federal government. However, PBGC has had some difficulty hiring and retaining staff for specific occupations and positions, including executives and senior financial analysts. Since our report, PBGC officials told us that they have provided a human capital plan to the Office of Personnel Management (OPM) and are awaiting OPM feedback. The need for a strategic approach to acquisition and human capital management is essential to ensure that PBGC is able to manage the administrative fluctuations of a pension insurance corporation. As noted earlier, General Motors’ and Chrysler’s plans include roughly 900,000 participants, both those receiving benefits now and those who have earned benefits payable in the future. These participants, if brought under PBGC administration, would increase PBGC’s current or future beneficiary population by roughly 80 percent. While it is uncertain whether an automaker plan would ever be assumed by PBGC, the concentration of large numbers of plan beneficiaries among just two sponsors illustrates the potential for a sudden and unprecedented administrative workload at PBGC. While PBGC has been on our High Risk list since 2003—and many of its challenges are long-term in nature—the recession and market downturn have magnified the challenges it faces. When we last reported on PBGC’s financial challenges in September, we specifically mentioned the change in investment policy as a key challenge going forward. This is still the case, but even more recent events, such as legislative changes and the plight of the automakers and other financially weak sponsors in other industries, have the potential to expose PBGC to claims of a potentially unprecedented magnitude. While many of the financial challenges are a result of long-term weaknesses that are in many ways structural, PBGC does have some degree of control over challenges it faces with respect to governance, oversight, and management. GAO has made many recommendations in these areas, but given the potentially immense financial challenges the Corporation faces, the need to act is only growing. It is unfortunate that, during a time of financial crisis, the PBGC board has not met in 15 months. However, PBGC not only needs a board that meets regularly, but also a board that can be active and commit the time to understanding the weight and urgency of the issues facing the Corporation. Ideally, a more robust board structure would be in place as soon as possible so that the board can address current challenges and anticipate new ones. The current situation has important implications for all PBGC stakeholders: plan sponsors, insured participants, insured beneficiaries, as well as the government and, ultimately, the taxpayers. PBGC should not have to take on significant, additional claims from severely underfunded pension plans before the situation is recognized. Chairman Kohl, Senator Martinez, and Members of the Committee, this concludes my prepared statement.
I would be happy to respond to any questions you may have. High Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Pension Benefit Guaranty Corporation: Improvements Needed to Address Financial and Management Challenges. GAO-08-1162T. Washington, D.C.: September 24, 2008. Pension Benefit Guaranty Corporation: Need for Improved Oversight Persists. GAO-08-1062. Washington, D.C.: September 10, 2008. Pension Benefit Guaranty Corporation: Some Steps Have Been Taken to Improve Contracting, but a More Strategic Approach Is Needed. GAO-08-871. Washington, D.C.: August 18, 2008. PBGC Assets: Implementation of New Investment Policy Will Need Stronger Board Oversight. GAO-08-667. Washington, D.C.: July 17, 2008. Pension Benefit Guaranty Corporation: A More Strategic Approach Could Improve Human Capital Management. GAO-08-624. Washington, D.C.: June 12, 2008. High Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. Pension Benefit Guaranty Corporation: Governance Structure Needs Improvements to Ensure Policy Direction and Oversight. GAO-07-808. Washington, D.C.: July 6, 2007. PBGC’s Legal Support: Improvement Needed to Eliminate Confusion and Ensure Provision of Consistent Advice. GAO-07-757R. Washington, D.C.: May 18, 2007. Private Pensions: Questions Concerning the Pension Benefit Guaranty Corporation’s Practices Regarding Single-Employer Probable Claims. GAO-05-991R. Washington, D.C.: September 9, 2005. Private Pensions: The Pension Benefit Guaranty Corporation and Long-Term Budgetary Challenges. GAO-05-772T. Washington, D.C.: June 9, 2005. Private Pensions: Recent Experiences of Large Defined Benefit Plans Illustrate Weaknesses in Funding Rules. GAO-05-294. Washington, D.C.: May 31, 2005. Pension Benefit Guaranty Corporation: Single-Employer Pension Insurance Program Faces Significant Long-Term Risks. GAO-04-90. Washington, D.C.: October 29, 2003. Pension Benefit Guaranty Corporation Single-Employer Insurance Program: Long-Term Vulnerabilities Warrant ‘High Risk’ Designation. GAO-03-1050SP. Washington, D.C.: July 23, 2003. Pension Benefit Guaranty Corporation: Statutory Limitation on Administrative Expenses Does Not Provide Meaningful Control. GAO-03-301. Washington, D.C.: February 28, 2003. GAO Forum on Governance and Accountability: Challenges to Restore Public Confidence in U.S. Corporate Governance and Accountability Systems. GAO-03-419SP. Washington, D.C.: January 2003. For further questions about this statement, please contact Barbara D. Bovbjerg at (202) 512-7215. Individuals making key contributions to this statement include Blake Ainsworth, Charles Ford, Jennifer Gregory, Craig Winslow, and Susannah Compton. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | The Pension Benefit Guaranty Corporation (PBGC) insures the retirement future of nearly 44 million people in over 29,000 private-sector defined benefit pension plans. In July 2003, GAO designated PBGC's single-employer pension insurance program--its largest insurance program--as "high risk," including it on GAO's list of major programs that need urgent Congressional attention and agency action.
The program remains on the list today with a financial deficit of just over $11 billion, as of September 2008. The committee asked GAO to discuss our recent work on PBGC. Specifically, this testimony addresses two issues: (1) PBGC's financial vulnerabilities, and (2) the governance, oversight, and management challenges PBGC faces. To address these objectives, we are relying on our prior work assessing PBGC's long-term financial challenges, and several reports that we have published over the last two years on PBGC governance and management. GAO has made a number of recommendations and identified matters for Congressional consideration in these reports, and PBGC is implementing some of these recommendations. No new recommendations are being made as part of this testimony. Financial and economic conditions have deteriorated since we last reported on PBGC's finances. While PBGC's deficit improved for fiscal year 2008, the fiscal year ended just prior to the severe market downturn, and this lower deficit may be a product of conditions that no longer exist. As a result, it is likely that PBGC's net position looks different today. Other recent events have also added to PBGC's financial challenges. These events include: recent legislation that grants funding relief to certain sponsors, developments with PBGC's investment policy, and a concern that a wide array of industry sectors--including the automotive sector--are under financial distress and may expose PBGC to future claims. As a result, the potential for automaker pension plan terminations could dramatically increase not only PBGC's deficit, but also its administrative workload. With mounting financial challenges and the potential for PBGC's workload to dramatically increase, our concerns about PBGC governance and strategic management have become acute, and improvements are needed, now more than ever. PBGC's board has limited time and resources to provide policy direction and oversight. The three-member board includes the Secretary of Labor, as the Chair of the Board, and the Secretaries of Commerce and Treasury. These board members have numerous other responsibilities and are unable to dedicate consistent and comprehensive attention to PBGC. With only 3 members, PBGC's board may not be large enough to include the knowledge needed to direct and oversee PBGC. In fact, the new board members have yet to meet, and there has not been a face-to-face board meeting in the last 15 months. In addition, without an appointed director, PBGC's governance structure is further exposed to challenges. Further, PBGC continues to lack a fully-adopted strategic approach to its acquisition and human capital management needs. Although contract employees comprise two-thirds of PBGC's workforce, PBGC's strategic planning generally does not recognize contracting as a major aspect of PBGC activities. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
US-VISIT is a governmentwide program intended to enhance the security of U.S. citizens and visitors, facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors. Its scope includes the pre-entry, entry, status, and exit of hundreds of millions of foreign national travelers who enter and leave the United States at over 300 air, sea, and land POEs, and the provision of new analytical capabilities across the overall process. To achieve its goals, US-VISIT uses biometric information (digital fingerscans and photographs) to verify identity. In many cases, the US-VISIT process begins overseas at U.S. consular offices, which collect biometric information from applicants for visas and check this information against a database of known criminals and suspected terrorists. When a visitor arrives at a POE, the biometric information is used to verify that the visitor is the person who was issued the visa. In addition, at certain sites, visitors are required to confirm their departure by undergoing US-VISIT exit procedures—that is, having their visas or passports scanned and undergoing fingerscanning. The exit confirmation is added to the visitor’s travel records to demonstrate compliance with the terms of admission to the United States. (App. III provides a detailed description of the pre-entry, entry, status, exit, and analysis processes.) US-VISIT is intended to support these goals by collecting, maintaining, and sharing information on certain foreign nationals who enter and exit the United States; identifying foreign nationals who (1) have overstayed or violated the terms of their admission; (2) may be eligible to receive, extend, or adjust their immigration status; or (3) should be apprehended or detained by law enforcement officials; detecting fraudulent travel documents, verifying traveler identity, and determining traveler admissibility through the use of biometrics; and facilitating information sharing and coordination within the immigration and border management community. In July 2003, DHS established a program office with responsibility for managing the acquisition, deployment, operation, and sustainment of the US-VISIT system and its associated supporting people (e.g., Customs and Border Protection (CBP) officers), processes (e.g., entry/exit policies and procedures), and facilities (e.g., inspection booths and lanes), in coordination with its stakeholders (CBP and the Department of State). As of October 2005, about $1.4 billion has been appropriated for the program, and, according to program officials, about $962 million has been obligated. DHS plans to deliver US-VISIT capability in four increments, with Increments 1 through 3 being interim, or temporary, solutions that fulfill legislative mandates to deploy an entry/exit system, and Increment 4 being the implementation of a long-term vision that is to incorporate improved business processes, new technology, and information sharing to create an integrated border management system for the future. In Increments 1 through 3, the program is building interfaces among existing (“legacy”) systems; enhancing the capabilities of these systems; and deploying these capabilities to air, sea, and land POEs. These increments are to be largely acquired and implemented through existing system contracts and task orders. In May 2004, DHS awarded an indefinite-delivery/indefinite-quantity prime contract to Accenture and its partners.
According to the contract, the prime contractor will help support the integration and consolidation of processes, functionality, and data, and it will develop a strategy to build on the technology and capabilities already available to produce the strategic solution, while also assisting the program office in leveraging existing systems and contractors in deploying the interim solutions. Increment 1 concentrates on establishing capabilities at air and sea POEs. It is divided into two parts—1 and 1B. Increment 1 (air and sea entry) includes the electronic capture and matching of biographic and biometric information (two digital index fingerscans and a digital photograph) for selected foreign nationals, including those from visa waiver countries. Increment 1 was deployed on January 5, 2004, for individuals requiring a nonimmigrant visa to enter the United States, through the modification of pre-existing systems. These modifications accommodated the collection and maintenance of additional data fields and established interfaces required to share data among DHS systems in support of entry processing at 115 airports and 14 seaports. Increment 1B (air and sea exit) involves the testing of exit devices to collect biometric exit data for select foreign nationals at 11 airports and seaports. Three exit alternatives were pilot tested: Kiosk—A self-service device (which includes a touch-screen interface, document scanner, finger scanner, digital camera, and receipt printer) that captures a digital photograph and fingerprint and prints out an encoded receipt. Mobile device—A hand-held device that is operated by a workstation attendant; it includes a document scanner, finger scanner, digital camera, and receipt printer and is used to capture a digital photograph and fingerprint. Validator—A hand-held device that is used to capture a digital photograph and fingerprint, which are then matched to the photograph and fingerprint captured via the kiosk and encoded in the receipt. Increment 2 focuses primarily on extending US-VISIT to land POEs. It is divided into three parts—2A, 2B, and 2C. Increment 2A (air, sea, and land) includes the capability to biometrically compare and authenticate valid machine-readable visas and other travel and entry documents issued by State and DHS to foreign nationals at all POEs. Increment 2A was deployed on October 23, 2005, according to program officials. It also includes the deployment by October 26, 2006, of technology to read biometrically enabled passports from visa waiver countries. Increment 2B (land entry) redesigns the Increment 1 entry solution and expands it to the 50 busiest land POEs. The process for issuing Form I-94 was redesigned to enable the electronic capture of biographic, biometric (unless the traveler is exempt), and related travel documentation for arriving travelers. This increment was deployed to the busiest 50 U.S. land border POEs as of December 29, 2004. Before Increment 2B, all information on the Form I-94s was handwritten. The redesigned systems electronically capture the biographic data included in the travel document. In some cases, the form is completed by CBP officers, who enter the data electronically and then print the form. Increment 2C is to provide the capability to automatically, passively, and remotely record the entry and exit of covered individuals using radio frequency (RF) technology tags at primary inspection and exit lanes. 
An RF tag that includes a unique ID number is to be embedded in each Form I-94, thus associating a unique number with a record in the US-VISIT system for the person holding that Form I-94. In August 2005, the program office deployed the technology to five border crossings (three POEs) to verify the feasibility of using passive RF technology to record traveler entries and exits via a unique ID number embedded in the CBP Form I-94. The results of this demonstration are to be reported in February 2006. Increment 3 extended Increment 2B (land entry) capabilities to 104 land POEs; this increment was essentially completed as of December 19, 2005. Increment 4 is the strategic US-VISIT program capability, which program officials stated will likely consist of a further series of incremental releases or mission capability enhancements that will support business outcomes. The program reports that it has worked with its prime contractor and partners to develop this overall vision for the immigration and border management enterprise. Increments 1 through 3 include the interfacing and integration of existing systems and, with Increment 2C, the creation of a new system, the Automated Identification Management System (AIDMS). The three main existing systems are as follows: The Arrival Departure Information System (ADIS) stores noncitizen traveler arrival and departure data, including manifest data received from air and sea carriers; arrival data captured by CBP officers at air and sea POEs; Form I-94 issuance data captured by CBP officers at Increment 2B locations; departure information captured at US-VISIT biometric departure pilot (air and sea) locations; pedestrian arrival information and pedestrian and vehicle departure information captured at Increment 2C POE locations; and status update information provided by the Student and Exchange Visitor Information System (SEVIS) and the Computer Linked Application Information Management System (CLAIMS 3) (described below). ADIS provides record matching, query, and reporting functions. The passenger processing component of the Treasury Enforcement Communications System (TECS) includes two systems: the Advance Passenger Information System (APIS), a system that captures arrival and departure manifest information provided by air and sea carriers, and the Interagency Border Inspection System, a system that maintains lookout data and interfaces with other agencies’ databases. CBP officers use these data as part of the admission process. The results of the admission decision are recorded in TECS and ADIS. The Automated Biometric Identification System (IDENT) collects and stores biometric data on foreign visitors. US-VISIT also exchanges biographic information with other DHS systems, including SEVIS and CLAIMS 3. These two systems contain information on foreign students and foreign nationals who request benefits, such as a change of status or extension of stay. Some of the systems previously described, such as IDENT and the new AIDMS, are managed by the program office, while others are managed by other organizational entities within DHS. For example, TECS is managed by CBP, SEVIS is managed by Immigration and Customs Enforcement, CLAIMS 3 is under United States Citizenship and Immigration Services, and ADIS is jointly managed by CBP and US-VISIT. US-VISIT also interfaces with other, non-DHS systems for relevant purposes, including watch list updates and checks to determine whether a visa applicant has previously applied for a visa or currently has a valid U.S. visa.
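As a rough sketch of the Increment 2C record-keeping concept described above (a tag's unique ID number linking a Form I-94 record to passively captured entry and exit events), the following Python fragment uses purely illustrative names; it is not the actual AIDMS design:

```python
import datetime

# Illustrative stand-ins only -- not the real AIDMS schema or interfaces.
i94_records = {}   # RF tag unique ID -> Form I-94 record
crossing_log = []  # every tag read: (tag_id, timestamp, direction, lane)

def issue_form_i94(tag_id: str, traveler: str) -> None:
    """Associate the RF tag's unique ID number with a Form I-94 record."""
    i94_records[tag_id] = {"traveler": traveler, "crossings": []}

def record_crossing(tag_id: str, direction: str, lane: str) -> None:
    """Passively log an entry or exit whenever the tag is read at a lane."""
    event = (datetime.datetime.now(datetime.timezone.utc), direction, lane)
    crossing_log.append((tag_id,) + event)
    record = i94_records.get(tag_id)
    if record is not None:  # unknown tags are still kept in the raw log
        record["crossings"].append(event)

issue_form_i94("TAG-0001", "Example Traveler")
record_crossing("TAG-0001", "entry", "primary-inspection-lane-3")
record_crossing("TAG-0001", "exit", "exit-lane-1")
```

The design point being demonstrated is that the lane reader never needs the traveler's identity; it records only the tag's unique ID, and the association back to the person is made through the Form I-94 record.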
In particular, US-VISIT receives biographic and biometric information from State’s Consular Consolidated Database as part of the visa application process, and returns fingerscan information and watch list changes. The US-VISIT program office structure includes nine component offices. Each of the program offices includes a director and subordinate organizational units, as established by the director. The responsibilities for each office are stated below. Figure 1 shows the program office structure, including its nine offices. The roles and responsibilities for each of the nine offices include the following: Chief Strategist is responsible for developing and maintaining the strategic vision, strategic documentation, transition plan, and business case. Budget and Financial Management is responsible for establishing the program’s costs estimates; analysis; and expenditure management policies, processes, and procedures that are required to implement and support the program by ensuring proper fiscal planning and execution of the budget and expenditures. Mission Operations Management is responsible for developing business and operational requirements based on strategic direction provided by the Office of the Chief Strategist. Outreach Management is responsible for enhancing awareness of US-VISIT requirements among foreign nationals, key domestic audiences, and internal stakeholders by coordinating outreach to media, third parties, key influencers, Members of Congress, and the traveling public. Information Technology Management is responsible for developing technical requirements based on strategic direction provided by the Office of the Chief Strategist and business requirements developed by the Office of Mission Operations Management. Implementation Management is responsible for developing accurate, measurable schedules and cost estimates for the delivery of mission systems and capabilities. Acquisition and Program Management is responsible for establishing and managing the execution of program acquisition and management policies, plans, processes, and procedures. Administration and Training is responsible for developing and administering a human capital plan that includes recruiting, hiring, training, and retaining a diverse workforce with the competencies necessary to accomplish the mission. Facilities and Engineering Management is responsible for establishing facilities and environmental policies, procedures, processes, and guidance required to implement and support the program office. In response to legislative mandate, we have issued four reports on DHS’s annual expenditure plans for US-VISIT. Our reports have, among other things, assessed whether the plans satisfied the legislative conditions and provided observations on the plans and DHS’s program management. As a result of our assessments, we made 24 recommendations aimed at improving both plans and program management, all of which DHS has agreed to implement. Of these 24 recommendations, 18 address risks stemming from program management. The current status of DHS’s implementation of our 18 recommendations on program risks is mixed, but progress in critical areas has been slow. For example, over 2 years have passed, and the program office has yet to develop a security plan consistent with federal guidance or to economically justify its investment in system increments. According to the Program Director, the pace of progress is attributable to competing demands on time and resources. DHS agreed to implement all 18 recommendations. 
Of these 18, DHS has completely implemented 2, has partially implemented 11, and is in the process of implementing another 5. Of the 11 that are partially implemented, 7 are about 2 years old, and 4 are about 10 to 19 months old. Of the 5 that are in progress, 3 are about 10 months old. These 18 recommendations are aimed at strengthening the program’s management effectiveness. The longer that the program takes to implement the recommendations, the greater the risk that the program will not meet its goals on time and within budget. Figure 2 provides an overview of the extent to which each recommendation has been implemented. The figure is followed by sections providing details on each recommendation and our assessment of its implementation status. In June 2003, we reported that the Immigration and Naturalization Service had not developed a security plan and performed a privacy impact assessment for the entry exit program (as US-VISIT was then known). A security plan and privacy impact assessment are important to understanding system requirements and ensuring that the proper safeguards are in place to protect system data and resources. System acquisition best practices and federal guidance advocate understanding and defining security and privacy requirements both early and continuously in a system’s life cycle, and effectively planning for their satisfaction. Accordingly, we recommended that DHS do the following: Develop and begin implementing a system security plan, and perform a privacy impact assessment and use the results of the analysis in near-term and subsequent system acquisition decision making. Since we made the system security plan recommendation about 2½ years ago, its implementation has been slow. For example, we reported in September 2003 and again in May 2004 that the program office had not developed a security plan. In February 2005, we reported that the program office had developed a security plan, dated September 2004, and that this plan was generally consistent with federal guidance. That is, the plan provided an overview of system security requirements, described the controls in place or planned for meeting those requirements, referred to the applicable documents that prescribe the roles and responsibilities for managing the US-VISIT component systems, and addressed security awareness and training. However, the program office had not conducted a risk assessment or included in the plan when an assessment would be completed. According to guidance from the Office of Management and Budget (OMB), the security plan should describe the methodology that is used to identify system threats and vulnerabilities and to assess risks, and it should include the date the risk assessment was completed. According to program officials, they completed a programwide risk assessment in December 2005, but have yet to provide a copy of the assessment to us. Therefore, we cannot confirm that the assessment has been done, and done properly. The absence of a risk assessment and a security plan that reflects this assessment is a significant program weakness. Risk assessments are critical to establishing effective security controls because they provide the basis for establishing appropriate policies and selecting cost-effective controls to implement these policies. Without such an assessment, US-VISIT does not have adequate assurance that it knows the risks associated with the program and thus whether it has implemented effective controls to address them. 
Notwithstanding these limitations, the program office has begun to implement aspects of its September 2004 security plan. For example, the Information Systems Security Manager told us that a security awareness program is established and key personnel have attended security training. Since June 2003, US-VISIT has also developed and periodically updated a privacy impact assessment. An initial impact assessment was issued in January 2004, and a revised assessment was issued in September 2004. A more recent assessment, dated July 2005, reflects changes related to Increments 1B and 2C. Each of these assessments is generally consistent with OMB guidance. That is, each addressed most OMB requirements, including the impact that the system will have on individual privacy, the privacy consequences of collecting the information, and alternatives considered to collect and handle information. The most recent impact assessment, for example, states that three alternatives were considered for Increment 1B (the kiosk, the mobile device, and the validator, a combination of the two) and discusses proposals to mitigate the privacy risks of all three, such as limiting the duration of data retention on the exit devices and using encryption. However, OMB guidance also requires that privacy impact assessments developed for systems under development address privacy in relevant system documentation, including statements of need, functional requirements documents, and cost-benefit analyses. As we reported about previous privacy impact assessments, privacy is only partially addressed in system documentation. For example, the Increment 1B cost-benefit analysis assesses the privacy risk associated with each exit alternative, and the Increment 2C business requirements state that all solutions are to be compliant with privacy laws and regulations and adhere to US-VISIT privacy policy. However, we did not find privacy addressed in the Increment 1B business requirements or the Increment 2C functional requirements. Program officials, including the US-VISIT Privacy Officer, acknowledged that privacy is not included in the system documentation, but stated that privacy is considered in developing the documentation and that the privacy office reviews key system documentation at relevant times during the system development life cycle. Nevertheless, we found no evidence of privacy being addressed in the documentation itself. Until the program performs a risk assessment and fully implements a security plan that reflects this assessment, it cannot adequately ensure that US-VISIT is cost-effectively safeguarding assets and data. Moreover, without reflecting privacy in system documentation, it cannot adequately ensure that privacy needs are being fully addressed. We reported in September 2003 that the program office had not defined key acquisition management controls to support the acquisition of US-VISIT, and that its efforts to acquire, deploy, operate, and maintain system capabilities were therefore at risk of not satisfying system requirements and of not meeting benefit expectations on time and within budget.
The Capability Maturity Model–Integration® (CMMI) developed by Carnegie Mellon University's Software Engineering Institute (SEI) explicitly defines process management controls that are recognized hallmarks of successful organizations and that, if implemented effectively, can greatly increase the chances of successfully acquiring software-intensive systems. SEI's CMMI model uses capability levels to assess process maturity. Because establishing basic acquisition process capabilities can, according to SEI, take on average about 19 months, we recognized the importance of starting early to build effective acquisition management capabilities by recommending that DHS do the following: Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements management, program management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with SEI guidance.

The program office has recently taken foundational steps to establish key acquisition management controls. For example, it has developed a process improvement plan, dated May 16, 2005 (about 20 months after our recommendation), to define and implement these controls. As part of its improvement program, the program office is implementing a governance structure for overseeing improvement activities, consisting of three groups:

The Management Steering Group is to provide policy and procedural guidance and to oversee the entire improvement program. The steering group is chaired by the US-VISIT Director, with the Deputy Director and the functional office directors serving as core members.

The Enterprise Process Group is to provide planning, management, and operational guidance in day-to-day process improvement activities. The group is chaired by the process improvement leader and is composed of individuals from each functional office.

Process Action Teams are to provide specific process documentation and to provide implementation support and training services. These teams are to be active as long as a particular process improvement initiative is under way. To date, the program office has chartered five process teams: configuration management, cost analysis, process development, communications, and policy.

In addition, the program office has recently completed a self-assessment of its acquisition process maturity, and it plans to use the assessment results to establish a baseline for improvement. According to program officials, the assessment included 13 key process areas that are generally consistent with the process areas cited in our recommendation. The program has ranked these 13 process areas according to their priority and, for initial implementation, plans to focus on the following 6:

Configuration management: establishing and maintaining the integrity of the products throughout their life cycle.

Process and product quality assurance: taking actions to provide management with objective insight into the quality of products and processes.

Project monitoring and control: tracking the project's progress so that appropriate corrective actions can be taken when performance deviates significantly from plans.

Project planning: establishing and maintaining plans for work activities.

Requirements management: managing the requirements and ensuring a common understanding of the requirements between the customer and the product developers.

Risk management: identifying potential problems before they occur so that they can be mitigated to minimize any adverse impact.

The improvement plan is currently being updated to reflect the results of the baseline assessment and to include a detailed work breakdown structure, process prioritization, and resource estimates. According to the Director, Acquisition and Program Management Office (APMO), the goal is to conduct a formal SEI appraisal to assess the capability level of some or all of the six processes by October 2006. Notwithstanding these recent steps to begin addressing our recommendation, much work remains to fully implement key acquisition management controls. Moreover, effectively implementing these controls takes considerable time, so it is important that the improvement efforts stay on track. Until these processes are effectively implemented, US-VISIT will be at risk of not delivering promised capabilities on time and within budget.

In September 2003, we reported that the program had not assessed the costs and benefits of Increment 1. Such an assessment is essential because the decision to invest in any capability should be based on reliable analyses of return on investment. Further, according to OMB guidance, individual increments of major systems are to be individually supported by analyses of benefits, cost, and risk. Without reliable analyses, an organization cannot adequately know whether a proposed investment is a prudent and justified use of limited resources. Accordingly, we recommended that DHS do the following: Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks and disclose to the Congress planned actions.

As we reported in September 2003 and again in February 2005, the program office did not justify its planned investments in Increments 1 and 2B, respectively, based on expected return on investment. Since then, the program has developed a cost-benefit analysis for Increment 1B. OMB has issued guidance concerning the analysis needed to justify investments. According to this guidance, such analyses should meet certain criteria to be considered reasonable, including, among other things, comparing alternatives on the basis of net present value and conducting uncertainty analyses of costs and benefits. DHS has also issued guidance on such economic analyses that is consistent with OMB's. The latest cost-benefit analysis for Increment 1B (dated June 23, 2005) identifies potential costs and benefits for three exit solutions at air and sea POEs and provides a general rationale for the viability of the three alternatives described. This latest analysis meets four of eight OMB economic analysis criteria. However, it does not include a complete uncertainty analysis (i.e., both a sensitivity analysis and a Monte Carlo simulation) for the three exit alternatives evaluated: the analysis includes a Monte Carlo simulation, but it does not include a sensitivity analysis for the three alternatives. An analysis of uncertainty is important because it provides decision makers with a perspective on the potential variability of the cost and benefit estimates should the facts, circumstances, and assumptions change.
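To make concrete what a complete uncertainty analysis involves, the following is a minimal sketch, in Python, of the two analyses OMB guidance calls for: a Monte Carlo simulation of net benefits and a one-variable-at-a-time sensitivity analysis. The cost and benefit figures and their assumed spreads are invented for illustration and do not come from the Increment 1B analysis.

    # Illustrative sketch of the two uncertainty analyses OMB guidance
    # expects. All figures are hypothetical.
    import random

    def net_benefit(cost: float, benefit: float) -> float:
        return benefit - cost

    BASE_COST, BASE_BENEFIT = 100.0, 130.0  # hypothetical, in millions

    # Monte Carlo: vary cost and benefit together and summarize the spread.
    random.seed(1)
    draws = sorted(
        net_benefit(random.gauss(BASE_COST, 15), random.gauss(BASE_BENEFIT, 20))
        for _ in range(10_000))
    print(f"median net benefit: {draws[len(draws) // 2]:.1f}")
    print(f"probability of negative net benefit: "
          f"{sum(d < 0 for d in draws) / len(draws):.0%}")

    # Sensitivity: vary one assumption at a time to show which assumption
    # the estimate is most sensitive to; this is the analysis the
    # Increment 1B cost-benefit analysis lacked.
    for swing in (-0.2, 0.2):
        print(f"cost {swing:+.0%}: net benefit = "
              f"{net_benefit(BASE_COST * (1 + swing), BASE_BENEFIT):.1f}")
        print(f"benefit {swing:+.0%}: net benefit = "
              f"{net_benefit(BASE_COST, BASE_BENEFIT * (1 + swing)):.1f}")

The two analyses answer different questions: the simulation shows how likely the investment is to pay off when all assumptions vary at once, while the sensitivity analysis shows which single assumption, if wrong, would most change the conclusion.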
Table 1 summarizes our analysis of the extent to which US-VISIT's June 23, 2005, cost-benefit analysis for Increment 1B satisfies eight OMB criteria. It is important that the program adhere to relevant guidance in developing its incremental cost-benefit analyses. If it does not, the reliability of the analyses is diminished, and an adequate basis for prudent investment decision making does not exist. Moreover, if the mission value of a proposed investment is not commensurate with its costs, it is vital that this information be fully disclosed to DHS and congressional decision makers. The underlying intent of our recommendation is that this information be available to inform such decisions.

In September 2003, we reported that key aspects of the larger homeland security environment in which US-VISIT would need to operate had not been defined. For example, we stated that certain policy and standards decisions had not been made (e.g., whether official travel documents will be required for all persons who enter and exit the country, including U.S. and Canadian citizens, and how many fingerprints are to be collected). In the absence of this operational context, program officials were making assumptions and decisions that, if they proved inconsistent with subsequent policy or standards decisions, would require US-VISIT rework. To minimize the impact of such changes, we recommended that DHS do the following: Clarify the operational context in which US-VISIT is to operate.

After about 27 months, defining this operational context remains a work in progress. According to the Chief Strategist, an immigration and border management strategic plan was drafted in March 2005 that shows how US-VISIT is aligned with DHS's organizational mission and defines an overall vision for immigration and border management. This official stated that the vision provides for an immigration and border management enterprise that unifies multiple internal departmental and other external stakeholders with common objectives, strategies, processes, and infrastructures. Since the plan was drafted, DHS has reported that other relevant initiatives have been undertaken, such as the Security and Prosperity Partnership of North America and the Secure Border Initiative. The Security and Prosperity Partnership is to, among other things, establish a common approach to securing the countries of North America (the United States, Canada, and Mexico) by, for example, implementing a border facilitation strategy to build capacity and improve the legitimate flow of people and cargo at our shared borders. The Secure Border Initiative is to implement a comprehensive approach to securing our borders and reducing illegal immigration. According to the Chief Strategist, while portions of the strategic plan are being incorporated into these initiatives, the initiatives and their relationship with US-VISIT are still being defined. We have yet to receive the US-VISIT strategic plan because, according to program officials, it has not yet been approved by DHS management. Until US-VISIT's operational context is fully defined, DHS runs an increased risk of defining, establishing, and implementing a program that duplicates other programs and is not interoperable with them; any such duplication or incompatibility would in turn require rework. While this issue was significant 27 months ago, when we made the recommendation, it is even more significant now.

We reported in September 2003 that the program had not fully staffed its program office.
Our prior experience with major acquisitions like US-VISIT shows that to be successful, they need, among other things, adequate resources. Accordingly, we recommended that DHS do the following: Ensure that human capital and financial resources are provided to establish a fully functional and effective program office.

About 2 years later, US-VISIT had filled 102 of its 115 planned government positions and all of its planned 117 contractor positions. Of the remaining 13 government positions, candidates had been selected for 5 (pending completion of security clearances), and recruitment was under way to fill the other 8 vacancies. According to the Office of Administration and Training Manager, funding is available to complete the hiring of all 115 government employees. Notwithstanding this progress, in February 2005, US-VISIT completed a workforce analysis and requested additional positions based on the results. According to program officials, a revised analysis was submitted in the summer of 2005, but the request has not yet been approved. Figure 3 shows the program office organization structure and functions and how many of the 115 needed positions have been filled. Securing necessary resources will be a continuing challenge and an essential ingredient in the program's ability to acquire, deploy, operate, and maintain system capabilities on time and within budget.

We reported in September 2003 that the program had not defined specific roles and responsibilities for its staff. Our prior experience and leading practices show that for major acquisitions like US-VISIT to be successful, program staff need, among other things, to understand what they are to do, how they relate to each other, and how they fit in their organization. Accordingly, we recommended that DHS do the following: Define program office positions, roles, and responsibilities.

The program office has developed charters for its nine component offices that include roles and responsibilities for each. For example, the Acquisition and Program Management Office is responsible, among other things, for establishing acquisition and program management policies; coordinating development of configuration management plans and project schedules, including the integrated milestone schedule; and developing policies and procedures for guidance and oversight of systems development and implementation activities. The program has also defined a set of core competencies (knowledge, skills, and abilities) for each position. For example, it has defined critical competencies for program and management analysts that include, among others, flexibility, interpersonal skills, organizational awareness, oral communication, problem solving, and teamwork. These efforts to define positions, roles, and responsibilities should help in managing the program effectively.

As previously stated, we reported in September 2003 that US-VISIT had not fully staffed its program office or defined roles and responsibilities for its program staff. We observed that prior research and evaluations of organizations showed that effective human capital management can help agencies establish and maintain the workforce they need to accomplish their missions. Accordingly, we recommended that DHS do the following: Develop and implement a human capital strategy for the program office that provides for staffing positions with individuals who have the appropriate knowledge, skills, and abilities.
In February 2005, we reported that the program office, in conjunction with the Office of Personnel Management (OPM), developed a draft human capital plan that employed widely accepted human capital planning tools and principles. The draft plan included, for example, an action plan that identified activities, proposed completion dates, and the office (OPM or the program office) responsible for the action. We also reported that the program office had completed some of the activities, such as designating a liaison responsible for ensuring alignment between departmental and program human capital policies. Since then, the program office has finalized the human capital plan and completed more activities. For example, program officials told us that they have analyzed the program office’s workforce to determine diversity trends, retirement and attrition rates, and mission-critical and leadership competency gaps; updated the program’s core competency requirements to ensure alignment between the program’s human capital and business needs; developed an orientation program for new employees; and administered competency assessments to incoming employees. Program officials also told us that they have plans to complete other activities, such as developing a staffing forecast to inform succession planning; analyzing workforce data to maintain strategic focus on preserving the skills, knowledge, and leadership abilities required for the US-VISIT program’s success; and developing organizational leadership competency models for the program’s senior executive, managerial, and supervisory levels. In addition, the officials said that several activities in the plan have not been completed, such as assessing the extent of any current employees’ competency gaps and developing a competency-based listing of training courses. These officials said that the reason these activities have not been completed is that they are related to the department’s new human capital initiative, MAXHR, which is to provide greater flexibility and accountability in the way employees are paid, developed, evaluated, afforded due process, and represented by labor organizations. MAXHR is to include the development of departmentwide competencies. Because of this, the officials told us that it could potentially impact the program’s ongoing competency-related activities. As a result, these officials said that they are coordinating these activities closely with the department as it develops and implements this new initiative, which is currently being reviewed by the DHS Deputy Secretary for approval. Until US-VISIT fully implements a comprehensive human capital strategy, it will continue to risk not having staff with the right skills and abilities to successfully execute the program. We reported in September 2003 that the operational performance of initial system increments was largely dependent on the performance of existing systems that were to be interfaced to create these increments. For example, we said that the performance of an increment will be constrained by the availability and downtime of the existing systems that it includes. Accordingly, we recommended that DHS do the following: Define performance standards for each increment that are measurable and reflect the limitations imposed by relying on existing systems. 
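To illustrate why such standards must reflect existing systems' limitations, consider a minimal sketch, in Python, of how availability aggregates when an increment depends on several existing systems all being up at once. The component figures are hypothetical, not measurements of any US-VISIT system.

    # Sketch of why an increment's availability target must reflect the
    # systems it depends on: if every dependency must be up for the
    # increment to work, availabilities multiply. Figures are hypothetical.

    def aggregate_availability(component_availabilities):
        product = 1.0
        for a in component_availabilities:
            product *= a
        return product

    # Four hypothetical existing systems, each available 99% of the time.
    existing = [0.99, 0.99, 0.99, 0.99]
    ceiling = aggregate_availability(existing)
    print(f"best achievable aggregate availability: {ceiling:.3%}")  # ~96.1%

    # A 97.5 percent target would then be unattainable without improving
    # the underlying systems, which is why an increment's standards should
    # be derived from measured component performance, not set independently.
    print("97.5% target attainable:", ceiling >= 0.975)

The arithmetic is the point: a target set above the product of the component availabilities is not measurable against anything the increment can actually deliver.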
In February 2005 (17 months later), we reported that several technical performance standards for Increments 1 and 2B had been defined, but that it was not clear that these standards reflected the limitations imposed by the reliance on existing systems. Since then, for the Increment 2C Proof of Concept (Phase 1), the program office has defined certain other performance standards. For example, the functional requirements document for Increment 2C (Phase 1) defines several technical performance standards, including reliability, recoverability, and availability, and states for each that the standard is largely dependent on those of Increment 2B. More specifically, the document states that Phase 1 system availability is largely dependent upon the individual and collective availability of the current systems, and that the Increment 2C components shall have an aggregated availability greater than or equal to 97.5 percent. However, the document does not contain sufficient information to determine whether these performance standards actually reflect the limitations imposed by reliance on existing systems. To further develop performance standards, the program office has prepared a Performance Engineering Plan, dated March 31, 2005, that links US-VISIT performance engineering activities to its System Development Life Cycle. Further, the plan (1) provides a framework to be used to align business, application, and infrastructure performance goals and measures; (2) describes an approach to translate business goals into operational measures, and then into quantitative metrics; and (3) identifies system performance measurement areas (effectiveness, efficiency, reliability, and availability). According to program officials, they intend to establish a group to develop action plans for implementing the engineering plan, but they had not established a time frame for doing so. Without performance standards that reflect the limitations of the existing systems upon which US-VISIT relies, the program lacks the ability to identify and effectively address performance shortfalls.

In September 2003, we reported that US-VISIT was a risky undertaking because of several factors inherent to the program, such as its large scope and complexity, as well as various program management weaknesses. We concluded that these risks, if not effectively managed, would likely cause program cost, schedule, and performance problems. Risk management is a continuous, forward-looking process intended either to prevent such problems from occurring or to minimize their impact if they occur, by proactively identifying risks, implementing risk mitigation strategies, and measuring and disclosing progress in doing so. Because of the importance of effectively managing program risks, we recommended that DHS do the following: Develop and implement a risk management plan and ensure that all high risks and their status are reported regularly to the executive body.

About 2 years later, the program office has developed and begun implementing a risk management plan. The plan, which was approved in September 2005, includes, among other things, a process for identifying, analyzing, handling, and monitoring risk. It also defines the governance structure to be used in overseeing and managing the process. The program also maintains a risk database, which includes, among other things, a description of each risk, its priority (e.g., high, medium, or low), and its mitigation strategy.
According to program officials, the database is currently available to program management and staff. The program has also begun implementing its risk management plan. For example, it has established a Risk Review Board, a Risk Review Council, and Risk Owners to govern its risk activities, with the following roles and responsibilities:

The Risk Review Board directs all risk governance within the program and provides the mechanism to escalate or transfer the consideration of risks to program governing boards and to organizations external to the program.

The Risk Review Council oversees and manages program-related risks that are significant, controversial, or cross-project, or that may require escalation to the Risk Review Board.

Risk Owners analyze, handle, and monitor risks.

However, full implementation of the risk management plan has yet to occur. As part of its CMMI process maturity baseline self-assessment (previously discussed), the program office found that the risk management process detailed in its plan was not being consistently applied across the program. In response, according to program officials, they have developed risk management training and began conducting training sessions in November 2005. These officials also stated that the Risk Review Board, where risks are reviewed with program executives, has been meeting monthly since September 2005. With respect to regular risk reports to program executives, the plan includes thresholds for escalating risks within the risk governance structure and to DHS governance entities. For example, risks are to be elevated to the Risk Review Board when project costs exceed the baseline cost by more than 5 percent, when schedule slippage exceeds 5 percent of the baseline schedule, when major areas of scope are affected, or when a quality reduction requires approval. However, program officials stated that these thresholds are not currently being applied. They further stated that although the plan allows for escalation of risks to officials outside the program office, doing so is at the discretion of the Program Director; in addition, according to these officials, although high risks are not routinely escalated outside the program, selected high risks have been disclosed to the Assistant Secretary for Policy in weekly program status reports. As of December 5, 2005, the Program Director had proposed submitting monthly reports of high-priority risks and issues through the Assistant Secretary for Policy to the Deputy Secretary. Until US-VISIT fully implements its risk management plan and process, it cannot be assured that all program risks are being identified and managed so as to mitigate any negative impact on the program's ability to deliver promised capabilities on time and within budget.

We reported in May 2004, and again in February 2005, that system testing was not based on well-defined test plans, and thus the quality of the testing being performed was at risk. The purpose of system testing is to identify and correct system defects (i.e., unmet system functional, performance, and interface requirements) and thereby obtain reasonable assurance that the system performs as specified before it is deployed and operationally used. To be effective, testing activities should be planned and implemented in a structured and disciplined fashion. Among other things, this includes developing effective test plans to guide the testing activities and ensuring that test plans are developed and approved before test execution.
According to relevant systems development guidance, an effective test plan (1) specifies the test environment; (2) describes each test to be performed, including test controls, inputs, and expected outputs; (3) defines the test procedures to be followed in conducting the tests; and (4) provides traceability between the test cases and the requirements to be verified by the testing. Because these criteria were not being met, we recommended that DHS do the following: Develop and approve test plans before testing begins that (1) specify the test environment; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing.

About 19 months later, the quality of the system test plans, and thus of system testing, is still problematic. To the program's credit, the test plan for the Increment 2C Proof of Concept (Phase 1), dated June 28, 2005, satisfied part of our recommendation. The plan was approved on June 30, 2005, and, according to program officials, testing began on July 5, 2005. Further, the plan described the scope, complexity, and completeness of the test environment; described the tests to be performed, including a high-level description of controls, inputs, and outputs; and identified the test procedures to be performed. However, the plan did not adequately trace between test cases and the requirements to be verified by testing. For example, 300 of the 438 functional requirements that we analyzed (about 70 percent) did not have specific references to test cases. In addition, we identified traceability inconsistencies, including the following:

One requirement was mapped to over 50 test cases, but none of the 50 cases referenced the requirement.

One requirement was mapped to a group of test cases in the traceability matrix, but several of the test cases to which it was mapped did not reference the requirement, and several test cases that did reference the requirement were not included in the matrix.

One requirement was mapped to all but one of the test cases within a particular group, yet the one excluded test case did refer to the requirement.

Program officials identified time and resources as the reasons that test plans have been incomplete. Specifically, they stated that milestones do not permit existing testing and quality personnel the time required to adequately review testing documents. According to these officials, even when the start of testing activities is delayed because, for example, requirements definition or product development takes longer than anticipated, testing milestones are not extended. Without complete test plans, the program does not have adequate assurance that the system is being fully tested, and it unnecessarily assumes the risk that system defects will not be detected and addressed before the system is deployed. This means that the system may not perform as intended when deployed, and that defects will not be addressed until late in the development cycle, when they are more difficult and time-consuming to fix. As we previously reported, this has already happened: postdeployment system interface problems surfaced for Increment 1, and manual work-arounds had to be implemented after the system was deployed.
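Traceability gaps of the kind described above can be detected mechanically once a traceability matrix exists. The following is a minimal sketch, in Python, of a two-way consistency check between a requirements-to-test-case matrix and the test case records themselves; the identifiers and mappings are hypothetical, not taken from the Increment 2C plan.

    # Sketch of a mechanical check for traceability gaps: every requirement
    # should map to at least one test case, and the mapping should agree in
    # both directions. All identifiers are hypothetical.

    # Traceability matrix: requirement -> test cases claimed to verify it.
    matrix = {"REQ-001": {"TC-01", "TC-02"},
              "REQ-002": {"TC-03"},
              "REQ-003": set()}            # untraced requirement

    # Test case records: test case -> requirements it references.
    test_cases = {"TC-01": {"REQ-001"},
                  "TC-02": set(),          # mapped in matrix, no back-reference
                  "TC-03": {"REQ-002"},
                  "TC-04": {"REQ-001"}}    # references REQ-001, absent from matrix

    for req, cases in matrix.items():
        if not cases:
            print(f"{req}: no test cases traced")
        for tc in cases:
            if req not in test_cases.get(tc, set()):
                print(f"{req}: mapped to {tc}, but {tc} does not reference it")

    for tc, reqs in test_cases.items():
        for req in reqs:
            if tc not in matrix.get(req, set()):
                print(f"{tc}: references {req}, but the matrix omits this mapping")

Run against the hypothetical data, the check flags each of the three patterns of inconsistency described above, which is why such checks are commonly run before a test plan is approved rather than after testing begins.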
We reported in May 2004 that the program had not assessed its workforce and facility needs for Increment 2B. Because of this, we questioned the validity of the workforce and facility assumptions used to develop the program's workforce and facility plans, noting that the program lacked a basis for determining whether its assumptions, and thus its plans, were adequate. Accordingly, we recommended that DHS do the following: Assess the full impact of Increment 2B on land POE workforce levels and facilities, including performing appropriate modeling exercises.

Seven months later, the program office evaluated Increment 2B operational performance. The purpose of the evaluation was to determine the effectiveness of Increment 2B performance at the 50 busiest land POEs. To assist in the evaluation, the program office established a baseline of the average Form I-94 or Form I-94W issuance processing times at 3 of the 50 POEs. The program office then conducted two evaluations of the processing times at these 3 POEs following Increment 2B deployment: the first in December 2004, after Increment 2B was deployed to these sites as a pilot, and the second in February 2005, after Increment 2B was deployed to all 50 POEs. The evaluation results showed that the average processing times decreased at all 3 sites. Table 2 compares the results of the two evaluations and the baseline. According to program officials, these evaluations supported the workforce and facilities planning assumption that no additional staff were required to support deployment of Increment 2B, and that minimal modifications to interior workspace were required to accommodate biometric capture devices and printers and to install electrical circuits. These officials stated that modifications to existing officer training and interior space were the only changes needed.

However, the scope of the evaluation was too limited to satisfy either its stated purpose or our recommendation for assessing the full impact of Increment 2B. First, program officials stated that the evaluation focused on the time to process Form I-94s and not on operational effectiveness, including workforce impacts and traveler waiting time. Second, the 3 sites were selected, according to program officials, on the basis of a number of factors, including whether the sites already had sufficient staff to support the pilot; selecting sites on this basis could affect the results and presupposes that not all POEs have the staff needed to support Increment 2B. Third, evaluation conditions were not always held constant. For example, fewer workstations were used to process travelers in establishing the baseline processing times at 2 of the POEs, Port Huron (9 versus 14) and Douglas (4 versus 6), than were used during the pilot evaluations. Moreover, CBP officials from San Ysidro, which was not an evaluation site and which processes the highest volume of travelers of all land POEs, told us that US-VISIT has actually lengthened processing times. While these officials did not provide specific data to support this statement, it nevertheless raises questions about the potential impact of Increment 2B on the 47 sites that were not evaluated. It is important that the impact of Increment 2B on workforce and facilities be fully assessed.
Since we made our recommendation, Increment 2B deployment and operational facts and circumstances have materially changed, making implementation of our recommendation using predeployment baseline data for the other 47 sites impractical. Nevertheless, other alternatives, such as surveying officials at these sites to better understand the increment's impact on workforce levels and facilities, have yet to be explored. Until they are, the program may not be able to accurately project resource needs or make the modifications required to achieve its goal of minimizing US-VISIT's impact on POE processing times.

We reported in May 2004 that US-VISIT had not established effective configuration management practices. Configuration management establishes and maintains the integrity of system components and items (e.g., hardware, software, and documentation); a key ingredient is a change control board to evaluate and approve proposed configuration changes. We concluded that the program did not have adequate assurance that approved system changes were actually made, and that changes made to the component systems (for non-US-VISIT purposes) did not interfere with US-VISIT functionality. Accordingly, we recommended that DHS do the following: Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes.

After 19 months, US-VISIT has begun implementing configuration management practices. To its credit, the program recently issued a configuration management policy (September 2005) and prepared a draft configuration management plan (August 2005). The policy contains guiding principles, direction, and expectations for planning and performing configuration management, and includes activities, authorities, and responsibilities. The draft plan describes the configuration management governance structure, including organizational entities and their responsibilities, the processes and procedures to be applied, and how controls are to be applied to products. The governance structure includes the Executive Configuration Control Board and the Configuration Management Impact Review Team. According to its charter, the configuration control board is responsible for determining the status of requested configuration changes and resolving any conflicts related to those changes for US-VISIT-managed systems (i.e., not for US-VISIT component systems managed by other DHS organizations). The Impact Review Team, which reports to the board, is responsible for reviewing requests for system changes and submitting a recommendation to the appropriate change review authority (i.e., either the US-VISIT control board or the control board in the DHS organization that manages the component system). According to program officials, for US-VISIT-managed systems, the review authority is the Executive Configuration Control Board. For other systems, such as TECS (which CBP manages), the US-VISIT review team may submit a recommendation to the appropriate control board (in this case, the CBP Control Board). The APMO director stated that the planned configuration management program is intended to complement rather than replace the configuration management programs for the legacy systems: change requests approved by the US-VISIT Executive Configuration Control Board that require changes to a legacy system will be coordinated with the board having responsibility for that system.
This means, however, that changes to component systems (e.g., IDENT, ADIS, and TECS) that are initiated and approved by another DHS organization, and that could affect US-VISIT performance, are not subject to US-VISIT configuration management processes and are not also examined and approved by the US-VISIT control board. This lack of US-VISIT control was the impetus for our recommendation. Although US-VISIT has recently taken steps to begin addressing the recommendation, the program still does not adequately control changes to the component systems upon which US-VISIT performance depends. Until programwide configuration management practices are implemented, the program does not have an effective means of ensuring that approved system changes are actually made and that changes made to the component systems for non-US-VISIT purposes do not compromise US-VISIT functionality and performance.

We reported in May 2004 that the program office's independent verification and validation (IV&V) contractor was not independent of the products and processes that it was verifying and validating. The purpose of IV&V is to provide management with objective insight into the program's processes and associated work products. Its use is a recognized best practice for large and complex system development and acquisition projects like US-VISIT. To be effective, the verification and validation function is to be performed by an entity that is independent of the processes and products being reviewed. Accordingly, we recommended that DHS do the following: Ensure the independence of the IV&V contractor.

In July 2005, the program office issued a new contract for IV&V services. To ensure the contractor's independence, the program office (1) required that IV&V contract bidders be independent of the development and integration contractors; (2) reviewed each bidder's affiliations with the prime contractor; (3) included provisions in the contract that prohibit the contractor from soliciting, proposing, or being awarded work (other than IV&V services) for the program; (4) required all contractor personnel to certify that they do not have any conflicts of interest; and (5) ensured that the contractor's management plan (Oct. 17, 2005) describes how the contractor will ensure technical, managerial, and financial independence. Such steps, if effectively enforced, should adequately ensure that verification and validation activities are performed in an objective manner and thus provide valuable assistance to program managers and decision makers.

We reported in May 2004 that US-VISIT's overall progress in implementing our recommendations had been slow and that considerable work remained to fully address them. As we also noted, given that most of our recommendations focused on fundamental limitations in US-VISIT's ability to manage the program, it was important to implement the recommendations quickly and completely. Accordingly, we recommended that DHS do the following: Develop a plan, including explicit tasks and milestones, for implementing all of our open recommendations and periodically report to the DHS Secretary and Under Secretary on progress in implementing this plan; and report this progress, including reasons for delays, in all future expenditure plans.

About 19 months after our recommendation, the program assigned responsibility to specific individuals for preparing a plan, including specific actions and milestones, to address each recommendation.
In addition, it developed a report that identifies the person responsible for each recommendation and summarizes progress made in implementing each. The program office provided this report for the first time to the DHS Deputy Secretary on October 3, 2005, and plans to forward subsequent reports every 6 months. However, the report's description of progress on 4 recommendations is inconsistent with our assessment, as discussed below.

First, the report states that the program completed a privacy impact assessment that is in full compliance with OMB guidance. As previously discussed, an assessment has been developed, but OMB guidance requires that these assessments for systems under development (such as Increment 2C) address privacy in the system's documentation. Increment 2C system documentation does not address privacy and therefore is not fully compliant with OMB guidance.

Second, the report states that a human capital strategy has been completed. However, as previously discussed, several of the activities in the human capital plan have yet to be implemented. For example, the program has not developed a staffing forecast to inform succession planning.

Third, the report states that the impact of Increment 2B on land POE workforce levels and facilities has been fully assessed. However, as we previously stated, the scope of the evaluations was not sufficient to satisfy our recommendation. For example, program officials stated that the evaluation focused on the time to process Form I-94s and not on operational effectiveness, including workforce impacts and traveler waiting time. Moreover, officials at the largest land POE told us that the effect of Increment 2B was the opposite of that reported in the pilot results.

Fourth, the report states that the program has partially completed implementing configuration management practices. However, as previously discussed, the program office has yet to implement practices or establish a configuration control board with authority over all changes affecting US-VISIT functionality and performance, including those made to component systems for non-US-VISIT purposes, which was the intent of our recommendation.

In addition, the report does not specifically describe progress against 11 of our other recommendations, so we could not determine whether the program's assessment is consistent with ours (described in this report). For example, we recommended that the program reassess plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions. The report states that the program office has completed exit testing and has forwarded the exit evaluation report to the Deputy Secretary for a decision, but it does not state whether the program office expanded the scope or time frames of the pilot. Fully understanding and disclosing progress against our recommendations are essential to building the capability needed to effectively manage the program and to ensuring that key decision makers have the information needed to make well-informed choices among competing investment options.

We reported in February 2005 that US-VISIT had not followed effective practices in developing cost estimates for its system increments, and thus the reliability of those estimates was questionable. Such cost-estimating practices are embedded in the 13 criteria in SEI's checklist for determining the reliability of cost estimates.
Of these 13 criteria, we reported in February 2005 that the program's cost estimate met 2, partially met 6, and did not meet 5. Accordingly, we recommended that DHS do the following: Follow effective practices for estimating the costs of future increments.

The latest US-VISIT-related cost estimate is for Increment 1B. This estimate appears in the June 2005 cost-benefit analysis for Increment 1B and establishes the costs associated with three exit solutions for air and sea POEs. As was the case for the estimate described in our February 2005 report, this latest estimate also did not meet all 13 criteria, meeting 3 and partially meeting another 5. For example, the estimates did not include a detailed work breakdown structure and omitted important cost elements, such as system testing. A work breakdown structure serves to organize and define the work to be performed so that associated costs can be identified and estimated; it thus provides a reliable basis for ensuring that the estimates include all relevant costs. In addition, the uncertainties associated with the Increment 1B cost estimate were not identified. An uncertainty analysis provides the basis for adjusting estimates to reflect unknown facts and circumstances that could affect costs, and it identifies the risk associated with the cost estimate. Table 3 summarizes our analysis of the extent to which US-VISIT's Increment 1B cost estimates satisfy SEI's 13 criteria. Program officials stated that they recognize the importance of developing reliable cost estimates and have initiated actions to estimate the costs of future increments more reliably. For example, as part of its process improvement program, the program has chartered a cost-analysis process action team, which is to develop, document, and implement a cost-analysis policy, process, and plan for the program. Program officials also stated that they have hired additional contracting staff with cost-estimating experience. Strengthening the program's cost-estimating capability is extremely important: the absence of reliable cost estimates, among other things, prevents the development of reliable economic justification for program decisions and impedes effective performance measurement.

In February 2005, we reported that US-VISIT had not adequately planned for evaluating the Increment 1B exit alternatives because the exit pilot evaluation's scope and timeline were compressed. Accordingly, we recommended that DHS do the following: Reassess plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions and better ensures that the exit solution selected is in the best interest of the program.

Over the last 10 months, the program office has taken actions to expand the scope and time frames of the pilot. For example, it extended the pilot from 5 to 11 POEs (9 airports and 2 seaports). It also extended the time frame for data collection and evaluation to April 2005, about 7 months beyond the date by which all exit pilot evaluation tasks were originally to be completed. Further, according to program officials, they achieved the target sample sizes necessary for a 95 percent confidence level. Notwithstanding the expanded scope of the pilot, questions remain about whether the exit alternatives have been evaluated sufficiently to permit selection of the best exit solution for national deployment.
For example, each of the three exit alternatives was evaluated against three criteria, including compliance with the US-VISIT exit process (i.e., foreign travelers providing information as they exit the United States). However, across the three alternatives, average compliance with this process was only 24 percent, which raises questions about their effectiveness. The evaluation report cites several reasons for the low compliance rate, including that compliance during the pilot was voluntary. The report further concludes that national deployment of the exit solution will not achieve the desired compliance rate unless the exit process incorporates an enforcement mechanism, such as not allowing persons to reenter the United States if they do not comply with the exit process. Although an enforcement mechanism might indeed improve compliance, program officials stated that no formal evaluation has been conducted of enforcement mechanisms or their effect on compliance. The program director stated that he agrees that additional evaluation is needed to assess the impact of implementing potential enforcement mechanisms, and that he plans to conduct such an evaluation. Until the program office adequately evaluates the exit alternatives and knows whether the alternative to be selected will be effective, it will not be in a position to select the exit solution that is in the best interest of the program. This is very important because without an effective exit capability, the benefits and mission value of US-VISIT are greatly diminished.

We reported in February 2005 that the overall capacity of the system was not being effectively managed. At that time, US-VISIT, which comprises several legacy systems, was relying on the capacity management activities of those individual systems; it was not focused on the capacity requirements and performance of the collective systems that make up US-VISIT. This approach increases the risk that the system may not be properly designed and configured for efficient performance and that it will have insufficient processing and storage capacity for current, future, and unpredictable workload requirements. Accordingly, we recommended that DHS do the following: Develop and implement processes for managing the capacity of the US-VISIT system.

According to program officials, they have initiated efforts to develop a capacity management process, including a high-level description of the necessary steps, such as identifying the tools needed to implement the process. However, a plan, including specific tasks and milestones for developing and implementing capacity management processes, has not yet been developed. Until the program office develops a programwide capacity management program, it runs an increased risk that US-VISIT will not be able to adequately support program mission needs.

We reported in February 2005 that the program office recognized that US-VISIT and the Automated Commercial Environment (ACE) have related missions and operational environments, and that the two programs could potentially develop, deploy, and use common information technology infrastructures and services. We also reported that managing this relationship had not been a priority. Accordingly, we recommended that DHS do the following: Make understanding the relationships and dependencies between the US-VISIT and ACE programs a priority matter, and report periodically to the Under Secretary on progress in doing so.
US-VISIT and ACE managers met in February 2004 to identify potential areas for collaboration between the two programs and to clarify how the programs could best support the DHS mission and provide officers with the information and tools they need. According to program officials, they have established a US-VISIT/ACE integrated project team to, among other things, ensure that the two programs are programmatically and technically aligned. The team has discussed potential areas of focus and agreed on three: RF technology, program control, and data governance. However, the team does not have an approved charter, and it has not developed explicit plans or milestone dates for identifying the dependencies and relationships between the two programs. Program officials stated that the team has met three times and plans to meet quarterly going forward. It is important that the relationships and dependencies between these two programs be managed effectively; the longer it takes for the programs to understand and exploit their relationships, the more rework will be needed later.

Over the last 3 years, we have made recommendations aimed at correcting fundamental limitations in US-VISIT's program management ability and thereby better ensuring the delivery of mission capability and value on time and commensurate with costs. While progress on the implementation of the recommendations is mixed, progress in critical areas has been slow. As with any program, introducing and institutionalizing the program management and accountability discipline at which our recommendations are aimed require investing time and resources while continuing to meet other program demands. In making such investment choices, it is important to remember that institutionalizing such discipline in the near term will produce long-term payback in the program's ability to meet those other demands. Accordingly, the longer that US-VISIT takes to implement our recommendations, the greater the risk that the program will not meet its stated goals and commitments.

Our open recommendations are all aimed at strengthening US-VISIT program management and improving DHS's ability to make informed US-VISIT investment decisions. With one exception, these recommendations remain relevant and applicable. Since we made our recommendation on Increment 2B, facts and circumstances surrounding its deployment and operational status have materially changed, making the collection of Increment 2B predeployment baseline data impractical. Nevertheless, the need remains to better understand the impact of US-VISIT entry capabilities on all land POEs. Until this understanding exists, the department will be challenged in its ability to accurately estimate and provide the facilities and staff resources needed. To recognize both the need to fully assess the impact of US-VISIT entry capabilities on staffing levels and facilities at land POEs and the current operational status of Increment 2B, we are closing our existing recommendation related to assessing the impact of Increment 2B. We recommend that the DHS Secretary direct the US-VISIT Program Director to explore alternative means of obtaining an understanding of the full impact of US-VISIT at all land POEs, including its impact on workforce levels and facilities; these alternatives should include surveying the sites that were not part of the previous assessment.
In its written comments on a draft of this report, signed by the Director, Departmental GAO/OIG Liaison Office, and reprinted in appendix II, DHS stated that it agreed with many areas of the report and that our recommendations had made US-VISIT a stronger program. Further, the department stated that while it disagreed with certain areas of the report, it nevertheless concurred with the need to implement our open recommendations with all due speed and diligence. DHS commented specifically on 11 of the 18 recommendations discussed in the report. The recommendations, the department's comments, and our responses follow.

1. Recommendation: Develop and begin implementing a system security plan, and perform a privacy impact assessment and use the results of the analysis in near-term and subsequent system acquisition decision making.

DHS stated that this recommendation has been fully implemented. In support, it said that it has completed a US-VISIT security plan that is consistent with National Institute of Standards and Technology (NIST) guidance, and that it provided the plan to us in September 2004. It also stated that the security risk assessment aspect of this recommendation was established in February 2005, 20 months after we made the recommendation, and thus the age of the recommendation should be shown as 10 months rather than the 30 months cited in the report. The department also commented that there is no US-VISIT system, but rather a US-VISIT program with capabilities delivered by existing interconnected systems. According to the department, these component systems have been certified and accredited, consistent with NIST guidance, and as part of their certification and accreditation, security plans and risk assessments, as well as risk mitigation strategies, have been developed for each system. The department stated that it provided us with these system-level risk assessments, as well as system-specific action plans and milestones for implementing the mitigation strategies. In addition, the department noted that it completed a programwide risk assessment in December 2005 that specifically addresses information security issues that might not be captured in the system-specific documentation used to certify and accredit each system. In light of its system-specific certification and accreditation efforts, existing system-level risk assessments, and the program-level risk management process (see response 4 for discussion of the risk management process), DHS commented that it is inaccurate to state that US-VISIT officials are not in a position to know program risks, and that the recommendation should be closed.

While we agree that we received a copy of the US-VISIT security plan, dated September 2004, we do not agree that the plan satisfied all relevant federal guidance or that DHS has fully implemented our recommendation. In particular, DHS has not provided us with evidence that a programwide risk assessment has been done and that a security plan reflective of such an assessment exists. According to relevant guidance, a security plan should describe, among other things, the methodology that is to be used to identify system threats and vulnerabilities and to assess risks, and it should include the date the risk assessment was completed, because the assessment is a necessary driver of the security controls described in the plan.
As we reported in February 2005 and state in this report, the US-VISIT security plan did not include this information; further, although DHS stated in its comments that it completed this risk assessment in December 2005, this statement is contradicted by a statement elsewhere in its comments that it is still in the process of doing the assessment. In addition to this contradiction, DHS’s comments did not include any evidence to demonstrate that it has developed a complete risk assessment, such as a copy of the assessment. With regard to the age of the recommendation, we do not agree with DHS’s position that we established a new finding regarding the lack of a programwide risk assessment in our February 2005 report. Rather, as part of our analysis of actions to implement our prior recommendation to develop a security plan, which is to include information about the related security risk assessment, we observed that the plan did not indicate a date for completing a risk assessment in accordance with federal guidelines. Therefore, our position that about 30 months had passed from the time of our initial recommendation (June 2003) is accurate. With regard to the individual system-level risk assessments, we agree that we have received them. However, we do not agree that we have received the action plans and milestones cited in the comments. Regardless, we do not believe that system-level assessments are a sufficient substitute for a programwide assessment. Accordingly, our recommendation focused on the need for an integrated US-VISIT system risk assessment as part of security planning. While the system-level plans and risk assessments are relevant and useful, they neither individually nor collectively address the threats and vulnerabilities introduced as a result of these systems’ integration. By stating in its comments its commitment to having a programwide risk assessment that identifies and proposes mitigations for security risks that arise as a result of the interface and integration of the legacy systems, DHS is agreeing with our position. Moreover, without evidence that the program has completely assessed its risks, we continue to find no basis for concluding that program officials know the full range and degree of US-VISIT security risks. Our position in this regard has been reinforced by a recent DHS Inspector General report that identified a number of US-VISIT security risks. To further support its position that this recommendation has been fully implemented, DHS also commented that it has completed numerous privacy impact assessments and continues to update them to reflect system changes. In particular, it said that it updated the privacy impact assessment in December 2005 to reflect all increments and that it considers the assessment to be part of US-VISIT system documentation. It further commented that we appear to be unaware of privacy staff activities to review system documents and perform privacy risk assessments throughout the system life cycle. Nevertheless, the department acknowledged that its privacy work was not always noted within US-VISIT system documentation. Accordingly, DHS stated that it plans to appropriately reference all privacy requirements and privacy risk assessments in the program’s system documentation in the future. We agree that US-VISIT has developed and updated its privacy impact assessment and would note that our report states this fact.
We do not agree, however, with the comment that we are not aware that the privacy staff review system documents and perform privacy risk assessments. In fact, it is because we were aware of these facts that we were careful to ensure that they were reflected in our report. The point that we are making is that privacy is not addressed in all relevant systems documentation, which DHS acknowledged in its comments. With regard to this point of agreement, we support the department’s stated plans to reference all privacy requirements and any privacy risk assessments in all relevant system documentation in the future. 2. Recommendation: Develop and implement a plan for satisfying key acquisition management controls, including acquisition planning, solicitation, requirements management, program management, contract tracking and oversight, evaluation, and transition to support, and implement the controls in accordance with SEI guidance. DHS commented that the report should reflect that US-VISIT had initially adopted Carnegie Mellon University’s Software Engineering Institute (SEI) Software Acquisition Capability Maturity Model® to guide its software-related process improvement efforts and that, in December 2004, it transitioned to SEI’s Capability Maturity Model–Integration (CMMI®). As a result, it said that the program’s process improvement strategy and plans, process development, and process appraisals are now aligned to the most applicable CMMI process areas. We agree that US-VISIT has transitioned to CMMI. We state in our report that US-VISIT has done so and that the key process areas it is addressing in its process improvement strategy and plan are consistent with those cited in our recommendation. We do not believe that this transition materially affects our recommendation, however, because even though the names of the key processes in these two models may in some cases differ, the processes and respective practices are fundamentally consistent. 3. Recommendation: Clarify the operational context in which US-VISIT is to operate. Consistent with our report, DHS commented that the operational context in which US-VISIT operates is in progress, meaning that it has yet to be fully established. For example, it said that the mission of DHS, and therefore the scope of US-VISIT activities to meet the mission, is continually expanding. Further, it acknowledged that more certainty in the operational context is desirable. In mitigation of the risks associated with not having a more stable operational context, DHS made several statements. For example, it said that the principal role of US-VISIT is to integrate information and immigration and border management systems across DHS and the State Department, and to facilitate agencies working toward a common environment that will eliminate redundancies. It also said that elements of its draft immigration and border management strategic plan are being used in current US-VISIT operations. In addition, the department said that mechanisms to mitigate the risks that we cited have been developed and are being implemented. We support DHS’s acknowledgment of the importance of having a well-defined operational context within which to define and implement US-VISIT and related border security programs. However, we do not believe that DHS’s comments provided any evidence showing that sufficient steps and activities to mitigate the associated risks have been taken or are planned. 4. 
Recommendation: Determine whether proposed US-VISIT increments will produce mission value commensurate with cost and risks and disclose to the Congress planned actions. DHS commented that its cost-benefit analysis (CBA) for Increment 1B conforms to relevant federal guidance, and noted that our expectations as to the scope and level of detail of analysis that should be included in the CBA document are inconsistent with its understanding of OMB Circular A-94 and DHS’s CBA workbook, which were used to guide the development of the CBA analysis. As an example, the department took exception to our statement that year-by-year benefit estimates were not reported by noting that the net present value was based on an estimate of annual benefits and costs, and that net present value could not be estimated without a year-by-year benefit analysis. The department further commented that a comprehensive uncertainty analysis was conducted because it completed a risk analysis, which is more comprehensive, rigorous, and appropriate than conducting a sensitivity analysis. In this regard, it added that the results of the risk analysis provided an indication of Increment 1B’s worthiness in light of existing uncertainty, rather than information on the effect of any one CBA variable. The department further noted that it had provided some of these supporting analyses to us. DHS also stated that any investment that has a 5-year life cycle and is considered interim in nature will face considerable challenge in providing economic benefits commensurate with cost. We do not agree that the CBA fully conforms to relevant federal guidance. As our report states, for example, the analysis does not explicitly state the numerical value of the discount rate used for calculating each alternative’s net present value, and hence does not conform to OMB guidance. In addition, the cost estimates used in the analysis were neither complete nor reliably derived. In deriving the estimate, for example, the department did not clearly define the project’s life cycle to ensure that key factors were not overlooked and that the full cost of the program was included. (See response 10 below for more information on this point.) Last, while we agree that a year-by-year benefit analysis is a necessary component of a net present value determination, OMB nevertheless requires that the year-by-year benefit estimates be reported in the analysis to promote independent review of the estimates. Also, we do not agree that DHS performed a complete uncertainty analysis. According to OMB and DHS guidance, a complete uncertainty analysis should include both a risk analysis and a sensitivity analysis. However, the latter was not done. Thus, our point is not, as DHS comments suggest, that US-VISIT should have performed a sensitivity analysis instead of a risk analysis, but rather, that both types of analyses are necessary to completely examine investment uncertainty. 5. Recommendation: Develop and implement a risk management plan and ensure that all high risks and their status are reported regularly to the executive body. DHS commented that US-VISIT began the development and implementation of its risk management plan in 2004 immediately after we made our recommendation. It further commented that, as part of a CMMI maturity internal appraisal that it completed in July 2005, it found that the risk management process had not been consistently applied across the program.
To address this, the department cited actions that it has taken to fully implement risk management, such as approving the risk management plan in September 2005; defining a risk governance structure; establishing and maintaining a risk database; and developing risk management training and providing this training to program personnel and contractors beginning in November 2005. We support the recent actions that the program cited as having been taken to strengthen risk management. However, the actions cited do not demonstrate that the risk management process is being consistently applied. Until US-VISIT fully implements its risk management plan and process, it cannot be assured that all program risks are being identified and managed in order to effectively mitigate any negative impact on the program’s ability to deliver promised capabilities on time and within budget. 6. Recommendation: Develop and approve test plans before testing begins that (1) specify the test environment; (2) describe each test to be performed, including test controls, inputs, and expected outputs; (3) define the test procedures to be followed in conducting the tests; and (4) provide traceability between test cases and the requirements to be verified by the testing. DHS stated that our report does not accurately reflect the status of the Increment 2C Phase 1 testing. In particular, it said that the issues associated with the traceability of requirements to test cases were minor and that the extent of the discrepancies is far less than what our report presents. It further stated that the discrepancies in our report are based on old traceability documentation and do not reflect revised documentation provided to us on November 9, 2005. We agree that DHS provided us with revised traceability matrixes after we had shared with them our analysis of the test plans and traceability matrixes, dated June 28, 2005, and June 27, 2005, respectively. However, the revised documentation referenced in DHS’s comments was provided in November 2005, about 4 months after testing began. This means that the test plans and traceability matrixes available at the time of testing—which are what we reviewed because they governed the scope and nature of actual testing performed—did not adequately trace between test cases and the requirements to be verified. Specifically, 300 of the 438 Increment 2C requirements, or about 70 percent, did not have specific references to test cases. 7. Recommendation: Implement effective configuration management practices, including establishing a US-VISIT change control board to manage and oversee system changes. DHS commented that a US-VISIT representative attends all configuration control board meetings for all applicable legacy component systems, and that any proposed change request from a legacy component control board that could affect US-VISIT functionality is brought to the attention of the US-VISIT Executive Configuration Control Board for consideration. We do not question these statements. However, we do not believe that they demonstrate that US-VISIT has adequate control over system changes that could affect the program. That is, they do not ensure that changes to the component systems that are initiated and approved by another DHS organization and that could affect US-VISIT performance are subject to US-VISIT configuration management and approval processes. 
US-VISIT could establish explicit and enforceable control over changes to the legacy systems through such mechanisms as defined and enforced memorandums of understanding among the affected DHS organizations. It was the lack of such control that prompted our recommendation. 8. Recommendation: Assess the full impact of Increment 2B on land POE workforce levels and facilities, including performing appropriate modeling exercises. The department stated that, given the imperative to meet the legislatively mandated time frames, the scope of Increment 2B was limited to only one part of POE operations—incorporating the collection of a biometric into the previously manual Form I-94 issuance process. It also stated that wait times are affected by various factors, including traffic volume, staffing levels, and availability of officers. Therefore, DHS focused the Increment 2B evaluation on just the change to this process. The department further commented that given the events since the evaluation—namely, Increment 2B full operations—it is not practical to collect and model baseline data for the 47 sites that were not part of the initial evaluation. Regarding the 3 pilot sites included in the assessment, the department stated that the sites were selected based on criteria developed from input from US-VISIT, as well as CBP operational constraints. The department further commented that the 3 sites provided a reasonable mix of travelers and did not have other constraints that directly impacted the collection of performance data specific to Form I-94 issuance. DHS also stated that the I-94 processing times vary by POE, and therefore they are not easily generalized from one port to another. Further, the department commented that the number of workstations and officers available to operate those workstations to process applicants for a Form I-94 does not affect the time it takes to issue a Form I-94. We agree that the scope of the Increment 2B evaluation was limited to the I-94 issuance process, and that it did not address the increment’s impact on the POEs’ ability to meet other performance parameters. Our point is that the limited nature of the evaluation does not satisfy either the intent of our recommendation or DHS’s own stated purpose for the evaluation, which was to determine the effectiveness of Increment 2B performance at the 50 busiest land POEs. We also agree that the I-94 processing times vary by POE and cannot be easily generalized. It is for this reason, among others, that we questioned whether the 3 sites selected for the assessment were sufficiently representative to satisfy both our recommendation and the evaluation’s stated purpose. In addition, while we also agree that collecting pre-Increment 2B baseline data is not practical at this time, the fact remains that the operational impact of Increment 2B on workforce levels and facilities has not been adequately assessed, as evidenced by officials at 1 large POE telling us that processing times have increased and DHS’s recognition that each POE is somewhat different. In light of these new facts and circumstances, we are closing our existing recommendation and making a new recommendation to recognize the need for DHS to explore alternative means to assess the impact of US-VISIT entry capabilities at land POEs. This new recommendation will be shown as an open recommendation, and the original recommendation will be closed. 9.
Recommendation: Develop a plan, including explicit tasks and milestones, for implementing all of our open recommendations and periodically report to the DHS Secretary and Under Secretary on progress in implementing this plan; and report this progress, including reasons for delays, in all future expenditure plans. DHS stated that it is untrue that 19 months had elapsed from the time we made this recommendation to the time that it assigned responsibilities to program officials for addressing each of our recommendations. In support, it commented that it issued its first plan to address our recommendations on August 18, 2003, and subsequent reports have been issued periodically that update progress in doing so. We agree that DHS has assigned responsibilities to specific individuals for addressing each recommendation. However, we have yet to be provided any evidence to support its statement that it issued the first report addressing our recommendations on August 18, 2003. Similarly, we have not received evidence showing that it has prepared a plan, including specific actions and milestones, for implementing all of our open recommendations, which is a focus of this recommendation. We would also observe that we made this recommendation in May 2004, and at that time the department stated that it agreed with the recommendation but did not indicate that it had taken any steps to address it, such as commenting that a report was issued on August 18, 2003. 10. Recommendation: Follow effective practices for estimating the costs of future increments. DHS either tacitly or explicitly agreed with our findings relative to its satisfaction of 8 of the 13 cost-estimating criteria presented in table 4 (now table 3) of our draft report. For example, it agreed that it did not clearly define the life cycle to which the cost estimate applies. It also agreed that it did not include a work breakdown structure, noting that it used the available project implementation schedule as a proxy for the activities related to the deployment of the exit alternatives. Regarding our five findings concerning the cost-estimating criteria with which DHS disagreed, the department’s primary area of disagreement was with the intended purpose of the Increment 1B CBA that used the cost estimate, which it said in its comments was to inform decision makers about the relative worthiness of each of the three exit alternatives considered for deployment. Hence, DHS stated that the purpose of the CBA was to analyze only the costs associated with deploying an operational solution, not to analyze the costs and benefits of both developing and deploying alternative solutions. DHS further stated that the CBA thus includes only those costs to be incurred in deploying a selected alternative, and it does not include costs already incurred in developing system alternatives (i.e., sunk costs). It further commented that DHS guidance states that sunk costs are not relevant to the current investment analysis because “only current decisions can affect the future consequences of investment alternatives.” DHS also disagreed that the cost estimate in the CBA should have included nonrecurring development costs, and commented that it did appropriately size the task described in the cost estimates for each alternative exit solution, noting that sizing metrics related to software development were not relevant to deployment of the alternatives because development activities had already occurred and therefore are sunk costs.
The department added that those sizing metrics that are relevant to the cost estimate are discussed in the CBA, as are the cost estimating parameters (i.e., those associated with deployment and not those associated with development and testing). In addition, DHS disagreed that its cost estimate excluded important cost categories, such as system testing, and stated that the estimate addresses labor, facilities, operations and maintenance, information technology, travel, and training costs. Once again, DHS emphasized that since the focus of the CBA was on operational deployment and not system design and development, system testing costs were not included because they were not considered relevant. DHS also reiterated its earlier point that the uncertainty analysis that it conducted was comprehensive. We agree that actual sunk costs should not be included in a CBA cost estimate. However, we disagree that the cost categories that DHS cited as not relevant are only costs that are associated with predeployment activities. Testing, for example, is an activity that is normally performed before, during, and following deployment, and thus the associated costs would be relevant to the stated purpose of the Increment 1B CBA. However, a testing cost category was missing from the CBA cost estimate, as was a cost category for software maintenance. Regarding DHS’s statement that it conducted a complete uncertainty analysis, we reiterate our previous point that a complete uncertainty analysis should include both a risk analysis and a sensitivity analysis, and the CBA did not include the latter. 11. Recommendation: Reassess plans for deploying an exit capability to ensure that the scope of the exit pilot provides for adequate evaluation of alternative solutions and better ensures that the exit solution selected is in the best interest of the program. Concerning the questions we raised about the adequacy of the exit pilots in light of the 24 percent compliance rate, DHS commented that we failed to consider the compliance rate of the previous exit pilot program, the National Security Entry Exit Registration System (NSEERS), which, according to DHS, had a 75 percent compliance rate. DHS added that NSEERS achieved this compliance rate with a very limited number of exit locations, and therefore, any of the three US-VISIT exit alternatives would have at least a 75 percent compliance rate once national deployment was completed. Further, the department commented that Immigration and Customs Enforcement (ICE) had recently conducted enforcement operations at the Denver International Airport, and that the compliance rate during these operations increased from 30 percent to over 90 percent. It then concluded that the combined results of the exit pilot evaluation, the NSEERS pilot, and the ICE enforcement activities at the Denver International Airport lead it to believe that the US-VISIT exit alternatives have been adequately evaluated. We do not agree with this conclusion because it is based on unsupported assumptions. Specifically, DHS did not provide any evidence to support its claim that US-VISIT would achieve a compliance rate comparable to the NSEERS program’s. Moreover, even if DHS could achieve a 75 percent compliance rate for US-VISIT exit, that still means that 25 percent of eligible persons would not be complying with the US-VISIT exit process.
Further, DHS did not provide any information about the recent enforcement actions conducted by ICE, nor did it provide any evidence that this is a practical and viable option for the US-VISIT exit solution. While we agree that enforcement actions may indeed increase the exit compliance rate, DHS has not yet assessed the impact of such a solution on the US-VISIT exit process. Further, the US-VISIT program director acknowledged the need to evaluate the impact of implementing potential enforcement actions on US-VISIT exit and planned to do so. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Appropriations Committees, as well as to the Chairmen and Ranking Minority Members of other Senate and House committees that have authorization and oversight responsibilities for homeland security. We are also sending copies to the Secretary of Homeland Security, Secretary of State, and the Director of OMB. Copies of this report will also be available at no charge on our Web site at www.gao.gov. Should you or your offices have any questions on matters discussed in this report, please contact me at (202) 512-3439 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Our objective was to determine the progress of the Department of Homeland Security (DHS) in implementing 18 of our recommendations pertaining to the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program. To accomplish this objective, we reviewed and analyzed US-VISIT’s most recent status reports on the implementation of our open recommendations and related key documents, augmented as appropriate by interviews with program officials. More specifically, we analyzed relevant systems acquisition documentation, including the program’s process improvement plan, risk management plan, and configuration management plan. We also analyzed the US-VISIT security plan, privacy impact assessment, cost-benefit analysis, cost estimates, test plans, human capital plans, and related evaluations and assessments. In performing our analyses, we compared available documentation and program officials’ statements with relevant federal guidance and associated best practices. A more detailed description of our scope and methodology relative to the cost-benefit analysis, cost estimates, and test plans follows: Our analysis of the cost-benefit analysis focused on Increment 1B because this was the latest cost-benefit analysis and cost estimate prepared. In doing this analysis, we compared the US-VISIT cost-benefit analysis to eight criteria in Office of Management and Budget (OMB) guidance. Our analysis of the cost estimate also focused on Increment 1B for the same reason previously cited. In doing this analysis, we compared the estimate to 13 criteria from the Software Engineering Institute that we have previously reported to be the minimum set of actions needed to develop a reliable cost estimate. We then determined whether the criteria were satisfied, partially satisfied, or not satisfied using the definitions given below. Our analysis of the test plans focused on Increment 2C because it is the most recently tested increment. This analysis included determining the extent to which the test plans for this increment met 4 key criteria that we have previously reported as essential to effective test plans. 
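One of these criteria, traceability between test cases and requirements, lends itself to a mechanical check. The Python sketch below, using hypothetical requirement and test-case identifiers, illustrates the kind of two-way tracing involved; it is an illustrative reconstruction under those assumptions, not the tool used in this review.

```python
# Illustrative two-way traceability check between requirements and test
# cases. Identifiers and matrix contents are hypothetical; an actual
# review would load the program's requirements traceability matrix.

# Requirement -> test cases that claim to verify it.
matrix = {
    "FR-001": ["TC-101", "TC-102"],
    "FR-002": [],            # no test case references this requirement
    "FR-003": ["TC-103"],
}

# Test cases that actually appear in the approved test plan.
test_plan_cases = {"TC-101", "TC-103", "TC-999"}

# Forward trace: requirements with no test-case reference cannot be
# verified as planned -- the condition found for about 70 percent
# (300 of 438) of the Increment 2C functional requirements.
untraced_reqs = sorted(req for req, cases in matrix.items() if not cases)

# Backward trace: test cases tracing to no requirement suggest either
# missing requirements or superfluous tests.
referenced = {case for cases in matrix.values() for case in cases}
orphan_cases = sorted(test_plan_cases - referenced)

print("Requirements without test cases:", untraced_reqs)
print("Test cases tracing to no requirement:", orphan_cases)
print(f"Untraced share: {len(untraced_reqs) / len(matrix):.0%}")
```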
In doing this analysis, we examined Increment 2C systems documentation, including business and functional requirements and traceability matrixes. We also independently traced 58 business requirements and 438 functional requirements to the test cases in the test plan. Further, we independently traced all test cases to the requirements to determine consistency. In performing our work, we used the following categories and definitions in deciding the extent to which each recommendation had been implemented. Specifically, we considered a recommendation completely implemented when documentation demonstrated that it had been fully addressed, partially implemented when documentation indicated that actions were under way to implement it, and in progress when documentation indicated that action had been initiated to implement it. These categories and definitions are consistent with those used in our prior US-VISIT reports. In determining the amount of time it has taken to implement actions on our recommendations, we calculated the time from the date the report was issued through December 2005. We conducted our audit work at the US-VISIT program office in Rosslyn, Virginia, from August 2005 through December 2005, in accordance with generally accepted government auditing standards. US-VISIT involves complex processes governing the stages of a traveler’s visit to the United States (pre-entry, entry, status, and exit) and analysis of hundreds of millions of foreign national travelers at over 300 air, sea, and land ports of entry (POE). A simplified depiction of these processes is shown in figure 4. Pre-entry processing begins with initial petitions for visas, grants of visa status, or the issuance of travel documentation. When a foreign national applies for a visa at a U.S. consulate, biographic and biometric data are collected and shared with border management agencies. The biometric data are transmitted from the Department of State to DHS, where the prints are run against the Automated Biometric Identification System (IDENT) database to verify identity and to run a check against the biometric watch list. The results of the biometric check are transmitted back to State. A “hit” response prevents State’s system from printing a visa for the applicant until the information is reviewed and cleared by a consular officer. Pre-entry also includes transmission by commercial air and sea carriers of crew and passenger manifests to appropriate immigration officers before these carriers arrive in the United States. These manifests are transmitted through the Advanced Passenger Information System (APIS). The APIS lists are run against the biographic lookout system to identify those arrivals for whom biometric data are available. In addition, POEs review the APIS list in order to identify foreign nationals who need to be scrutinized more closely. When a foreign national arrives at a POE’s primary (air and sea) or secondary (land) inspection booth, the inspector, using a document reader, scans the machine-readable travel documents. APIS returns any existing records on the foreign national to the US-VISIT workstation screen, including manifest data matches and biographic lookout hits. When a match is found in the manifest data, the foreign national’s name is highlighted and outlined on the manifest data portion of the screen. Biographic information, such as name and date of birth, is displayed on the bottom half of the computer screen, along with a photograph obtained from State’s Consular Consolidated Database.
The inspector at the booth scans the foreign national’s fingerprints (left and right index fingers) and takes a digital photograph. This information is forwarded to the IDENT database, where it is checked against stored fingerprints in the IDENT lookout database. If the foreign national’s fingerprints are already in IDENT, the system performs a match (a comparison of the fingerprint taken during the primary inspection to the one on file) to confirm that the person submitting the fingerprints is the person on file. If no prints are currently in IDENT, the foreign national is enrolled in US-VISIT (i.e., biographic and biometric data are entered into IDENT). During this process, the inspector also questions the foreign national about the purpose of his or her travel and length of stay. The inspector adds the class of admission and duration of stay information into the Treasury Enforcement Communications Systems, and stamps the “admit until” date on the Form I-94. If the foreign national is ultimately determined to be inadmissible, the person is detained, lookouts are posted in the databases, and appropriate actions are taken. The status management process manages the foreign national’s temporary presence in the United States, including the adjudication of benefits applications and investigations into possible violations of immigration regulations. As part of this process, commercial air and sea carriers transmit departure manifests electronically for each departing passenger. These manifests are transmitted through APIS and shared with the Arrival Departure Information System (ADIS). ADIS matches entry and exit manifest data (i.e., each record showing a foreign national entering the United States is matched with a record showing the foreign national exiting the United States). ADIS also receives status information from the Computer Linked Application Information Management System and the Student Exchange Visitor Information System on foreign nationals. The exit process includes the carriers’ submission of electronic manifest data to APIS. This biographic information is transmitted to ADIS, where it is matched against entry information. At the 11 POEs where the exit solution is being implemented, the departure is processed by one of three exit methods. Within each port, one or more of the exit methods may be used. The three methods are as follows: Kiosk: At the kiosk, the traveler, guided by a workstation attendant if needed, scans the machine-readable travel documents, provides electronic fingerprints, and has a digital photograph taken. A receipt is printed to provide documentation of compliance with the exit process and to assist in compliance on the traveler’s next attempted entry to the country. After the receipt prints, the traveler proceeds to his or her departure gate. At the conclusion of the transaction, the collected information is transmitted to IDENT. Mobile device: At the departure gate, and just before the traveler boards the departure craft, either a workstation attendant or law enforcement officer scans the machine-readable travel documents, scans the traveler’s fingerprints (right and left index fingers), and takes a digital photograph. A receipt is printed to provide documentation of compliance with the exit process and to assist in compliance on the traveler’s next attempted entry to the country. The device wirelessly transmits the captured data in real time to IDENT via the Transportation Security Administration’s Data Operations Center. 
If the device is being operated by a workstation attendant, he or she provides a printed receipt to the traveler, and the traveler then boards the departure craft. If the mobile device is being operated by a law enforcement officer, the captured biographic and biometric information is checked in near real time against watch lists. Any potential match is returned to the device and displayed visually for the officer. If no match is found, the traveler is allowed to board the departure craft. Validator: Using a kiosk, the traveler, guided by a workstation attendant if needed, scans the machine-readable travel documents, provides electronic fingerprints, and has a digital photograph taken. As with the kiosk, a receipt is printed to provide documentation of compliance with the exit process and to assist in compliance on the traveler’s next attempted entry to the country. However, this receipt has biometrics (i.e., the traveler’s fingerprints and photograph) embedded on the receipt. At the conclusion of the transaction, the collected information is transmitted to IDENT. The traveler presents his or her receipt to the attendant or law enforcement officer at the gate or departure area, who scans the receipt using a mobile device. The traveler’s identity is verified against the biometric data embedded on the receipt. Once the traveler’s identity is verified, he or she is allowed to board the departure craft. The captured data are not transmitted in real time back to IDENT. Instead, the data are periodically uploaded through the kiosk to IDENT. An analysis capability is to provide for the continuous screening against watch lists of individuals enrolled in US-VISIT for appropriate reporting and action. As more entry and exit information becomes available, it is to be used for analysis of traffic volume and patterns as well as for risk assessments. The analysis is also to be used to support resource and staffing projections across POEs, strategic planning for integrated border management analysis performed by the intelligence community, and determination of travel use levels and expedited traveler programs. In addition to the contact named above, the following people made key contributions to this report: Deborah Davis, Assistant Director; Hal Brumm; Tonia Brown; Joanna Chan; Barbara Collier; Neil Doherty; Jennifer Echard; James Houtz; Scott Pettis; Karen Richey; and Karl Seifert. | The Department of Homeland Security (DHS) has established a program--the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT)--to collect, maintain, and share information, including biometric identifiers, on selected foreign nationals entering and exiting the United States. US-VISIT uses these identifiers (digital fingerscans and photographs) to screen persons against watch lists and to verify that a visitor is the person who was issued a visa or other travel document. Visitors are also to confirm their departure by having their visas or passports scanned and undergoing fingerscanning at selected air and sea ports of entry (POE). GAO has made many recommendations to improve the program, all of which DHS has agreed to implement. GAO was asked to report on DHS's progress in responding to 18 of these recommendations. The current status of DHS's implementation of the 18 recommendations is mixed, but progress in critical areas has been slow. 
DHS has implemented 2 of the recommendations: it defined program staff positions, roles, and responsibilities, and it hired an independent verification and validation contractor. It has also taken steps to implement the other recommendations, partially completing 11 and beginning to implement another 5. In September 2003, GAO reported that the program had not assessed the costs and benefits of Increment 1 (which provides entry capabilities to air and sea POEs) and recommended that the program determine whether proposed increments will produce mission value commensurate with cost. In the latest cost-benefit analysis, dated June 23, 2005, the program identified potential costs and benefits for three alternatives for an air and sea exit solution. However, the analysis does not meet key Office of Management and Budget criteria; for example, it does not include a complete uncertainty analysis, which helps to provide decision makers with perspective on the potential variability of the cost and benefit estimates should circumstances change. GAO reported in May 2004 and February 2005 that system testing was not based on well-defined test plans and recommended that before testing begins, the program develop and approve test plans meeting certain criteria. However, although the latest test plan did cover many required areas (such as the tests to be performed), it did not adequately trace between test cases and the requirements to be verified by testing. Without complete and traceable test plans, the risk is increased that the deployed system will not perform as intended. In May 2004, GAO reported that the program had not assessed its workforce and facility needs for Increment 2B (which extends entry capabilities to the 50 busiest land POEs) and recommended that it do so. Since then, the program evaluated the processing times to issue and process entry/exit forms at 3 of the 50 busiest POEs and concluded that the results showed that no additional staff and only minor facilities modifications were required. However, the scope of the evaluation was limited. Since then, DHS has deployed and implemented Increment 2B capabilities to these 50 POEs, making the collection of predeployment baseline data for these sites impractical. Nonetheless, other alternatives, such as surveying site officials about the increment's impacts, have yet to be explored. Until they are, the program may not be able to accurately project resource needs or make any needed modifications to achieve its goals of minimizing US-VISIT's impact on POE operations, which was the impetus for GAO's recommendation. DHS attributed the pace of progress to competing demands on time and resources. The longer that US-VISIT takes to implement the recommendations, the greater the risk that the program will not meet its stated goals on time and within budget. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
DLA is the Department of Defense’s (DOD) logistics manager for all DOD consumable items and some of the department’s repair items. Its primary business function is to provide supply support in order to sustain military operations and readiness. In addition to this primary function, which DLA refers to as either “materiel management” or “supply-chain management,” DLA performs five other major business functions: distributing materiel ordered from its inventory; purchasing fuels for DOD and the U.S. government; storing strategic materiel; marketing surplus DOD materiel for reuse and disposal; and providing numerous information services, such as item cataloging, for DOD, the United States, and selected foreign governments. DLA consists of a central command authority supported by a number of field commands that manage the agency’s six business functions. Until about 1997, DLA generally developed its systems in-house. Since then, the agency has begun to acquire systems, relying on contractors for system development while itself managing the acquisition of these systems. Currently, DLA is in the process of acquiring two systems: Business Systems Modernization (BSM) and Fuels Automated System (FAS). BSM is intended to modernize DLA’s materiel management business function, changing the agency from being solely a provider and manager of physical inventory to being a manager of supply chains. In this role, DLA would link customers with appropriate suppliers and track physical and financial business practices. It is planning to replace two large legacy systems, as well as several supporting programs, that are more than 30 years old and are not integrated. BSM is based on commercially available software products. DLA plans to acquire and deploy its BSM system solution through a series of four system releases/increments. First, it plans to demonstrate successful application of its new concept of doing business for selected commodities—namely, earth-moving equipment, medical/pharmaceutical supplies, and F/A-18 engine components—at the three Defense Supply Centers. If this first release is successfully demonstrated, DLA plans to expand the system solution to other commodities in three additional increments. DLA plans to invest approximately $658 million to acquire and implement BSM from fiscal years 2000 through 2005. FAS is intended to help the Defense Energy Support Center manage about $5 billion in contracts with petroleum suppliers each year. FAS is to be a multifunctional system that provides for, among other things, point-of-sale data collection, inventory control, finance and accounting, procurement, and facilities management. FAS, which relies on a commercially available software package, is being fielded incrementally. Increment 1 is the base-level operational module that is currently being deployed to base-level sites worldwide. The second increment is the enterprise-level system, which is to be deployed to its direct delivery commodity business unit. DLA plans to invest $293 million in FAS from fiscal year 1995 through 2002. SEI’s SA-CMM is used to measure an organization’s capability to manage the acquisition of software. SEI’s expertise in, and model and methods for, software process assessment are recognized and accepted throughout the software industry. The model defines five levels of software acquisition maturity. Each level of maturity (except level 1) indicates process capability in relation to key process areas.
For a maturity level to be achieved, all key process areas related to that level must be implemented effectively. The second level of process maturity, level 2 (referred to as the repeatable level), demonstrates that basic management processes are established to track performance, cost, and schedule, and the necessary discipline is in place to repeat earlier successes on similar projects. Organizations that do not effectively implement all key process areas for the repeatable level are, by default, at level 1, the initial level of maturity. Level-1 processes can be described as immature, ad hoc, and sometimes chaotic; success in software acquisition for these organizations depends on the ability and commitment of the staff involved. Figure 1 further explains the five-level software acquisition model. We evaluated DLA against six of the seven level-2 (repeatable) key process areas in the SA-CMM. We did not evaluate DLA on the seventh key process area—transition to support—because the contractors who are implementing BSM and FAS will support these systems when they are operational, rendering transition to support irrelevant for these acquisitions. We evaluated DLA against one level-3 (defined) key process area—acquisition risk management—because many software acquisition experts consider it to be one of the most important key process areas. These key process areas are described in table 1. As established by the model, each key process area contains five common features—commitment to perform, ability to perform, activities to be performed, measurement and analysis of activities, and verification of activities’ implementation. These five features collectively provide a framework for the implementation and institutionalization of the key process areas. The common feature definitions are as follows: Commitment to perform: This feature describes the actions that the organization takes to establish the process and ensure that it can endure. Key practices typically involve establishing organizational policies and sponsorship. Ability to perform: This feature describes the preconditions that must exist in the project or organization to implement the software acquisition process competently. Key practices typically include assigning responsibility and providing training. Activities to be performed: This feature describes the roles and procedures necessary to implement a key process area. Key practices typically involve establishing plans and procedures, performing the work, tracking it, and taking appropriate management actions. Measurement and analysis of activities: This feature describes the steps necessary to measure progress and analyze the measurements. Key practices typically involve defining the measurements to be taken and the analyses to be conducted to determine the status and effectiveness of the activities performed. Verification of activities’ implementation: This feature describes the steps the organization must take to ensure that project activities are performed in accordance with established processes. Key practices typically involve regular reviews by management. Each common feature consists of a number of key practices—specific actions such as developing an organizational policy for software acquisition, developing various plans for software acquisition activities, and tracking a contractor’s progress. 
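These elements roll up mechanically: a key process area is implemented effectively only when its relevant key practices are, and a maturity level is achieved only when every key process area tied to it is satisfied. The Python sketch below illustrates that roll-up under the all-or-nothing rules stated above; the rating values anticipate the four evaluation outcomes defined next, and the project data are hypothetical.

```python
from enum import Enum

class Rating(Enum):
    """Possible outcomes when a key practice is evaluated (the four
    ratings defined in the text that follows this sketch)."""
    STRENGTH = "strength"
    WEAKNESS = "weakness"
    OBSERVATION = "observation"
    NOT_RATED = "not rated"

def kpa_satisfied(practice_ratings):
    """A key process area is satisfied only if every relevant (rated)
    key practice was effectively implemented, i.e., rated a strength."""
    relevant = [r for r in practice_ratings if r is not Rating.NOT_RATED]
    return bool(relevant) and all(r is Rating.STRENGTH for r in relevant)

def level_achieved(level_kpas):
    """A maturity level is achieved only if all key process areas
    related to that level are satisfied."""
    return all(kpa_satisfied(r) for r in level_kpas.values())

# Hypothetical level-2 ratings for one project: a single weakness in
# one key process area is enough to keep the level from being achieved.
level2 = {
    "software acquisition planning": [Rating.STRENGTH, Rating.STRENGTH],
    "solicitation": [Rating.STRENGTH, Rating.WEAKNESS],
}
print(level_achieved(level2))  # False
```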
When an organization is evaluated against the SA-CMM, comparisons of actual performance against a key practice can result in one of four possible outcomes or ratings: Strength: The key practice involved was effectively implemented. Weakness: The key practice was not effectively implemented or was not implemented. Observation: The key practice was evaluated, but cannot be characterized as a strength because (1) the project team did not provide sufficient evidence to support a strength rating or (2) the key practice was only partially performed. Not rated: The key practice is not relevant to the project. To achieve the repeatable level, DLA would have to demonstrate that the key practices related to this level were implemented effectively in the software acquisition projects being evaluated, and thus that project successes can be repeated in future projects. DLA is not at level 2 (the repeatable level of maturity) when compared with the SA-CMM—meaning that DLA does not possess an agencywide or corporate ability to effectively acquire software-intensive systems. Whereas DLA’s BSM project fully or substantially satisfied SEI’s SA-CMM requirements for the key process areas for level 2, as well as requirements for one level 3 (defined level) key process area, its FAS project did not satisfy all the criteria for any of these key process areas. How each system compared with the SA-CMM is summarized below. BSM completely satisfied requirements for three of the level-2 key process areas, as well as for the one level-3 key process area, and substantially satisfied requirements for the remaining three level-2 key process areas that we evaluated. (See table 2 for the percentage of strengths and weaknesses for each area evaluated.) According to BSM officials, the project’s success in satisfying the criteria for the key process areas is attributable to the following factors: allocating adequate resources; following good program management practices, as defined in DOD Directive 5000; and working closely with relevant oversight groups. To address those few weaknesses that we identified, project officials told us that they have initiated corrective action. BSM satisfied all key practices in the following areas: software acquisition planning, such as (1) having a written software acquisition policy, (2) having adequate resources for software acquisition planning activities, (3) developing and documenting the software acquisition strategy and plan, and (4) making and using measurements to determine the status of software acquisition planning activities; project management, including (1) designating responsibility for project management, (2) having a written policy for the management of the software project, (3) having adequate resources for the duration of the software acquisition project, and (4) tracking the risks associated with cost, schedule, resources, and the technical aspects of the project; contract tracking and oversight, including (1) designating responsibility for contract tracking and oversight, (2) including contract specialists in the project team, and (3) having a documented plan for contract tracking and oversight; and acquisition risk management, such as (1) having a risk management plan, (2) having a written policy for the management of software acquisition risk, and (3) measuring and reporting on the status of acquisition risk management activities to management. BSM also satisfied all but one key practice in solicitation.
Strengths included (1) designating responsibility for the software portion of the solicitation, (2) preparing cost and schedule estimates for the software products and services being acquired, and (3) having an independent review of cost and schedule estimates for the software products and services being acquired. BSM’s one weakness in this key process area was in not having a written policy for the software portion of the solicitation. This is significant because, according to the SEI, an institutional policy provides for establishing an enduring process. BSM also satisfied all but three key practices in requirements development and management. Strengths included (1) having a written policy for managing the software-related contractual requirements, (2) having a group that is responsible for performing requirements development and management activities, and (3) measuring and reporting to management on the status of requirements development and management activities. One of the three weaknesses was the lack of a documented requirements development and management plan. Such a plan provides a roadmap for completing important requirements development and management activities. Without it, projects risk either not performing important tasks or not performing them effectively. The other two weaknesses involved the project office’s appraisal of system requirements changes. Specifically, BSM did not appraise (1) requests to change system requirements for their impact on the software being acquired or (2) all changes to the requirements for impact on performance and contract schedule and cost. These activities are critical to making informed, risk-based decisions about whether to approve requirements changes. Last, BSM satisfied all but one key practice in evaluation, and we do not view that weakness as significant. Strengths included (1) designating responsibility for contract tracking and oversight, (2) documenting evaluation plans and conducting evaluation activities in accordance with the plan, and (3) developing and managing evaluation requirements in conjunction with developing software technical requirements. By generally satisfying these key process areas for its BSM project, DLA has increased the chances that the software acquired on this project will meet stated requirements and will be delivered on time and within budget. See appendix II for more detailed information on key process areas and our findings on BSM. Because of the number and severity of its key practice weaknesses, FAS did not fully satisfy all the criteria for any of the five level-2 SA-CMM key process areas or for the one level-3 key process area that we evaluated. (See table 3 for the percentage of strengths and weaknesses for each area evaluated.) According to FAS officials, these weaknesses are attributable to a lack of adequate resources for the process areas. However, these officials stated that they are currently in the process of reorganizing and addressing resource shortages. In the software-acquisition–planning key process area, FAS had 12 strengths, 2 weaknesses, and 1 observation. Strengths included, among other things, (1) having a written software acquisition policy, (2) developing and documenting the software acquisition strategy and plan, and (3) having management review software-acquisition–planning activities.
Weaknesses included (1) not having adequate resources for software-acquisition–planning activities and (2) not measuring the status of the software-acquisition–planning activities and resultant products. The weaknesses are significant because they could prevent management from developing effective plans, from being aware of problems in meeting planned commitments, or from taking necessary corrective actions expeditiously. In the requirements development and management key process area, FAS had six strengths, six weaknesses, and two observations. Examples of strengths included (1) having a written policy for managing the software- related contractual requirements and (2) having a group that is responsible for performing requirements development and management activities. However, we found weaknesses in important key practices that jeopardize effective control of the requirements baseline and can result in software products that do not meet cost, schedule, or performance objectives. Specific examples of weaknesses included (1) not having a documented requirements development and management plan, (2) not appraising requests to change system requirements for their impact on the software being acquired, (3) not appraising changes to the software-related contractual requirements for their impact on performance and contract schedule and cost, and (4) not measuring and reporting to management on the status of requirements development and management activities. In the project management key process area, FAS had 10 strengths and 6 weaknesses. Strengths included, among other things, (1) designating responsibility for project management, (2) having a written policy for the management of the software project, and (3) using a corrective action system for identifying, recording, tracking, and correcting problems. Examples of weaknesses included (1) not having adequate resources for the duration of the software acquisition project, (2) not documenting the roles, responsibilities, and authority for the project functions, and (3) not tracking the risks associated with cost, schedule, and resources. These weaknesses are significant because they could jeopardize the project’s ability to ensure that important project management and contractor activities are defined, understood, and completed. FAS had 11 strengths, 5 weaknesses, and 1 observation in the contract tracking and oversight key process area. Strengths included, among other things, (1) designating responsibility for contract tracking and oversight, (2) including contract specialists on the project team, and (3) ensuring that individuals performing contract tracking and oversight activities had experience or received training. Examples of weaknesses included (1) not having a documented plan for contract tracking and oversight and (2) not comparing the actual cost and schedule of the contractor’s software engineering effort with planned schedules and budgets. Because of these weaknesses, FAS contractor tracking and oversight activities are undisciplined and unstructured, thereby increasing the chances of FAS software acquisitions being late, costing more than expected, and not performing as intended. In the evaluation key process area, FAS had nine strengths, two weaknesses, two observations, and two areas that were not rated. 
Strengths included, among other things, (1) designating responsibility for planning, managing, and performing evaluation activities, (2) documenting evaluation plans and conducting evaluation activities in accordance with the plan, and (3) developing and managing evaluation requirements in conjunction with developing software technical requirements. Weaknesses were (1) not ensuring that adequate resources were provided for evaluation activities and (2) not measuring and reporting on the status of evaluation activities to management. These weaknesses are significant because they preclude DLA decisionmakers from knowing whether contractor-developed software is meeting defined requirements. FAS performed poorly in the one level-3 key process area that we evaluated—acquisition risk management—with 3 strengths, 11 weaknesses, and 1 observation. Examples of strengths included (1) having a written policy for the management of software acquisition risk and (2) designating responsibility for software acquisition risk activities. Weaknesses included, among others, (1) not having adequate resources for performing risk management activities, (2) not having a software risk management plan, and (3) not measuring and reporting on the status of acquisition risk management activities to management. Because of these weaknesses, the project office does not have adequate assurance that it will promptly identify risks and effectively mitigate them before they become problems. By not satisfying any of these key process areas for its FAS project, DLA is unnecessarily increasing the risk that the software acquired on this project will not meet stated requirements and will not be delivered on time and within budget. Appendix III provides more details on the key process areas and our findings on FAS. The quality of the processes involved in developing, acquiring, and engineering software and systems has a significant effect on the quality of the resulting products. Accordingly, process improvement programs can increase product quality and decrease product costs. Public and private organizations have reported significant returns on investment through such process improvement programs. In particular, SEI has published reports of benefits realized through process improvement programs. For example, SEI reported in 1995 that a major defense contractor had implemented a process improvement program in 1988 and by 1995 had reduced its rework costs from about 40 percent of project cost to about 10 percent, increased staff productivity by about 170 percent, and reduced defects by about 75 percent. According to a 1999 SEI report, a software development contractor reduced its average deviation from estimated schedule time from 112 percent to 5 percent between 1988 and 1996. During the same period, SEI reported that this contractor reduced its average deviation from estimated cost from 87 percent to minus 4 percent. DLA does not currently have a software process improvement program, and recent efforts to establish one have not made much progress. We recently reported on DOD’s software process improvement efforts, including those within DLA. Specifically, we reported that before 1998, DLA had a software process improvement program; however, DLA eliminated it during a reorganization in 1998. 
In response to our report, DLA’s Chief Information Officer said that the software process improvement program was to be reestablished during fiscal year 2001 and that DLA’s goal would be for its system developers and acquirers to reach a level 2 on the CMM by fiscal year 2002. To date, DLA has established an integrated product team for software process improvement that is tasked to study DLA’s software processes and, based on this study, to make recommendations on areas in which DLA needs to improve. DLA has also dropped its goal of achieving level 2 by 2002, and it does not intend to specify a CMM level for its contractors. The software process improvement team has produced several draft papers and a draft policy, but it does not have a plan or milestones for achieving software process improvement. According to an agency official associated with DLA’s process improvement effort, funding to develop and implement a software process improvement program has not been approved because of other agency IT funding priorities, such as BSM. DLA does not have the institutional management capabilities necessary for effectively acquiring quality software repeatedly on one project after another. This lack of agencywide consistency in software acquisition management controls means that software project success at DLA currently depends more on the individuals assigned to a given project than on the rules governing how any assigned individuals will function. That has proven to be a risky way to manage software-intensive acquisitions. To DLA’s benefit, it currently has a model software acquisition project (BSM) that, albeit not perfect, is a solid example from which to leverage lessons learned and replicate effective software acquisition practices across the agency. To do so effectively, however, DLA will need to devote adequate resources to correct the weaknesses in the software acquisition processes discussed in this report and to implement and sustain a formal software process improvement program. To reduce the software acquisition risks associated with its two ongoing acquisition projects, we recommend that the Secretary of Defense direct the Director of DLA to immediately correct each BSM and FAS software-acquisition–practice weakness identified in this report. To ensure that DLA has in place the necessary process controls to acquire quality software consistently on future acquisition projects, we recommend that the Secretary also direct the DLA Director to issue a policy requiring that (1) DLA software-intensive acquisition projects satisfy all applicable SEI SA-CMM level-2 key process areas and the level-3 risk management key process area and (2) DLA software contractors have comparable software process maturity levels; and direct the Chief Information Officer (CIO) to establish and sustain a software process improvement program, including (1) developing and implementing a software process improvement plan that specifies measurable goals and milestones, (2) providing adequate resources to the program, and (3) reporting to the Director every 6 months on progress against plans. DOD provided what it termed “official oral comments” from the Deputy Under Secretary for Logistics and Materiel Readiness on a draft of this report. In its comments, DOD stated that it generally concurred with the report and concurred with the recommendations. 
In particular, DOD stated that it will issue policy directives requiring the Director of DLA to (1) correct identified software acquisition practice weaknesses, except in circumstances in which corrections to past events make doing so impractical; (2) implement a plan in all software-intensive projects to satisfy all applicable SEI SA-CMM level-2 and level-3 key process areas, and require all DLA software contractors to have comparable software process maturity levels; and (3) establish and sustain a software process improvement program that includes a plan specifying measurable goals and milestones, provides adequate resources, and reports to the Director of DLA every 6 months on progress against the plan. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Appropriations Subcommittee on Defense; the Subcommittee on Readiness and Management Support, Senate Committee on Armed Services; the House Appropriations Subcommittee on Defense; and the Subcommittee on Readiness, House Committee on Armed Services. We are also sending copies to the Director, Office of Management and Budget; the Under Secretary of Defense for Acquisition and Technology; the Deputy Under Secretary of Defense for Logistics and Materiel Readiness; and the Director, Defense Logistics Agency. Copies will be made available to others upon request. If you have any questions regarding this report, please contact me at (202) 512-3439 or by e-mail at [email protected]. An additional GAO contact and staff acknowledgements are listed in appendix IV. Our objectives were to determine (1) whether the Defense Logistics Agency (DLA) has the effective software acquisition processes necessary to modernize and maintain systems and (2) what actions DLA has planned or in place to improve these processes. To determine whether DLA has effective software acquisition processes, we applied the Software Engineering Institute’s (SEI) Software Acquisition Capability Maturity Model using our SEI-trained analysts. We focused on the key process areas necessary to obtain a repeatable level of maturity, the second level of SEI’s five-level model. We also evaluated against one level-3 key process area—acquisition risk management—because of its importance. We met with project managers and project team members to determine whether and to what extent they implemented each key practice, and to obtain relevant documentation. In accordance with the SEI model, for each key process area we reviewed, we evaluated DLA’s institutional policies and practices and compared project-specific guidance and practices against the required key practices. More specifically, for each key practice we reviewed, we compared project-specific documentation and practices against the criteria in the software acquisition model. If the project met the criteria for the key practice reviewed, we rated it as a strength. If the project did not meet the criteria for the key practice reviewed, we rated it as a weakness. If the evidence was mixed or inconclusive and did not support a rating of either a strength or a weakness, we treated it as an observation. If the key practice was not relevant to the project, we did not rate it. We evaluated DLA’s only two software acquisition projects underway at the time of our review: the Business Systems Modernization (BSM) and the Fuels Automated System (FAS). 
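To illustrate the rating scheme just described, the following minimal Python sketch (illustrative only—the practice ratings fed into it are hypothetical placeholders, not our actual evaluation data) shows how practice-level ratings can be rolled up into the percentages of strengths and weaknesses reported for each key process area, as in table 3:

```python
from collections import Counter

# Rating labels for a key practice, per the methodology described above.
STRENGTH, WEAKNESS, OBSERVATION, NOT_RATED = "strength", "weakness", "observation", "not rated"

def rate_practice(meets_criteria, evidence_conclusive, relevant=True):
    """Apply the rating rules: not rated if irrelevant; observation if the
    evidence is mixed or inconclusive; otherwise strength or weakness."""
    if not relevant:
        return NOT_RATED
    if not evidence_conclusive:
        return OBSERVATION
    return STRENGTH if meets_criteria else WEAKNESS

def summarize_area(ratings):
    """Percentages of strengths and weaknesses among rated practices
    (one plausible convention for figures of the kind shown in table 3)."""
    counts = Counter(ratings)
    rated = sum(counts[r] for r in (STRENGTH, WEAKNESS, OBSERVATION))
    return {"strengths_pct": round(100 * counts[STRENGTH] / rated, 1),
            "weaknesses_pct": round(100 * counts[WEAKNESS] / rated, 1)}

# Hypothetical roll-up using the counts reported for FAS software-acquisition
# planning: 12 strengths, 2 weaknesses, 1 observation.
ratings = [STRENGTH] * 12 + [WEAKNESS] * 2 + [OBSERVATION]
print(summarize_area(ratings))  # {'strengths_pct': 80.0, 'weaknesses_pct': 13.3}
```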
To determine what actions DLA has planned or in place to improve its software processes, we identified the group within DLA that is tasked with performing this function. We interviewed agency officials who are involved in software process improvement, collected data, and analyzed draft policies and draft working papers describing planned work. We performed our work from May through October 2001, in accordance with generally accepted government auditing standards. In addition to the individual named above, key contributors to this report were Suzanne Burns, Yvette Banks, Niti Bery, Sophia Harrison, Madhav Panwar, and Teresa Tucker. | The Defense Logistics Agency (DLA) plays a critical role in supporting America's military forces worldwide. DLA relies on software-intensive systems to support its work. An important determinant of the quality of software-intensive systems, and thus DLA's mission performance, is the quality of the processes used to acquire these systems. DLA lacks mature software acquisition processes across the agency, as seen in the wide disparity in the rigor and discipline of processes between the two systems GAO evaluated. DLA also lacks a software process improvement program to effectively strengthen its corporate software acquisition processes. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT investments and how these funds are to be allocated. In fiscal year 2011, government IT spending reported to OMB totaled approximately $79 billion. OMB plays a key role in helping federal agencies manage their investments by working with them to better plan, justify, and determine how much they need to spend on projects and how to manage approved projects. To assist agencies in managing their investments, Congress enacted the Clinger-Cohen Act of 1996, which requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these investments. Further, the act places responsibility for managing investments with the heads of agencies and establishes chief information officers (CIO) to advise and assist agency heads in carrying out this responsibility. The Clinger-Cohen Act strengthened the requirements of the Paperwork Reduction Act of 1995, which established agency responsibility for maximizing value and assessing and managing the risks of major information systems initiatives. The Paperwork Reduction Act also requires that OMB develop and oversee policies, principles, standards, and guidelines for federal agency IT functions, including periodic evaluations of major information systems. Another key law is the E-Government Act of 2002, which requires OMB to report annually to Congress on the status of e-government. In these reports, referred to as Implementation of the E-Government Act reports, OMB is to describe the administration’s use of e-government principles to improve government performance and the delivery of information and services to the public. To help carry out its oversight role, in 2003, OMB established the Management Watch List, which included mission-critical projects that needed to improve performance measures, project management, IT security, or overall justification for inclusion in the federal budget. Further, in August 2005, OMB established a High-Risk List, which consisted of projects identified by federal agencies, with the assistance of OMB, as requiring special attention from oversight authorities and the highest levels of agency management. Over the past several years, we have reported and testified on OMB’s initiatives to highlight troubled IT projects, justify investments, and use project management tools. We have made multiple recommendations to OMB and federal agencies to improve these initiatives to further enhance the oversight and transparency of federal projects. Among other things, we recommended that OMB develop a central list of projects and their deficiencies and analyze that list to develop governmentwide and agency assessments of the progress and risks of the investments, identifying opportunities for continued improvement. In addition, in 2006 we also recommended that OMB develop a single aggregate list of high-risk projects and their deficiencies and use that list to report to Congress on progress made in correcting high-risk problems. As a result, OMB started publicly releasing aggregate data on its Management Watch List and disclosing the projects’ deficiencies. 
Furthermore, OMB issued governmentwide and agency assessments of the projects on the Management Watch List and identified risks and opportunities for improvement, including in the areas of risk management and security. More recently, to further improve the transparency and oversight of agencies’ IT investments, in June 2009, OMB publicly deployed a website, known as the IT Dashboard, which replaced the Management Watch List and High-Risk List. It displays federal agencies’ cost, schedule, and performance data for the approximately 800 major federal IT investments at 27 federal agencies. According to OMB, these data are intended to provide a near-real-time perspective on the performance of these investments, as well as a historical perspective. Further, the public display of these data is intended to allow OMB; other oversight bodies, including Congress; and the general public to hold the government agencies accountable for results and progress. The Dashboard was initially deployed in June 2009 based on each agency’s exhibit 53 and exhibit 300 submissions. After the initial population of data, agency CIOs have been responsible for updating cost, schedule, and performance fields on a monthly basis, which is a major improvement from the quarterly reporting cycle OMB previously used for the Management Watch List and High-Risk List. For each major investment, the Dashboard provides performance ratings on cost and schedule, a CIO evaluation, and an overall rating, which is based on the cost, schedule, and CIO ratings. As of July 2010, the cost rating is determined by a formula that calculates the amount by which an investment’s total actual costs deviate from the total planned costs. Similarly, the schedule rating is the variance between the investment’s planned and actual progress to date. Figure 1 displays the rating scale and associated categories for cost and schedule variations. Each major investment on the Dashboard also includes a rating determined by the agency CIO, which is based on his or her evaluation of the performance of each investment. The rating is expected to take into consideration the following criteria: risk management, requirements management, contractor oversight, historical performance, and human capital. This rating is to be updated when new information becomes available that would affect the assessment of a given investment. Last, the Dashboard calculates an overall rating for each major investment. This overall rating is an average of the cost, schedule, and CIO ratings, with each representing one-third of the overall rating. However, when the CIO’s rating is lower than both the cost and schedule ratings, the CIO’s rating will be the overall rating. Figure 2 shows the overall performance ratings of the 797 major investments on the Dashboard as of August 2011. We have previously reported that the cost and schedule ratings on OMB’s Dashboard were not always accurate for selected agencies. In July 2010, we reviewed investments at the Departments of Agriculture, Defense, Energy, Health and Human Services, and Justice, and found that the cost and schedule ratings on the Dashboard were not accurate for 4 of 8 selected investments and that the ratings did not take into consideration current performance; specifically, the ratings calculations factored in only completed activities. We also found that there were large inconsistencies in the number of investment activities that agencies report on the Dashboard. 
In the report, we recommended that OMB report on the effect of planned changes to the Dashboard and provide guidance to agencies to standardize activity reporting. We further recommended that the selected agencies comply with OMB’s guidance to standardize activity reporting. OMB and the Department of Energy concurred with our recommendations, while the other selected agencies provided no comments. In July 2010, OMB updated the Dashboard’s cost and schedule calculations to include both ongoing and completed activities. In March 2011, we reported that agencies and OMB need to do more to ensure the Dashboard’s data accuracy. Specifically, we reviewed investments at the Departments of Homeland Security, Transportation, the Treasury, and Veterans Affairs, and the Social Security Administration. We found that cost ratings were inaccurate for 6 of 10 selected investments and schedule ratings were inaccurate for 9 of 10. We also found that weaknesses in agency and OMB practices contributed to the inaccuracies on the Dashboard; for example, agencies had uploaded erroneous data, and OMB’s ratings did not emphasize current performance. We therefore recommended that the selected agencies provide complete and accurate data to the Dashboard on a monthly basis and ensure that the CIOs’ ratings of investments disclose issues that could undermine the accuracy of investment data. Further, we recommended that OMB improve how it rates investments related to current performance and schedule variance. The selected agencies generally concurred with our recommendation. OMB disagreed with the recommendation to change how it reflects current investment performance in its ratings because Dashboard data are updated on a monthly basis. However, we maintained that current investment performance may not always be as apparent as it should be; while data are updated monthly, the ratings include historical data, which can mask more recent performance. Most of the cost and schedule ratings on the Dashboard were accurate, but did not provide sufficient emphasis on recent performance to inform oversight and decision making. Performance rating discrepancies were largely due to missing or incomplete data submissions from the agencies. However, we generally found fewer such discrepancies than in previous reviews, and in all cases the selected agencies found and corrected these inaccuracies in subsequent submissions. In the case of GSA, officials did not disclose that performance data on the Dashboard were unreliable for one investment because of an ongoing baseline change. Without proper disclosure of pending baseline changes, the Dashboard will not provide the appropriate insight into investment performance needed for near-term decision making. Additionally, because of the Dashboard’s ratings calculations, the current performance for certain investments was not as apparent as it should be for near-real-time reporting purposes. If fully implemented, OMB’s recent and ongoing changes to the Dashboard, including new cost and schedule rating calculations and updated investment baseline reporting, should address this issue. These Dashboard changes could be important steps toward improving insight into current performance and the utility of the Dashboard for effective executive oversight. In general, the number of discrepancies we found in our reviews of selected investments has decreased since July 2010. 
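For reference in the assessments that follow, the Dashboard’s overall rating rule described earlier—cost, schedule, and CIO ratings each weighted one-third, with the CIO rating governing when it is the lowest—can be expressed as a minimal sketch (the 0–10 numeric scale here is assumed for illustration; the Dashboard maps its ratings to color categories):

```python
def overall_rating(cost, schedule, cio):
    """Overall rating per the rule described above: the average of the
    cost, schedule, and CIO ratings (one-third each), except that the
    CIO rating becomes the overall rating when it is lower than both
    of the others. Higher values are assumed to mean better performance."""
    if cio < cost and cio < schedule:
        return cio
    return (cost + schedule + cio) / 3

# Hypothetical ratings on a 0-10 scale:
print(overall_rating(8, 6, 7))  # 7.0 -> simple average of the three
print(overall_rating(8, 6, 4))  # 4   -> CIO rating is lowest, so it governs
```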
According to our assessment of the eight selected investments, half had accurate cost ratings and nearly all had accurate schedule ratings on the Dashboard. Table 1 shows our assessment of the selected investments during a 6-month period from October 2010 through March 2011. As shown above, the Dashboard’s cost ratings for four of the eight selected investments were accurate, and four did not match the results of our analyses during the period from October 2010 through March 2011. Specifically, State’s Global Foreign Affairs Compensation System and Interior’s Land Satellites Data System investments had inaccurate cost ratings for at least 5 months, GSA’s System for Tracking and Administering Real Property/Realty Services was inaccurate for 3 months, and Interior’s Financial and Business Management System was inaccurate for 2 months. In all of these cases, the Dashboard’s cost ratings showed poorer performance than our assessments. For example, State’s Global Foreign Affairs Compensation System investment’s cost performance was rated “yellow” (i.e., needs attention) in October and November 2010, and “red” (i.e., significant concerns) from December 2010 through March 2011, whereas our analysis showed its cost performance was “green” (i.e., normal) during those months. Additionally, GSA’s System for Tracking and Administering Real Property/Realty Services investment’s cost performance was rated “yellow” from October 2010 through December 2010, while our analysis showed its performance was “green” for those months. Regarding schedule, the Dashboard’s ratings for seven of the eight selected investments matched the results of our analyses over this same 6-month period, while the ratings for one did not. Specifically, Interior’s Land Satellites Data System investment’s schedule ratings were inaccurate for 2 months; its schedule performance on the Dashboard was rated “yellow” in November and December 2010, whereas our analysis showed its performance was “green” for those months. As with cost, the Dashboard’s schedule ratings for this investment for these 2 months showed poorer performance than our assessment. There were three primary reasons for the inaccurate cost and schedule Dashboard ratings described above: agencies did not report data to the Dashboard or uploaded incomplete submissions, agencies reported erroneous data to the Dashboard, and the investment baseline on the Dashboard was not reflective of the investment’s actual baseline (see table 2). Missing or incomplete data submissions: Four selected investments did not upload complete and timely data submissions to the Dashboard. For example, State officials did not upload data for one of the Global Foreign Affairs Compensation System investment’s activities from October 2010 through December 2010. According to a State official, the department’s investment management system was not properly set to synchronize all activity data with the Dashboard. The official stated that this issue was corrected in December 2010. Erroneous data submissions: One selected investment—Interior’s Land Satellites Data System—reported erroneous data to the Dashboard. Specifically, Interior officials mistakenly reported certain activities as fully complete rather than partially complete in data submissions from September 2010 through December 2010. Agency officials acknowledged the error and stated that they submitted correct data in January and February 2011 after they realized there was a problem. 
Inconsistent investment baseline: One selected investment—GSA’s System for Tracking and Administering Real Property/Realty Services—reported a baseline on the Dashboard that did not match the actual baseline tracked by the agency. In June 2010, OMB issued new guidance on rebaselining, which stated that agencies should update investment baselines on the Dashboard within 30 days of internal approval of a baseline change and that this update will be considered notification to OMB. The GSA investment was rebaselined internally in November 2010, but the baseline on the Dashboard was not updated until February 2011. GSA officials stated that they submitted the rebaseline information to the Dashboard in January 2011 and thought that it had been successfully uploaded; however, in February 2011, officials realized that the new baseline was not on the Dashboard. GSA officials successfully uploaded the rebaseline information in late February 2011. Additionally, OMB’s guidance states that agency CIOs should update the CIO evaluation on the Dashboard as soon as new information becomes available that affects the assessment of a given investment. During an agency’s internal process to update an investment baseline, the baseline on the Dashboard will not be reflective of the current state of the investment; thus, investment CIO ratings should disclose such information. However, the CIO evaluation ratings for GSA’s System for Tracking and Administering Real Property/Realty Services investment did not provide such a disclosure. Without proper disclosure of pending baseline changes and resulting data reliability weaknesses, OMB and other external oversight groups will not have the appropriate information to make informed decisions about these investments. In all of the instances where we identified inaccurate cost or schedule ratings, agencies had independently recognized that there was a problem with their Dashboard reporting practices and taken steps to correct them. Such continued diligence by agencies to report accurate and timely data will help ensure that the Dashboard’s performance ratings are accurate. According to OMB, the Dashboard is intended to provide a near-real-time perspective on the performance of all major IT investments. Furthermore, our work has shown cost and schedule performance information from the most recent 6 months to be a reliable benchmark for providing this perspective on investment status. This benchmark for current performance provides information needed by OMB and agency executive management to inform near-term budgetary decisions, to obtain early warning signs of impending schedule delays and cost overruns, and to ensure that actions taken to reverse negative performance trends are timely and effective. The use of such a benchmark is also consistent with OMB’s exhibit 300 guidelines, which specify that project activities should be broken into segments of 6 months or less. In contrast, the Dashboard’s cost and schedule ratings calculations reflect a more cumulative view of investment performance dating back to the inception of the investment. Thus, a rating for a given month is based on information from the entire history of each investment. While a historical perspective is important for measuring performance over time relative to original cost and schedule targets, this information may be dated for near-term budget and programmatic decisions. Moreover, combining more recent and historical performance can mask the current status of the investment. 
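A small numeric sketch makes this masking effect concrete (the monthly dollar figures and the 10 and 30 percent color cutoffs are invented for illustration; the Dashboard’s actual rating categories appear in figure 1):

```python
def color(planned, actual):
    """Map the percentage deviation of actual from planned cost to a color.
    The 10/30 percent cutoffs are invented for this sketch."""
    deviation = 100 * abs(actual - planned) / planned
    if deviation < 10:
        return "green"
    return "yellow" if deviation < 30 else "red"

# Hypothetical investment: on plan for 54 months, then 40 percent over
# budget in each of the 6 most recent months.
planned = [10.0] * 60
actual = [10.0] * 54 + [14.0] * 6

print(color(sum(planned), sum(actual)))            # green  (cumulative: 4% off)
print(color(sum(planned[-6:]), sum(actual[-6:])))  # red    (recent: 40% off)
```

Under the cumulative calculation, this hypothetical investment still rates “green” despite six consecutive months of severe overruns, because years of on-plan history dilute the recent deviation.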
As more time elapses, the impact of this masking effect will increase because current performance becomes a relatively smaller factor in an investment’s cumulative rating. In addition to our assessment of cumulative investment performance (as reflected in the Dashboard ratings), we determined whether the ratings were also reflective of current performance. Our analysis showed that two selected investments had a discrepancy between cumulative and current performance ratings. Specifically, State’s Global Foreign Affairs Compensation System investment’s schedule performance was rated “green” on the Dashboard from October 2010 through March 2011, whereas our analysis showed its current performance was “yellow” for most of that time. From a cumulative perspective, the Dashboard’s ratings for this investment were accurate (as previously discussed in this report); however, these take into account activities dating back to 2003. Interior’s Financial and Business Management System investment’s cost performance was rated “green” on the Dashboard from December 2010 through March 2011; in contrast, our analysis showed its current performance was “yellow” for those months. The Dashboard’s cost ratings accurately reflected cumulative cost performance from 2003 onward. Further analysis of the Financial and Business Management System’s schedule performance ratings on the Dashboard showed that because of the amount of historical performance data factored into its ratings as of July 2011, it would take a minimum schedule variance of 9 years on the activities currently under way in order to change its rating from “green” to “yellow,” and a variance of more than 30 years before turning “red.” We have previously recommended to OMB that it develop cost and schedule Dashboard ratings that better reflect current investment performance. At that time, OMB disagreed with the recommendation, stating that real-time performance is always reflected in the ratings since current investment performance data are uploaded to the Dashboard on a monthly basis. However, in September 2011, officials from OMB’s Office of E-Government & Information Technology stated that changes designed to improve insight into current performance on the Dashboard have either been made or are under way. If OMB fully implements these actions, the changes should address our recommendation. Specifically: New project-level reporting: In July 2011, OMB issued new guidance to agencies regarding the information that is to be reported to the Dashboard. In particular, beginning in September 2011, agencies are required to report data to the Dashboard at a detailed project level, rather than at the investment level previously required. Further, the guidance emphasizes that ongoing work activities should be broken up and reported in increments of 6 months or less. Updated investment baseline reporting: OMB officials stated that agencies are required to update existing investment baselines to reflect planned fiscal year 2012 activities, as well as data from the last quarter of fiscal year 2011 onward. OMB officials stated that historical investment data that are currently on the Dashboard will be maintained, but plans have yet to be finalized on how these data may be displayed on the new version of the Dashboard. New cost and schedule ratings calculations: OMB officials stated that work is under way to change the Dashboard’s cost and schedule ratings calculations. 
Specifically, officials said that the new calculations will emphasize ongoing work and reflect only development efforts, not operations and maintenance activities. In combination with the first action on defining 6-month work activities, the calculations should result in ratings that better reflect current performance. OMB plans for the new version of the Dashboard to be fully viewable by the public upon release of the President’s Budget for fiscal year 2013. Once OMB implements these changes, they could be significant steps toward improving insight into current investment performance on the Dashboard. We plan to evaluate the new version of the Dashboard once it is publicly available in 2012. Since our first review in July 2010, the accuracy of investment ratings on the Dashboard has improved because of OMB’s refinement of its cost and schedule calculations, and the number of discrepancies found in our reviews has decreased. While rating inaccuracies continue to exist, for the discrepancies we identified, the Dashboard’s ratings generally showed poorer performance than our assessments. Reasons for inaccurate Dashboard ratings included missing or incomplete agency data submissions, erroneous data submissions, and inconsistent investment baseline information. In all cases, the selected agencies detected the discrepancies and corrected them in subsequent Dashboard data submissions. However, in GSA’s case, officials did not disclose that performance data on the Dashboard were unreliable for one investment because of an ongoing baseline change. Additionally, the Dashboard’s ratings calculations reflect cumulative investment performance—a view that is important but does not meet OMB’s goal of reporting near-real-time performance. Our IT investment management work has shown a 6-month view of performance to be a reliable benchmark for current performance, as well as a key component of informed executive decisions about the budget and program. OMB’s Dashboard changes could be important steps toward improving insight into current performance and the utility of the Dashboard for effective executive oversight. To better ensure that the Dashboard provides accurate cost and schedule performance ratings, we are recommending that the Administrator of GSA direct its CIO to comply with OMB’s guidance related to Dashboard data submissions by updating the CIO rating for a given GSA investment as soon as new information becomes available that affects the assessment, including when an investment is in the process of a rebaseline. Because we have previously made recommendations addressing the development of Dashboard ratings calculations that better reflect current performance, we are not making additional recommendations to OMB at this time. We provided a draft of our report to the five agencies selected for our review and to OMB. In written comments on the draft, Commerce’s Acting Secretary concurred with our findings. Also in written comments, GSA’s Administrator stated that GSA agreed with our finding and recommendation and would take appropriate action. Letters from these agencies are reprinted in appendixes III and IV. In addition, we received oral comments from officials from OMB’s Office of E-Government & Information Technology and written comments via e-mail from an Audit Liaison from Interior. These comments were technical in nature and we incorporated them as appropriate. OMB and Interior neither agreed nor disagreed with our findings. 
Finally, an Analyst from Education and a Senior Management Analyst from State indicated via e-mail that they had no comments on the draft. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Director of OMB; the Secretaries of Commerce, Education, the Interior, and State; the Administrator of GSA; and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objective was to examine the accuracy of the cost and schedule performance ratings on the Dashboard for selected investments. We selected 5 agencies and 10 investments to review. To select these agencies and investments, we used the Office of Management and Budget’s (OMB) fiscal year 2011 exhibit 53 to identify 6 agencies with the largest information technology (IT) budgets, after excluding the 10 agencies included in our first two Dashboard reviews. We then excluded the National Aeronautics and Space Administration because it did not have enough investments that met our selection criteria. As a result, we selected the Departments of Commerce, Education, the Interior, and State, as well as the General Services Administration (GSA). In selecting the specific investments at each agency, we identified the largest investments that, according to the fiscal year 2011 budget, were spending at least 25 percent of their budget on IT development, modernization, and enhancement work. To narrow this list, we excluded investments that, according to the fiscal year 2011 budget, were in the planning phase or were infrastructure-related. We then selected the top 2 investments per agency. The 10 final investments were Commerce’s Geostationary Operational Environmental Satellite—Series R Ground Segment project and Advanced Weather Interactive Processing System, Education’s Integrated Partner Management system and National Student Loan Data System, Interior’s Financial and Business Management System and Land Satellites Data System, State’s Global Foreign Affairs Compensation System and Integrated Logistics Management System, and GSA’s Regional Business Application and System for Tracking and Administering Real Property/Realty Services. To assess the accuracy and currency of the cost and schedule performance ratings on the Dashboard, we evaluated, where available, agency or contractor documentation related to cost and schedule performance for 8 of the selected investments to determine their cumulative and current cost and schedule performance and compared our ratings with the performance ratings on the Dashboard. The analyzed investment performance-related documentation included program management reports, internal performance management system performance ratings, earned value management data, investment schedules, system requirements, and operational analyses. 
To determine cumulative cost performance, we weighted our cost performance ratings based on each investment’s percentage of development spending (represented in our analysis of the program management reports and earned value data) and steady-state spending (represented in our evaluation of the operational analysis), and compared our weighted ratings with the cost performance ratings on the Dashboard. To evaluate earned value data, we determined cumulative cost variance for each month from October 2010 through March 2011. To assess the accuracy of the cost data, we electronically tested the data to identify obvious problems with completeness or accuracy, and interviewed agency and program officials about the earned value management systems. We did not test the adequacy of the agency or contractor cost-accounting systems. Our evaluation of these cost data was based on what we were told by each agency and the information it could provide. To determine cumulative schedule performance, we analyzed requirements documentation to determine whether investments were on schedule in implementing planned requirements. To perform the schedule analysis of the earned value data, we determined the investment’s cumulative schedule variance for each month from October 2010 through March 2011. To determine both current cost and schedule performance, we evaluated investment data from the most recent 6 months of performance for each month from October 2010 through March 2011. We were not able to assess the cost or schedule performance of 2 selected investments, Education’s Integrated Partner Management investment and National Student Loan Data System investment. During the course of our review, we determined that the department did not establish a validated performance baseline for the Integrated Partner Management investment until March 2011. Therefore, the underlying cost and schedule performance data for the time frame we analyzed were not sufficiently reliable. We also determined during our review that the department recently rescoped development work on the National Student Loan Data System investment and did not have current, representative performance data available. Further, we interviewed officials from OMB and the selected agencies to obtain additional information on agencies’ efforts to ensure the accuracy of the data used to rate investment performance on the Dashboard. We used the information provided by agency officials to identify the factors contributing to inaccurate cost and schedule performance ratings on the Dashboard. We conducted this performance audit from February 2011 to November 2011 at the selected agencies’ offices in the Washington, D.C., metropolitan area. Our work was done in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objective. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective. Below are descriptions of each of the selected investments that are included in this review. The Advanced Weather Interactive Processing System is used to ingest, analyze, forecast, and disseminate operational weather data. Enhancements currently being implemented to the system are intended to improve the system’s infrastructure and position the National Weather Service to meet future requirements in the years ahead. 
The Geostationary Operational Environmental Satellite—Series R Ground Segment includes the development of key systems needed for the on-orbit operation of the next generation of geostationary operational environmental satellites, receipt and processing of information, and distribution of satellite data products to users. The Integrated Partner Management investment is to replace five legacy applications and provide, in one solution, improved eligibility, enrollment, and oversight processes for schools, lenders, federal and state agencies, and other entities that administer financial aid to help students pay for higher education. The National Student Loan Data System includes continued operations and maintenance of an application that manages the integration of data regarding student aid applicants and recipients. The investment also includes a development portion that is intended to ensure that reporting and data collection processes are in place to efficiently determine partner eligibility to participate in higher education financial aid programs, and ensure only eligible students receive loans, grants, or work study awards. The Financial and Business Management System is an enterprisewide system that is intended to replace most of the department’s administrative systems, including budget, acquisitions, financial assistance, core finance, personal and real property, and enterprise management information systems. The Land Satellites Data System investment includes the continued operation of Landsat satellites and the IT-related costs for the ground system that captures, archives, processes, and distributes data from land-imaging satellites. The development efforts under way are intended to enable the U.S. Geological Survey to continue to capture, archive, process, and deliver images of the earth’s surface to customers. The Global Foreign Affairs Compensation System is intended to enable the department to replace six obsolete legacy systems with a single system better suited to support the constant change of taxation and benefits requirements in more than 180 countries, and to help the department make accurate and timely payments to its diverse workforce and retired Foreign Service officers. The Integrated Logistics Management System is the department’s enterprisewide supply chain management system. It is intended to be the backbone of the department’s logistics infrastructure and provide for requisition, procurement, distribution, transportation, receipt, asset management, mail, diplomatic pouch, and tracking of goods and services both domestically and overseas. The Regional Business Application includes three systems that are intended to provide a means to transition from a semi-automated to an integrated acquisition process, and provide tools to expedite the processing of customer funding documents and vendor invoices. The System for Tracking and Administering Real Property/Realty Services investment includes continued operations of a transaction processor that supports space management, revenue generation, and budgeting. The investment also includes development of a new system that is intended to simplify user administration and reporting, and improve overall security. Table 3 provides additional details for each of the selected investments in our review. In addition to the contact named above, the following staff also made key contributions to this report: Carol Cha, Assistant Director; Emily Longcore; Lee McCracken; Karl Seifert; and Kevin Walsh. 
| Each year the federal government spends billions of dollars on information technology (IT) investments. Given the importance of program oversight, the Office of Management and Budget (OMB) established a public website, referred to as the IT Dashboard, that provides detailed information on about 800 federal IT investments, including assessments of actual performance against cost and schedule targets (referred to as ratings). According to OMB, these data are intended to provide both a near-real-time and historical perspective of performance. In the third of a series of Dashboard reviews, GAO was asked to examine the accuracy of the Dashboard's cost and schedule performance ratings. To do so, GAO compared the performance of eight major investments undergoing development from four agencies with large IT budgets (the Departments of Commerce, the Interior, and State, as well as the General Services Administration) against the corresponding ratings on the Dashboard, and interviewed OMB and agency officials. Since GAO's first report in July 2010, the accuracy of investment ratings has improved because of OMB's refinement of the Dashboard's cost and schedule calculations. Most of the Dashboard's cost and schedule ratings for the eight selected investments were accurate; however, they did not sufficiently emphasize recent performance for informed oversight and decision making. (1) Cost ratings were accurate for four of the investments that GAO reviewed, and schedule ratings were accurate for seven. In general, the number of discrepancies found in GAO's reviews has decreased. In each case where GAO found rating discrepancies, the Dashboard's ratings showed poorer performance than GAO's assessment. Reasons for inaccurate Dashboard ratings included missing or incomplete agency data submissions, erroneous data submissions, and inconsistent investment baseline information. In all cases, the selected agencies found and corrected these inaccuracies in subsequent Dashboard data submissions. Such continued diligence by agencies to report complete and timely data will help ensure that the Dashboard's performance ratings are accurate. In the case of the General Services Administration, officials did not disclose that performance data on the Dashboard were unreliable for one investment because of an ongoing baseline change. Without proper disclosure of pending baseline changes, OMB and other external oversight bodies may not have the appropriate information needed to make informed decisions. (2) While the Dashboard's cost and schedule ratings provide a cumulative view of performance, they did not emphasize current performance--which is needed to meet OMB's goal of reporting near-real-time performance. GAO's past work has shown cost and schedule performance information from the most recent 6 months to be a reliable benchmark for providing a near-real-time perspective on investment status. By combining recent and historical performance, the Dashboard's ratings may mask the current status of the investment, especially for lengthy acquisitions. GAO found that this discrepancy between cumulative and current performance ratings was reflected in two of the selected investments. For example, a Department of the Interior investment's Dashboard cost rating indicated normal performance from December 2010 through March 2011, whereas GAO's analysis of current performance showed that cost performance needed attention for those months. 
If fully implemented, OMB's recent and ongoing changes to the Dashboard, including new cost and schedule rating calculations and updated investment baseline reporting, should address this issue. These Dashboard changes could be important steps toward improving insight into current performance and the utility of the Dashboard for effective executive oversight. GAO plans to evaluate the new version of the Dashboard once it is publicly available in 2012. GAO is recommending that the General Services Administration disclose on the Dashboard when one of its investments is in the process of a rebaseline. Since GAO previously recommended that OMB improve how it rates investments relative to current performance, it is not making further recommendations. The General Services Administration agreed with the recommendation. OMB provided technical comments, which GAO incorporated as appropriate. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The U.S. federal budget serves as the primary financial plan of the federal government and thus plays a critical role in the decision-making process. Policymakers, managers, and the American people rely on it to frame their understanding of significant choices about the role of the government and to provide them with information to make decisions about individual programs and overall fiscal policy. The budget process helps highlight for policymakers and the public the overall “cost” of government. Since the budget process also serves as a key point of accountability between policymakers and managers, the way “costs” are measured and reported in the budget can have significant consequences for managerial incentives. The term “cost” has different meanings in the budget and financial statements. In the budget, the term “cost” generally refers to the amount of cash needed during the period. In the financial statements, the term “cost” means the amount of resources used to produce goods or deliver services during the period regardless of when cash is used. Therefore, one goal of accrual budgeting is to report the “full cost” of government services provided during the year. The different methods of reporting (e.g., cash, obligations, or accrual) represent much more than technical means of cost measurement. They reflect fundamental choices about the information and incentives provided by the budget. Cash-based measurement records receipts and outlays when cash is received or paid, without regard to when the activity occurs that results in revenue being earned, resources being consumed, or liabilities being increased. In comparison, obligation-based budgeting—which is used in the U.S. federal government—focuses on the legal obligations entered into during a period regardless of when cash is paid or received and regardless of when resources acquired are to be received or consumed. Obligation-based budgeting provides an additional level of control over pure cash budgeting by requiring that federal agencies have statutory authority to enter into obligations to make outlays of government funds. With limited exceptions, the amounts to be obligated are measured on a cash or cash-equivalent basis. Therefore, we generally refer to the U.S. federal budget as “cash based.” In contrast to cash- and obligation-based budgeting, accrual budgeting generally involves aligning budget recognition with the period in which resources are consumed or liabilities increased, rather than when obligations are made or cash flows occur. Although accruals can be measured in a variety of ways, the term accrual budgeting typically has been used in case study countries to refer to the recording of budgetary costs based on concepts in financial accounting standards. Thus, accrual-based budgeting generally provides information similar to that found in a private sector operating statement. Choices about the appropriate method of budget reporting are complicated by the multiplicity of the budget’s uses and users, including policymakers and managers. The federal budget is simultaneously asked to provide full information and appropriate incentives for resource allocation, control over cash, recognition of future commitments, and the monitoring of performance. Given these multiple and potentially competing objectives, choices about the method of budget reporting involve trade-offs. 
For example, control over spending is greatest if the budget recognizes the full cash cost at the time the decision is made, but assessing performance and its cost is generally best supported by accrual-based cost information, which recognizes resources as they are used to produce goods and services. The up-front funding requirement under an obligation-based budget helps ensure policymakers’ control over the acquisition of a new building but does not align its cost with its use. Conversely, accrual budgeting better aligns the cost of the building with the periods that benefit from its use, but in its simplest form it does not provide for up-front control over entering a legally binding commitment to purchase the building. Given the necessary trade-offs, the method of budget reporting should be selected to meet the primary decision-making and accountability needs of a governmental system while balancing the needs of multiple users. The federal government reports both cash and accrual measures of its current finances. The key focus of the policy debate is the unified budget deficit/surplus. With limited exceptions, the unified budget deficit/surplus is the difference between cash receipts and cash outlays for the government as a whole, including any Social Security surplus. The second measure, the government’s net operating cost, is the amount by which costs—as reported on an accrual basis—exceed revenue and is reported in the federal government’s financial statements. (The consolidated financial statements of the U.S. government are largely on an accrual basis. See Department of the Treasury, Financial Report of the United States Government, 2006. GAO is responsible for auditing the financial statements included in the Financial Report, but we have been unable to express an opinion on them because the federal government could not demonstrate the reliability of significant portions of the financial statements. Accordingly, amounts taken from the Financial Report may not be reliable.) Figure 1 shows both measures. The two measures differ primarily in timing: accrual-based costs are recognized when resources are consumed or liabilities are increased rather than when cash payments are made. For many program areas, the timing difference is small but for others the timing differences can amount to billions of dollars each year. Differences arise when a cost is accrued (and affects the accrual deficit) in one fiscal year but paid (and affects the cash deficit) in another fiscal year. The following six areas account for the largest differences between cash and accrual deficits: civilian employee benefits, military employee benefits, veterans compensation, environmental liabilities (e.g., cleanup and disposal), insurance programs, and capital assets. For example, the accrual deficit includes an expense for current employees’ pension and other retirement benefits, which are earned during the employee’s working years and are part of the annual cost of providing government services but not paid until sometime in the future when the employee retires. The cash budget deficit does not include retirement benefits earned today, but it does reflect payments made to current retirees. (These cash payments reflect past accrued expenses.) The difference between the accrued retirement benefits recognized and cash payments made during the year is the difference between the accrual and cash measures due to employee benefits. 
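This employee benefits wedge can be expressed as a minimal numeric sketch (all dollar figures are invented, and the sketch simplifies by assuming revenue and all other program costs are identical under both measures):

```python
# Illustrative sketch of how employee retirement benefits drive a wedge
# between the cash and accrual deficit measures. All figures invented.

revenue = 100.0
other_program_costs = 95.0           # cash cost = accrual cost for these
benefits_earned_by_workers = 12.0    # accrued this year, paid decades later
payments_to_current_retirees = 9.0   # cash paid this year for past accruals

cash_deficit = (other_program_costs + payments_to_current_retirees) - revenue
accrual_deficit = (other_program_costs + benefits_earned_by_workers) - revenue

print(cash_deficit)                    # 4.0 -> cash actually paid out
print(accrual_deficit)                 # 7.0 -> cost of services provided
print(accrual_deficit - cash_deficit)  # 3.0 = benefits accrued minus cash paid
```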
In the year that capital assets such as structures and equipment are purchased, the budget recognizes the full cash cost to provide decision makers with the information and incentives to make efficient decisions at the only time that they can control the cost. Specifically, budget authority for the asset’s full cash cost must generally be provided up front before the asset can be purchased. The full cash cost of a capital asset is recorded as an outlay and included in the cash budget deficit when the asset is paid for. However, under the accrual basis of accounting used in the financial statements, the cash cost of the asset is initially recorded on the balance sheet. The cash cost of the asset is then spread over its expected useful life to match the asset’s cost with its use. Therefore, each year the accrual deficit only reflects one year’s worth of the cash cost, called depreciation expense. We have previously noted that while both cash and accrual measures of the government’s overall finances are informative, neither measure alone provides a full picture. For example, the unified budget deficit provides information on borrowing needs and current cash flow, but does not measure the amount of resources used to provide goods or services in the current year. While the accrual deficit provides information on resources used in the current year, it does not provide information on how much the government has to borrow in the current year to finance government activities. Nor does it provide information about the timing of payments and receipts, which can be very important. Therefore, just as investors need income statements, statements of cash flow, and balance sheets to understand a business’s financial condition, both cash and accrual measures are important for understanding the government’s financial condition. Although a more complete picture of the government’s fiscal stance today and over time comes from looking at both the cash and accrual measures than from looking at either alone, even the two together do not provide sufficient information on our future fiscal challenges. In addition to considering the federal government’s current financial condition, it is critical to look at other measures of the long-term fiscal outlook of the federal government. While there are various ways to consider and assess the long-term fiscal outlook, any analysis should include more than just the obligations and costs recognized in the budget and financial statements. It should take account of the implicit promises embedded in current policy and the timing of these longer-term obligations and commitments in relation to the resources available under various assumptions. For example, while the cash and accrual measures showed improvement between fiscal year 2005 and fiscal year 2007, our long-term fiscal outlook did not change. In fact, the U.S. government’s total reported liabilities, net social insurance commitments, and other fiscal exposures continue to grow and total more than $52 trillion, representing approximately four times the nation’s total output, or gross domestic product (GDP), in fiscal year 2007, up from about $20 trillion, or two times GDP in fiscal year 2000 (see table 1). Another way to assess the U.S. government’s long-term fiscal outlook and the sustainability of federal programs is to run simulations of future revenues and spending for all federal programs, based on a continuation of current or proposed policy. 
Long-term simulations by GAO, the Congressional Budget Office, and others show that we face large and growing structural deficits driven primarily by rising health care costs and known demographic trends. As shown in figure 2, GAO’s long-term simulations—which are neither forecasts nor predictions—continue to show ever-increasing long-term deficits resulting in a federal debt level that ultimately spirals out of control. The timing of deficits and the resulting debt buildup varies depending on the assumptions used, but under either optimistic (“Baseline Extended”) or more realistic assumptions (“Alternative simulation”), the federal government’s current fiscal policy is unsustainable. One summary measure of the long-term fiscal challenge is called “the fiscal gap.” The fiscal gap is the amount of spending reduction or tax increases that would be needed today to meet some future debt target. To keep debt as a share of GDP at or below today’s ratio under our Alternative simulation would require spending cuts or tax increases equal to 7.5 percent of the entire economy each year over the next 75 years, or a total of about $54 trillion in present value terms. To put this in perspective, since federal revenues have recently totaled roughly 18 percent of GDP, closing the gap would require an immediate and permanent increase in federal tax revenues of more than 40 percent or an equivalent reduction in federal program spending (i.e., in all spending except for interest on the debt held by the public, which cannot be directly controlled). As demonstrated by these various measures, our nation is on an unsustainable fiscal path. This path increasingly will constrain our ability to address emerging and unexpected budgetary needs and will increase the burdens that will be faced by future generations. Since at its heart the budget debate is about the allocation of limited resources, the budget process can and should play a key role in helping to address our long-term fiscal challenge. The six countries reviewed in 2000 continue to use accrual budgeting. However, two countries that were considering broader expansions of accrual budgeting have thus far made only limited changes. Although each country’s budgeting framework has unique features, the six countries have taken one of two broad approaches toward accrual budgeting: One approach uses accruals for most or all items in the budget primarily to support broader efforts to improve government performance. A second approach more selectively uses accrual information in areas where it increases recognition of future cash requirements related to services provided during the year that are not fully recognized in a cash-based budget. Regardless of which approach is used, cash information remains important in all the countries to evaluate overall fiscal position. None of the countries reviewed include anticipated future payments for social insurance programs (namely public pensions and health services) in the current year’s budget measure. Social insurance programs are generally viewed as transfer payments rather than liabilities. Transfer payments are benefits provided without requiring the recipient to provide current or future goods or services of equivalent value in return. Since 2000, three countries—Australia, New Zealand, and Iceland—have continued to use the accrual budgeting frameworks in place in 2000. In 2000, we reported that the United Kingdom was planning to implement an accrual-based budgeting framework, called Resource Accounting and Budgeting.
After Parliament passed the necessary legislation in 2000, the United Kingdom implemented resource accounting and budgeting in 2001. The United Kingdom has continued to make some modifications to its framework, including introduction of controls over cash. Although two countries—the Netherlands and Canada—have considered broader expansions of accrual budgeting since 2000, thus far they have made only limited changes. In the Netherlands, only budgets for some government agencies are on an accrual basis, and the governmentwide budget remains on a modified cash basis. The Dutch government decided against moving the governmentwide budget to an accrual basis in 2001. Although the Dutch cabinet thought that the accrual-based system added value at the agencies where it had been implemented, it thought the cost of implementing accrual budgeting governmentwide, including changing information systems, developing accounting standards, and changing regulations, would outweigh any advantages. In 2003, Canada significantly expanded the use of accruals in the governmentwide budget, but the information used to support appropriations (called the Main Estimates) and the appropriations themselves remain largely on a cash basis. Since the 1990s, there has been debate within the Canadian government concerning the appropriate application of accruals. The Canadian Office of the Auditor General and a key committee in Parliament, the House of Commons Committee on Public Accounts, have advocated preparing the Main Estimates on a full accrual basis. The current government agrees in principle that accrual measurement can be useful but considers this to be a complex issue that requires study and consultation with parliamentarians. After consultation with parliamentarians, the current government plans to present a model for a new accrual-based appropriations process in 2008. Although the use of accrual budgeting in other major industrialized countries has grown, it is not currently the norm. Since 2000, the number of OECD countries that report using accruals at least in part has increased. For example, as noted previously, Denmark and Switzerland recently expanded the use of accruals in the budget. Some countries also report using both cash- and accrual-based accounting in the budget. However, the majority of OECD countries reported using either cash- or obligation-based budgeting or both. The extent to which countries in our study used accrual budgeting varied—from full accrual at all levels of government to more limited use at either the agency or program level. Figure 3 illustrates the broad range of use. The extent to which countries use accrual budgeting generally reflects the objectives to be satisfied. Countries that switched to accrual budgeting primarily as a way of providing better cost and performance information for decision making generally used accruals to a greater extent in the budget, as illustrated by the first two approaches—full accrual at all levels of government. In general, these countries also sought to put financial reporting and budgeting on a consistent basis. Countries that switched to accrual budgeting primarily as a way of increasing recognition of future cash requirements related to services provided during the year generally use it only for selected programs where accruals enhance up-front control and provide better information for decision making (e.g., loans and government employee pensions); this approach is similar to the United States’ current use of accruals.
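To illustrate the selective approach just described, consider loans, one of the areas cited: accrual treatment records a loan’s subsidy cost up front (the amount disbursed less the present value of expected repayments) rather than the full cash outflow, broadly similar to what the United States does under federal credit reform. The sketch below uses invented loan terms and a deliberately simplified repayment structure.

```python
# A minimal sketch of accruing a loan's subsidy cost up front.
# Loan terms are hypothetical; real credit-subsidy models are far more detailed.

def subsidy_cost(face_value, loan_rate, govt_rate, years, collection_share):
    """Subsidy = amount disbursed minus the present value of expected
    repayments, discounted at the government's own borrowing rate.
    Assumes interest-only payments with principal repaid at maturity,
    and that only collection_share of each scheduled payment is collected."""
    pv_repayments = 0.0
    for t in range(1, years + 1):
        scheduled = face_value * loan_rate + (face_value if t == years else 0.0)
        pv_repayments += collection_share * scheduled / (1 + govt_rate) ** t
    return face_value - pv_repayments

# Hypothetical: $1,000 lent for 10 years at 3 percent when the government
# borrows at 5 percent, with 95 cents of each scheduled dollar collected.
print(round(subsidy_cost(1000.0, 0.03, 0.05, 10, 0.95), 2))
# ~196.71: the budget would record roughly $197 of cost when the loan is
# made, rather than a $1,000 cash outflow followed by years of receipts.
```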
Regardless of the approach, cash information remains important. Most countries in our study continue to use cash-based measures for broad fiscal policy decisions. The following section describes each country’s objective and approach in more detail. Four countries—Australia, New Zealand, the Netherlands, and the United Kingdom—primarily use accrual budgeting to support broader efforts to improve the efficiency and performance of the public sector. Compared to cash-based budgeting, accruals are thought to provide better cost information and to encourage better management of government assets and liabilities. Among this group of countries, however, there is significant variation in the scope of accrual budgeting as well as the linkage between performance goals and appropriations. Since the 1990s, Australia and New Zealand have extensively used accruals in conjunction with output-based budgeting. The introduction of accrual budgeting in both countries was a key element of broader reforms meant to improve the efficiency and performance of the public sector. Reformers in both countries thought that accruals would provide better cost information and better management incentives than the previous cash-based budgeting framework. Reformers also thought it was important to have a consistent framework for budgeting and financial reporting to allow actual performance to be compared with expectations. Accrual budgeting in both countries is also intended to provide funding for the full cost of departments’ activities. Australia and New Zealand departments receive funding for noncash expenses, such as depreciation of existing assets, accrued employee pension benefits, and the estimated future costs of environmental cleanup resulting from government activities. Reformers in both countries thought that appropriating on a full-cost basis created compelling incentives for department managers to focus on the full cost of their department’s activities as well as manage noncash expenses. One important feature of Australia’s and New Zealand’s budgeting frameworks is that departmental appropriations are closely linked to outcomes and outputs, and department executives are given considerable flexibility in managing their department’s finances, provided that the department meets its performance goals. It is thought that giving department executives more flexibility generally contributes to better performance. In comparison to the United States, the appropriations acts in Australia and New Zealand place less emphasis on how departments allocate their funding among different types of expenses. Nevertheless, two key departments, the Treasury in New Zealand and the Department of Finance and Administration in Australia, do centrally review and must approve departmental plans for major capital purchases. The Netherlands has used accrual budgeting in select government agencies primarily as a tool for improving performance. In the early 1990s, the government allowed a limited number of government entities (called agencies) to operate as if they were private sector contractors by adopting a results-oriented performance-management model, including accrual accounting and budgeting. Under the Dutch approach, the agencies are effectively service providers for the central government’s ministries. These agencies receive funding for the accrual-based cost from the ministries that they service.
For example, although the Ministry of Justice is appropriated funds on a cash basis to buy services from the Prison Service, the Prison Service charges the ministry the full cost of the services it provides. The number of government entities participating in this program has increased from 22 in 2000 to approximately 40 in mid-2007. However, while the agencies budgeting on an accrual basis represent about 60 percent of the government in terms of employees, they are a small part of the government’s overall budget since the majority of the Dutch government’s expenditures are spent on transfer payments, which continue to be budgeted on a cash basis. The United Kingdom implemented what it calls resource budgeting for financial year 2001–2002. The United Kingdom’s approach makes less use of the Australia–New Zealand form of performance-based budgeting and imposes tighter controls on cash than the Australia and New Zealand approaches. The United Kingdom’s Parliament votes both cash and “resources” (i.e., the full accrual-based cost of a department’s services). The resource budget recognizes such noncash expenses as accrued employee pension benefits as well as depreciation of existing assets but limits the ability of departments to use funds appropriated for noncash items to fund current spending. Treasury officials from the United Kingdom told us that in practice this near-cash limit on departmental spending is the focus of budgetary planning. Treasury officials also noted that although departments have public service agreements that include performance targets, the United Kingdom has not really used outcome-based budgeting. A second approach has been to use accrual information more selectively for programs or areas where it highlights annual costs that are not fully recognized in the cash-based budget. Iceland and Canada generally have taken this approach. Since 1998, Iceland has budgeted on an accrual basis except for capital expenditures, which remain on a cash basis. Iceland’s approach was designed primarily to improve transparency and accountability in its budget. The only areas with significant differences between cash- and accrual-based estimates are government employee pensions, interest, and tax revenue. Iceland also uses accrual budgeting for loan programs. Accrual budgeting in Iceland has had only a limited effect on department-level budgets for two reasons. First, capital budgeting remains on a cash basis. Second, the oversight and administration of employee pensions, tax revenue, and the subsidy costs for loans are located in the Finance Ministry, not individual departments. Consequently, for most Icelandic departments, there are only minor differences between cash- and accrual-based estimates for the department’s operating budgets. The federal government of Canada currently uses both accrual and cash for budgeting purposes. The governmentwide budget is largely on an accrual basis; the information used to support appropriations (called the Main Estimates) and the appropriations themselves remain largely on a cash basis; certain areas, such as the future pensions for current employees, are measured on an accrual basis. Canada’s current government has been considering moving the Main Estimates and appropriations to a full accrual basis.
Since the 1990s, the Canadian Office of the Auditor General and a key parliamentary committee, the House of Commons Committee on Public Accounts, have recommended moving appropriations to an accrual basis so that managers would make more informed decisions about the use of resources. The Office of the Auditor General and the committee think it is important to use the same accounting standards in the budget and the Estimates. The current government agrees that moving to an accrual-based budget and appropriations may have benefits. Officials from Canada’s Finance Department and Treasury Board Secretariat told us that it was important to study the experience of other governments with accruals before designing a new, accrual-based appropriations process. The officials also said the current government was consulting with members of Parliament and plans to present a model for Parliament’s consideration in 2008. Regardless of the approach taken in the use of accrual budgeting, all of the countries consider cash information to be important, particularly for monitoring the country’s fiscal position even where fiscal indicators are accrual based. Three of the countries—Australia, the Netherlands, and the United Kingdom—calculate the governmentwide surplus/deficit on either a cash or near-cash basis. In the other three countries—Iceland, New Zealand, and Canada—aggregate fiscal indicators are largely accrual based, but officials we spoke with said that cash information continues to be important in evaluating fiscal policy. Although Australia extensively uses accruals for departmental appropriations, Australian officials said that a key measure for policymakers is the country’s surplus measured on a cash basis. This is due in part to a goal of running cash-based surpluses over the business cycle to contribute to national savings. Both the Netherlands and the United Kingdom, as members of the European Union (EU), are required to report the net lending or borrowing requirement, which officials described as a near-cash number. Officials from the United Kingdom also said that cash information is important because the current government has pledged to avoid borrowing to finance current expenditures and to keep net debt at prudent levels. New Zealand makes several adjustments to the accrual-based operating balance to remove items that do not affect the underlying financing of government and must pay attention to its cash position to ensure it meets its debt-to-GDP target. Since 2000, at least two additional OECD countries—Denmark and Switzerland—have expanded the use of accruals in the budget without moving to full accrual budgeting. Switzerland has recently expanded accrual measurement as part of broader reforms to improve government financial reporting. However, Switzerland’s governmentwide surplus/deficit continues to be calculated on a cash basis and some government assets, such as defense assets, are not capitalized. Beginning in 2007, Denmark moved departmental operating budgets and associated capital spending to an accrual basis, primarily to support efforts to improve the performance of government departments. However, Denmark does not accrue capital spending on infrastructure, and both grants and transfer payments are measured on a cash basis. Sweden and Norway considered moving toward accrual budgeting but decided against it. Between 1999 and 2003, Sweden developed a plan to move from cash to accrual budgeting but in 2004 chose not to implement these plans.
Swedish officials said that the government was concerned that accrual budgeting would diminish control of cash spending, potentially undermine fiscal discipline, and lead to bigger investments, principally for infrastructure and war equipment. Norway went through a similar decision process. In 2003, a government-appointed committee recommended Norway move to full accrual budgeting, but the government at that time argued that the fiscal policy role of the budget is better served by cash-based appropriations and that the cash system enables better control of investments. Parliament agreed. However, Norway is testing accrual accounting at 10 agencies to achieve purposes similar to those cited by other countries—namely to provide better cost information; to establish a baseline for benchmarking costs, both between government agencies and in relation to private organizations; and to generate more complete information on the assets and liabilities of the government. Any significant expansion in the use of accruals creates a number of transitional challenges, including developing accounting standards for the budget and deciding what assets to value and how to value them. Beyond transitional issues, however, there are several challenges inherent to accrual budgeting, as we noted in 2000. These challenges illustrate the inherent complexity of using accrual-based numbers for managing a nation’s resources and led to some modifications in countries’ use of accrual reporting in the budget, such as reliance on more cash-based measures of the overall budget. Developing accounting standards to use in the budget and deciding what public assets to value and how to value them were initial challenges for countries moving to accrual budgeting. These took time to work out, and refinements continue. Some countries in our study sought to put the government’s financial reporting and budgeting on the same basis and to make them comparable to the private sector. In all, three of the six countries in our 2000 report and Denmark said that the technical standards used in the budget were substantially based on private-sector accounting standards. Only Canada and Switzerland said the technical standards were based on public sector accounting standards. Three countries—Australia, the Netherlands, and the United Kingdom—reported that the standards used for aggregate measures were based on national accounting standards (similar to the national income and product accounts in the United States) set by an international organization (e.g., the International Monetary Fund’s Government Finance Statistics or the European System of Accounts). Some countries in our study thought that adopting standards and concepts developed by independent bodies was important. While both cash and accrual accounting can be subject to gaming, some believe that accrual accounting in particular opens up the opportunity for manipulation. Three countries responded that a commission of experts outside of government developed the standards. Other countries, however, said that although their standards were based on independent standards, the finance ministry or bureau of statistics has the ultimate responsibility for developing standards. In these countries, accounting standards were generally not adopted intact from an independent entity.
For example, Switzerland’s accrual budgeting system is designed to be closely aligned with the international public sector accounting standards (IPSAS), but there were some deviations from IPSAS for constitutional reasons, such as compliance with the cash-based balanced budget requirement. Also, for practical reasons, Switzerland does not capitalize defense investments, although IPSAS requires it. Besides developing the accounting standards to be used in the budget, a key challenge when switching to accrual budgeting, particularly for countries that choose to treat capital on an accrual basis (i.e., to capitalize assets and record them on the balance sheet) and provide funding for noncash depreciation costs, is to ensure that the recorded value of the capital asset is as accurate as possible. The value of the capital asset is used to calculate annual depreciation costs and in turn fund future capital acquisitions (replacements). If an agency overvalued its assets, it could be difficult to reduce the level of assets once accrual budgeting is implemented because the excess value represents a source of funding for the agency in the form of depreciation. On the flip side, if assets were undervalued, they may not provide good information on the cost of maintaining or replacing the asset. In 2004, for example, the New Zealand government purchased the nation’s rail network for only NZ$1. Officials with whom we spoke said the NZ$1 value did not yield good information about annual depreciation (maintenance) costs. Therefore, the New Zealand government revalued the network at NZ$10.3 billion in 2006; this revaluation led to an increase in the New Zealand government’s net worth. More importantly, the annual operating balance used in the budget now reflects the associated depreciation costs. In Australia, the government thought that capitalizing assets would lead to a better understanding of what is owned and what would be needed in the future. However, an Australian official said departments still request supplementary funding to replace old assets. The official said that this may be because some departments were not fully funded for all capitalized assets in their opening balance sheets during the move to accrual budgets. It could also be because new asset purchases are not identical to the assets they replace or because agencies did not have sufficient assets to carry out their goals in the first place. Asset identification and valuation were cumbersome and time-consuming efforts for the countries that chose to capitalize assets. Indeed, one of the reasons that Iceland decided against capitalizing assets was the difficulty it would have faced identifying and agreeing on the asset values. Valuing assets poses special problems in the public sector since governments own unique assets such as heritage assets (e.g., museums and national parks) and defense assets (e.g., weapons and tanks). By nature, heritage assets are generally not marketable. Their cost is often not determinable or relevant to their significance, and they may have very long life cycles (e.g., hundreds of years). Although the recognition issues associated with heritage assets are challenging, these assets are generally not very significant in terms of the overall effect on fiscal finances. As a result, valuing heritage assets may be seen as not worth the effort. Indeed, of all the countries we reviewed, only Australia and New Zealand capitalize all assets.
The other countries exclude unique government assets such as highways, bridges, national parks, historical buildings, and military assets. The most common approaches for valuing assets are historical cost and fair value. (Fair value is usually the same as market value; in the absence of reliable market values, replacement cost is often used.) Five of the seven countries in our study that measure capital assets on an accrual basis use fair or market value. Only two—Canada and Denmark—use historical cost. Use of market value relies on professional judgments to assess values, and the values can fluctuate sharply between reporting periods. Although historical cost is based on a verifiable acquisition price and does not fluctuate, the reported amounts may not reflect the current value of the asset. Furthermore, it is often very difficult to estimate the original costs of government assets that are hundreds of years old or for which cost records have not been maintained. We have reported that enhancing the use of performance and “full-cost” information in budgeting is a multifaceted challenge that must build on reliable cost and performance data, among other things. Reliable financial information was also viewed as important to have before moving to accrual budgeting in some countries we reviewed. For example, in the Netherlands, an agency must receive a “clean audit,” or an unqualified audit opinion, for the year prior to moving to accrual budgeting, and at least 6 months must have been spent in a trial run of the accrual accounting system. Other criteria must also be met before moving to accrual-based budgeting and receiving the associated flexibilities, including being able to describe and measure the agency’s products and services. Before moving to accrual budgeting in New Zealand, a department had to define its broad classes of outputs, develop an accrual-based system capable of monthly and annual reporting, develop a cost-allocation system to allocate all input costs, including depreciation and overhead, to outputs, and provide assurance that it had an adequate level of internal controls. There was not, however, a requirement for an unqualified opinion for the agency. Accrual budgeting can also lead to improvements in financial information. Auditable financial accounts were not a prerequisite for moving to accrual budgeting in the United Kingdom. When the United Kingdom moved to accrual budgeting in 2001–2002, the government had 16 accounts for central government departments with “qualified” opinions. However, since the introduction of accrual budgeting, the United Kingdom reported that the number of qualified accounts has declined and the timeliness of financial reporting, which maximizes the usefulness of the information to managers, Parliament, and other stakeholders, has improved. Both cash and accrual measures are subject to volatility. Cash accounting may not be useful for measuring cost because spikes in receipts or payments can cause swings in the apparent “cost” of a program or activity. For example, if a program purchases a large amount of equipment in one year, it will appear costly under cash accounting, but under accrual accounting, only a proportion of the equipment’s cost in the form of depreciation would be shown in that year. Accrual measures experience volatility for other reasons, such as changes in the value of assets and liabilities or changes in assumptions (e.g., interest rates, inflation, and productivity) used to estimate future payments.
Because the accrual-based operating results can be volatile due to events outside the government’s control, New Zealand generally does not use them as a measure of the government’s short-term fiscal stewardship. For example, under New Zealand’s accrual-based accounting standards, most assets are revalued at least every 3 years. New Zealand uses fair value, which is usually the same as market value when there is an active market. As noted above, market values tend to fluctuate between reporting periods. The changing market values can cause swings in the reported accrual-based operating results because such changes are reflected as revenue or cost in the year revalued. Therefore, changes in operating results may reflect not a fundamental change to the government’s finances but rather changes in the value of assets or liabilities that do not affect the government’s financing in the current period. Fluctuations can also result from annual changes in the value of liabilities when there are deviations between actual experience and the actuarial assumptions used or changes in actuarial assumptions. The liabilities for New Zealand’s government pension and insurance programs, for example, fluctuate from year to year partly due to changes in the underlying assumptions, such as interest rates and inflation. To deal with this, the New Zealand Treasury removes revaluations and other movements that do not reflect the underlying financing of government from its operating balance. It is this measure—the Operating Balance Excluding Revaluations and Accounting Changes (OBERAC)—that has been the focus of policy debates in New Zealand since about 2001. More recently, the New Zealand Treasury shifted its focus to a new measure—Operating Balance Excluding Gains and Losses (OBEGAL). Gains and losses can result when the value of an asset or liability differs from the value booked on the balance sheet. If the government sells an asset and the sales price equals book value, there is no gain or loss, because a cash inflow equal to book value is the exchange of one asset for another of equal recorded value. However, if the sales price is more or less than the book value of the property, the difference is reflected as a gain or loss. (For example, selling for NZ$120 million an asset carried on the balance sheet at NZ$100 million would produce a NZ$20 million gain.) New Zealand set up a fund to partially prefund future superannuation expenses. This fund reports gains and losses on its investments. Because the current government wishes to retain the investment returns in the fund, beginning with the 2007 budget the government has shifted its focus to the OBEGAL to ensure the government is meeting its fiscal objectives. New Zealand said that by excluding net gains and losses, the OBEGAL gives a more direct indication of the underlying stewardship of the government. Accrual accounting is inherently more complex than cash-based accounting, which is like managing a checkbook. One Australian official noted that using accrual measures can be challenging because many cabinet ministers and members of Parliament are trained in professional fields other than finance and accounting and may be more familiar with cash budgeting. Focusing on accrual-based numbers can be difficult given the existence of cash-based fiscal policy targets. For example, several countries—Canada, New Zealand, and the United Kingdom—have fiscal policy targets that limit the amount the country can borrow; borrowing (or debt) is based on cash measures.
Also, while accrual numbers are used at the agency level in Australia, Australia has had a goal of running cash-based surpluses over the business cycle. This is due in part to a long-standing goal in Australia of improving national savings. At the time of our study, Australia’s Treasurer primarily focused on the cash-based fiscal position to show the government’s effect on national savings. Agency managers therefore have an obligation to manage both the cash and accrual implications of their resource use. New Zealand also pays attention to its cash position. New Zealand’s current fiscal policy goal is to maintain gross debt at around 20 percent of GDP. This means that New Zealand’s cash position must be such that cash receipts equal cash outlays excluding interest expense. It also means the accrual-based operating surplus must be sufficient to cover investments—cash needed today but not expensed until the future. Cash information is still used at both the overall fiscal policy level and department level in the United Kingdom. The current United Kingdom government has pledged to avoid borrowing to finance current expenditures and maintain public debt at a prudent level. Both of the government’s fiscal targets are measured on a near-cash basis. Consequently, United Kingdom Treasury officials said that Treasury has imposed limits on departmental cash spending because spending directly affects the country’s cash-based fiscal position. Different countries have taken different approaches to managing noncash expenses, particularly in regard to capital assets. In Australia and New Zealand, cash is appropriated for the full accrual amounts, including noncash items such as depreciation for existing assets. Agencies are expected to replenish their current assets from funding provided for depreciation, and they have the funding to do so (subject to the oversight discussed below). The full cost of government is the focus of the operating budget rather than the immediate cash requirement. The downside of this approach is that control of cash and capital acquisitions to replace assets can become challenging. If an agency is given cash to fund depreciation expense, there is a risk that it may use the funds to cover other expenses. Similarly, Parliament may lose control over the acquisition of capital assets since it will have funded them through depreciation provided in previous years. To address these concerns, countries have implemented cash management policies and specific controls over capital acquisitions. For example, like Australia and New Zealand, the United Kingdom initially provided funding for the full cost of programs, outputs, or outcomes with the thought that it would generate efficiencies. Over time, however, United Kingdom Treasury officials said they became concerned that some departments were shifting noncash expenses to cash expenses, which adversely affected the government’s borrowing requirement. As a result, the United Kingdom has imposed controls on cash. Departments’ budgets now include both the amount of the full accrual costs and the cash required. The Parliament approves both numbers. This not only helps ensure that department spending is in line with the government’s fiscal policy goals but also reinforces Parliament’s control over capital acquisitions. Australia also reported that it is considering a model that would give the Parliament both cash and accrual information in a form that better meets its needs and preferences.
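A minimal sketch may help make the dual control just described concrete: the legislature votes both the full accrual (“resource”) cost and a separate near-cash limit, so funding provided for noncash items cannot quietly become cash spending. The department and its expense split below are hypothetical.

```python
# Hypothetical department budget (millions), illustrating dual controls:
# the legislature votes the full accrual cost and a separate near-cash limit.

expenses = {
    "salaries_and_goods": 800.0,  # near-cash running costs
    "depreciation": 120.0,        # noncash: existing assets used up this year
    "accrued_pensions": 60.0,     # noncash: benefits earned now, paid later
}

resource_budget = sum(expenses.values())          # full accrual cost voted: 980.0
near_cash_limit = expenses["salaries_and_goods"]  # cash control voted: 800.0

print(f"resource budget: {resource_budget:.1f} million")
print(f"near-cash limit: {near_cash_limit:.1f} million")
# The 180.0 million of noncash cover cannot be shifted into cash spending
# without breaching the near-cash limit, preserving control over borrowing.
```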
On the basis of reports by the Australian National Audit Office and others that departments could potentially use funds provided for depreciation of existing assets to fund noncapital acquisitions or that agencies are not appropriately using the funds to repair or replace existing assets, the Australian Senate expressed concern about the transparency of funding for depreciation and the potential loss of control over new capital purchases. The Senate recommended that the government consider reporting and budgeting for capital expenditures separately, including a subdivision of expenditures between asset replacement (i.e., the depreciation component) and asset expansion. All countries we reviewed that accrue capital investments have a process in place to facilitate oversight over capital. While most of these countries include depreciation of existing assets in operating budgets, most also preserve up-front control of capital by approving capital purchases above a certain threshold. For example, in New Zealand, all capital purchases above NZ$15 million must be approved by the cabinet. In Australia, any capital purchase above A$10 million in any one year must have a business case prepared and must be included in the budget proposal to be submitted for government approval. The United Kingdom Treasury reviews departmental capital plans. In the Netherlands, capital purchases by agencies are made through loans provided by the Ministry of Finance. The Ministry of Finance has to approve the level of loans per agency. As previously noted, all of the countries in our study are parliamentary systems in which the political party that controls the current government has primary control over budgetary matters. However, as noted above, in some countries Parliaments have expressed general concerns that the budget presentations are confusing under accrual budgeting. Several countries in our study use more than one method of budget accounting, which can be confusing for Parliament and other users. In Australia, for example, where two accounting standards are currently used in the budget, the Senate has recommended the adoption of a single agreed-upon accounting standard. In Canada, the government reports the budget surplus/deficit on an accrual basis but department-level appropriations remain on a cash basis. Canadian audit officials we spoke with said the Parliament wants the department-level appropriations prepared on an accrual basis in part because the two different measures and crosswalks are confusing. Canada is considering moving department-level budgets to an accrual basis in order to provide consistent financial information for all levels of government and a better linkage between the budget and appropriations. In the United Kingdom, some members of Parliament said it was unclear how the accrual-based appropriations related to the nation’s fiscal goals, which are largely cash based. As a result, the government is undertaking an “alignment project” to better align budget accounts with the government’s two fiscal rules to (1) avoid borrowing to finance current expenditures and (2) keep net debt at prudent levels. Australia’s Senate expressed concern about reduced transparency of some information and said that the budget could be improved if data were presented at the program level (in addition to outcomes). 
The Australian government official we spoke with said that the government already provides the Parliament and public with extensive information on both the full costs of government activities and the performance of agencies. It was not clear to the official, however, that providing more detailed information would improve the quality and usefulness of information, considering the administrative workload involved and the potential for creating more “red tape” for managers. The Australian official thought more concise and relevant reports might be more useful than more information. Despite the inherent challenges, our six case study countries have continued to use accrual budgeting, and additional countries have adopted accrual budgeting since 2000. These countries view having accrual-based cost information available to program managers for resource allocation decisions as outweighing the associated difficulties. In several countries, officials we spoke with said they believe accrual budgeting provides better information on the cost of annual operations and performance than cash-based budgeting, particularly in regard to the use of capital assets and programs that incur costs that are not paid in cash today. In general, countries said that accrual-based cost information contributes to improved resource allocation and program management decisions. Under cash budgeting, a program’s budget shows only the immediate cash outlay and not the cash that will have to be paid in the future for the service provided today. Accrual budgeting, which recognizes resources as they are used to produce goods and services, provides the full cost of all programs and may allow for better comparisons between different methods of delivering government services. New Zealand officials, in particular, believe the cost information provided by accrual-based budgeting has led to efficiencies and better resource allocation decisions. New Zealand credited the cost information provided by accrual budgeting with helping identify where and how to cut spending to put the country on a more sound fiscal footing in the early 1990s. Several of the countries have attributed specific improvements on the departmental level to accrual budgeting. For example, under accrual accounting, the cost of a loan includes the subsidy cost—the cost of lending below market rates and provisions for bad debt. When New Zealand recently made student loans interest free, the cost of the subsidy was taken into consideration during the policy debate. The United Kingdom also reported that the more complete information on student loans directly affects lending decisions at the Department for Education and Employment. In several of the countries, one perceived advantage of accruals was to facilitate comparisons between the public sector and private sector. Accrual-based cost estimates could be used to “benchmark,” or compare, the cost of existing public service providers to alternative providers in either the public or private sectors. The OECD reported in 2005 that both agencies and core ministries in the Netherlands were content with the results from accrual budgeting at the agencies. Agencies, which now receive a budget for the full cost of their activities, like the flexibilities under accrual budgeting, while core ministries value the output and price information they receive from the agencies.
The ministries also reported that agencies’ use of accrual budgeting enables them to consider the performance of the agencies relative to alternatives (i.e., decentralization to subnational government or contracting out). At the same time, the availability of the alternatives enabled ministries to put more pressure on agencies to improve cost efficiency and to reduce prices. New Zealand, however, reported that there is little evidence available that similar types of outputs are compared or benchmarked in a way that was thought desirable at the time the reforms were initiated. Concerns about the usefulness and robustness of cost accounting systems continue and there remains a concern that the specification of outputs is not at a sufficient standard to ensure high-quality government performance. In several case study countries, accrual budgeting helped policymakers recognize the full cost of certain programs at an earlier point and make decisions that limited future cash requirements. For example, as reported in 2000, both New Zealand and Iceland credited accrual budgeting with highlighting the longer-term budgetary consequences associated with public sector employee pension programs. In Iceland, accrual budgeting showed the consequences of wage negotiations on future public sector employee pension outlays. The full costs of these agreements were not fully realized by the public until the adoption of accrual budgeting. At that time, Icelandic officials told us that there was no longer public support for decisions that were so costly in the long term. Similarly, New Zealand officials decided to discontinue the defined benefit public employee pension program after pension liabilities were recognized on the balance sheet and the expense incurred was included in the budget. Since 2000, reforms aimed at putting government employee pensions on a more sustainable footing were enacted in Australia and the United Kingdom. In Australia, unfunded pension liabilities for government employees are currently the largest liability on Australia’s balance sheet (which is part of its budget documents). To cover this liability, the Australian government recently established an investment fund called the “Future Fund” to help pay future pension payments. Government employee pensions in the United Kingdom were also reformed. In 2007, the United Kingdom government raised the pension age to 65 for employees hired beginning in July 2007 and limited the government’s contribution to pensions to 20 percent. United Kingdom officials acknowledged that there was already recognition that the program needed significant reform before the introduction of accrual measures, but said accrual budgeting helped highlight the full cost of pension liabilities and forced the debate on pension reform to happen sooner. Accrual budgeting has also changed the information available for insurance programs, veterans benefits, and environmental liabilities. As reported in 2000, New Zealand officials attributed reforms of the Accident Compensation Corporation program to recognizing the liability and expenses from providing accident coverage in the budget. Recognizing the estimated future outlays associated with current accidents reduced budget surpluses by NZ$500 million. At that time, officials attributed New Zealand’s decision to raise premiums and add surcharges largely to this inclusion of program costs in the budget. 
Also, in 2002 New Zealand ratified the Kyoto Protocol, committing to reduce net emissions of greenhouse gases over the 2008–2012 period. Consistent with financial accounting standards, New Zealand recognized a liability for the obligation created by this commitment. New Zealand officials credited accrual accounting with helping them focus on ways to manage environmental liabilities. Canadian officials credited accrual information with leading to recent changes in veterans benefits. The use of accrual accounting requires Veterans Affairs Canada to record the full cost of veterans benefits in the year they are earned rather than the year they are paid. Therefore, when considering changes to veterans benefits, Veterans Affairs Canada considered the effect of future cash flows in discounted terms. Initial results indicated that the planned changes to veterans benefits represented a substantial expense for the year. As a result, Veterans Affairs Canada modified the admissibility requirements, limiting the financial effect of the changes. Accrual budgeting was not used to increase awareness of long-term fiscal challenges that are primarily driven by old-age public pensions and health care programs. None of the countries in our study include future social insurance payments in the budget. Like the United States, the other countries do not consider future social insurance payments to be liabilities. Instead, in recent years, several countries have begun reporting on the sustainability of the government’s overall finances over longer-term horizons, given demographic and fiscal trends. Aging is a worldwide phenomenon. One of the key challenges that all developed economies are facing over the coming decades is demographic change. This demographic shift—driven by increased life expectancies, falling fertility rates, and the retirement of the baby boom generation—will place increased pressure on government budgets (i.e., public pensions and health care). For example, by 2047, a quarter of Australia’s population is projected to be aged 65 and over—nearly double the current proportion. Similarly, New Zealand projects that by 2050 the number of people over 65 will grow almost threefold, while those 85 and over will grow sixfold. Similar trends hold for the other countries we studied. Although public pension benefits are a major driver, the most challenging aspect of the long-term fiscal outlook in many of the countries we studied—as in the United States—is health care spending. Health spending is expected to increase significantly over the next 40 years due to population aging, new medical technologies, new drugs, and other factors. For example, Australia projects that health care spending as a share of GDP will nearly double by 2046–2047. Similarly, the United Kingdom projects that its health spending will increase faster than other types of spending—from around 7½ percent of GDP in 2005–2006 to around 10 percent of GDP by 2055–2056. New Zealand projects a rise in the ratio of health spending to GDP of 6.6 percentage points between 2005 and 2050, resulting in health spending of about 12 percent of GDP. Similar trends are projected in the other countries we reviewed. In recent years, many countries in our study have started preparing long-term fiscal sustainability reports.
Frequently cited reasons for this are to improve fiscal transparency and provide supplemental information to the budget; to increase public awareness and understanding of the long-term fiscal outlook; to stimulate public and policy debates; and to help policymakers make informed decisions. These reports go beyond the effects of individual pension and health care programs to show the effect of these programs on the government budget as a whole. Unlike accrual or cash budgeting, which are intended to provide annual cost information, fiscal sustainability reporting provides a framework for understanding the government’s long-term fiscal condition, including the interaction of federal programs, and whether the government’s current programs and policies are sustainable. In fiscal sustainability reports, countries measure both the effect of current policy on the government’s fiscal condition and the extent of policy changes necessary to achieve a desired level of sustainability. These countries hope that a greater understanding of the profound changes they will experience in the decades ahead will help stimulate policy debates and public discussions that will assist them in making fiscally sound decisions for current and future generations and in achieving high and stable rates of long-term economic growth. Fiscal sustainability is generally described by countries as the government’s ability to manage its finances so it can meet its spending commitments now and in the future. A sustainable fiscal policy would encourage investment and allow for stable economic growth so that future generations would not bear a tax or debt burden for services provided to the current generation. An unsustainable condition exists when demographic and other factors are projected to place significant pressures on future generations and government finances over the long term and result in a growing imbalance between revenues and expenditures. Four of the six case study countries produce reports on long-term (i.e., more than 10 years) fiscal sustainability. The Netherlands first issued a report on the long term in 2000. Both the United Kingdom and Australia followed, issuing their first reports in 2002. New Zealand issued its first report in 2006. Of our case study countries, only Canada and Iceland currently do not issue long-term fiscal sustainability reports. However, Canada is planning to issue a comprehensive fiscal sustainability and intergenerational report in the near future. Of our limited review countries, Norway reported that it has traditionally provided Parliament reports on long-term budget projections as well as fiscal sustainability analyses. Further, Switzerland is planning to issue a long-term fiscal sustainability report in early 2008. The European Commission is also increasing its focus on the fiscal sustainability of the EU member states, including the Netherlands, United Kingdom, Denmark, and Sweden, as part of the Stability and Growth Pact (SGP). The SGP, an agreement by EU member states on how to conduct, facilitate, and maintain their Economic and Monetary Union requirements, requires member states to submit Stability or Convergence Reports, which are used by the European Council to survey and assess the members’ public finances. The guidelines for the content of these reports were changed in 2005 to include a chapter with long-term projections of public finances and information on the country’s strategies to ensure the sustainability of public finances.
The European Commission uses this information to annually assess and report on the long-term sustainability of all EU members, including consideration of quantitative measures (e.g., primary balance, debt-to-GDP) and qualitative considerations of other factors, such as structural reforms undertaken and reliability of the projections. Such reporting includes an assessment of the sustainability of member countries’ finances, policy guidance to EU members to improve sustainability, and discussion of the effect of significant policy changes on the sustainability of member countries’ finances. The Commission released its first comprehensive assessment on the long-term sustainability of public finances in October 2006. Whether a government will be able to meet its commitments when they arise in the future may depend on how well it reduces its debt today so the burden does not fall entirely to future generations. Countries may have different assumptions about what is sustainable, but one aim is to keep debt at “prudent levels.” Several of our case study countries have set debt-to-GDP targets in their efforts to address fiscal sustainability issues. For example, Canada wants to reduce its net debt (i.e., financial liabilities less financial assets) for all levels of government to zero by 2021. Similarly, New Zealand’s current objective is to reduce debt to around 20 percent of GDP over the next decade. The United Kingdom, under its sustainable investment rule, requires that public sector net debt be maintained below 40 percent of GDP over the economic cycle. Australia and the Netherlands have no explicit debt level targets, although the Netherlands is subject to EU limits on general government debt. The countries studied used a number of measures to assess the fiscal sustainability of their policies. Common approaches to assessing fiscal sustainability include cash-flow measures of revenue and spending and public debt as a percent of GDP, as well as summary measures of fiscal imbalance and fiscal gap (see table 2). Each measure provides a different perspective on the nation’s long-term financing. Cash-flow measures are useful for showing the timing of the problem and the key drivers, while measures such as the fiscal imbalance or fiscal gap are useful for showing the size of action needed to achieve fiscal sustainability. Each measure has limitations by itself and presents an incomplete picture. Therefore, most countries use more than one measure to assess fiscal sustainability. Two measures—the fiscal gap and fiscal imbalance—show the size of the problem in terms of action needed to meet a particular budget constraint. Changes in these measures over time are useful for showing improvement or deterioration in the overall fiscal condition. The fiscal gap shows the change in revenue or noninterest spending needed immediately and maintained every year to achieve a particular debt target at some point in the future. The fiscal imbalance (or intertemporal budget constraint) is similar to the fiscal gap, but the calculation assumes all current debt is paid off by the end of the period. These summary measures can also be calculated in terms of the adjustment needed in the future if adjustment is delayed (which would increase its size). The change in policy can be in the form of adjustments to taxes, spending, or both.
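One common way to formalize these two summary measures (our notation, offered for illustration rather than drawn from any particular country's report) is as follows. Let Y_t be projected GDP in year t, pd_t the projected primary deficit under current policy, r the government's interest (discount) rate, b_0 today's debt-to-GDP ratio, and b_T the target ratio at horizon T. The fiscal gap is the constant share of GDP, g, by which the primary balance must improve in every year so that debt ends the period at the target:

\[
g \;=\; \frac{\,b_0 Y_0 \;+\; \sum_{t=1}^{T} \dfrac{pd_t}{(1+r)^t} \;-\; \dfrac{b_T\, Y_T}{(1+r)^T}\,}{\sum_{t=1}^{T} \dfrac{Y_t}{(1+r)^t}}
\]

Setting b_T = 0 recovers the fiscal imbalance, which assumes all current debt is paid off by the end of the period. Multiplying g by the denominator (the present value of GDP) restates the same gap in present-value dollars, which, as discussed below, makes it far more sensitive to the discount rate than the share-of-GDP form.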
A positive fiscal gap or imbalance implies that fiscal policy should be tightened (i.e., spending cut or taxes raised), while a negative fiscal gap or imbalance implies that fiscal policy could be loosened (i.e., spending increased or taxes reduced). A fiscal gap or imbalance implies potential harm to future generations if action to make public finances sustainable is deferred, thus requiring more budgetary actions (or higher interest costs) in the future than today. It should be noted that a fiscal gap or imbalance of zero over a finite period does not mean that current fiscal policy is sustainable forever. For example, debt could still be rising faster than GDP at the end of the period. Another limitation of these summary measures is that by definition they do not provide information on the timing of receipts and outlays, which is important.

Most of the countries we studied used share of GDP measures rather than present value dollar measures. In part, this is to avoid the situation in which a small change in the discount rate assumption leads to large swings in the dollar-based sustainability measures. Present value dollar measures are highly sensitive to assumptions about the discount rate. An increase of 0.5 percentage points in the discount rate used to calculate the U.S. fiscal gap reduces the present value of the fiscal gap from $54.3 trillion to $47.7 trillion; in contrast, such a change results in a smaller proportional change to the gap as a share of GDP, from 7.5 to 7.3 percent. Also, since the numbers can be so large, it may be difficult for policymakers and the general public to understand them without placing the numbers in the context of the resources available in the economy to finance the fiscal gap.

Fiscal sustainability reports are required by law in two countries—Australia and New Zealand. The legislation underpinning both countries' fiscal sustainability reports does not dictate in detail what measures should be included in the report. Rather, the law specifies only the frequency of reporting (i.e., every 4 years for New Zealand and every 5 years for Australia), the years to be covered, and the overall goal. Both Australia and New Zealand are required to assess the long-term sustainability of government finances over a 40-year horizon. Switzerland is required by law and an accompanying regulation to issue a sustainability report periodically, but at least every 4 years. Neither the Netherlands' nor the United Kingdom's reports are required by law. Instead, the reports stem from political commitments of the current government. The Netherlands prepared its first report in 2000 and reported again in 2006. In the United Kingdom, the current government made a political commitment, as part of its fiscal framework, to report annually on long-term fiscal challenges and has prepared reports annually since 2002. Canada's upcoming report also stems from a commitment made by the current government. A drawback of not having any legal or legislative requirement for the report is that future governments may or may not continue what the current government started.

The size of a nation's fiscal gap or fiscal imbalance will depend on the time period chosen. Even if a particular sustainability condition is satisfied over the chosen period, there may still be fiscal challenges further out. Extending the time period can partially address this limitation, but it increases uncertainty.
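The discount-rate and horizon sensitivities described above are easy to demonstrate numerically. Below is a minimal Python sketch; the GDP level, growth rate, and deficit path are stylized assumptions chosen for illustration and are not the actual projections behind the U.S. figures cited above.

```python
# Minimal sketch of fiscal-gap sensitivity. All inputs are stylized
# assumptions, not the actual projections behind the U.S. figures.

GDP0 = 14.0e12   # assumed initial GDP, dollars
g = 0.045        # assumed nominal GDP growth rate

def primary_deficit_share(t, horizon):
    # Assumed path: the primary deficit drifts from 2% toward 10% of GDP,
    # a rough stand-in for aging- and health-driven spending growth.
    return 0.02 + 0.08 * t / horizon

def gap_measures(r, horizon):
    """Present-value fiscal gap in dollars, and as a share of GDP
    (the constant share of GDP that closes the gap over the horizon)."""
    pv_deficit = pv_gdp = 0.0
    for t in range(1, horizon + 1):
        gdp_t = GDP0 * (1 + g) ** t
        pv_deficit += gdp_t * primary_deficit_share(t, horizon) / (1 + r) ** t
        pv_gdp += gdp_t / (1 + r) ** t
    return pv_deficit, pv_deficit / pv_gdp

for r in (0.050, 0.055):      # a 0.5-percentage-point change in the rate
    pv, share = gap_measures(r, horizon=75)
    print(f"r = {r:.1%}: PV gap ${pv / 1e12:.1f} trillion, {share:.1%} of GDP")

for horizon in (50, 75):      # the horizon choice also matters
    _, share = gap_measures(0.05, horizon)
    print(f"T = {horizon}: gap {share:.1%} of GDP")
```

The dollar measure swings sharply with the rate while the share-of-GDP measure barely moves, because the same discounting applies to both the deficits and the GDP available to finance them.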
Most of the case study countries that prepare fiscal sustainability reports cover the next 40 to 50 years. However, the Netherlands' report goes out through 2100. The United Kingdom calculates the intertemporal budget constraint over an infinite time horizon, which poses a high degree of uncertainty. Choosing the horizon for the fiscal gap or imbalance calculations therefore involves a trade-off: it should be long enough to capture all the major future budgetary developments but also short enough to minimize uncertainty. It may be best to present these measures over a range of horizons.

As with any long-term projection, uncertainty is an issue. To deal with the uncertainty of projections, countries have done sensitivity analysis. For example, the United Kingdom performed a sensitivity analysis using different assumptions for productivity growth and interest rates. The United Kingdom found that the fiscal gap was robust to changes in productivity growth, meaning that the required policy action changed little. However, the fiscal gap was more sensitive to changes in the interest rate assumption. For example, an increase in the interest rate assumption from 2.5 percent to 3.0 percent increases the United Kingdom's fiscal gap for the 50-year period by 50 percent, from 0.5 percent to 0.75 percent of GDP.

Sustainability requirements are important when setting short- and medium-term policy targets. The sooner countries act to put their governments on a more sustainable footing, the better. Acting sooner rather than later permits changes to be phased in more gradually and gives those affected time to adjust to the changes. Citizens can adjust their savings now to prepare for retirement. In the Netherlands, a medium-term fiscal target has been set based on the information presented in the sustainability report. The current government has explicitly linked expenditure ceilings and revenue targets to attaining a structural fiscal surplus of 1 percent of GDP at the end of 2011, which the Netherlands Bureau of Economic Policy Analysis has estimated is needed for public finances to be sustainable given the impending population aging. In addition, a study group recommended that the adjustments be introduced gradually so that they are bearable for all generations.

According to New Zealand officials, its fiscal sustainability report shows that long-term demographic pressures will make it increasingly hard to meet fiscal objectives and that policy adjustments will therefore be required. Recognizing that small changes made now will help prevent big changes later on, officials said the report has encouraged and enabled greater consideration of the long-term implications of new policy initiatives in the budget process. New Zealand intends to link departments' annual Statements of Intent to long-term projections. Under this approach, departmental objectives will have to be modified or justified to meet the long-term objectives.

Before implementing accrual budgeting, some countries were experiencing moderate to large deficits. Some countries' dependence on trade and foreign borrowing raised concerns that increased deficits could lead to rising interest rates, devaluation of the currency, and ultimately a financial crisis. As a result, fiscal discipline was necessary. Accrual budgeting was adopted as part of larger reforms to improve transparency, accountability, and government performance.
The United States faces long-term fiscal challenges that, absent reforms, could have adverse effects in the form of higher interest rates, reduced investment, and more expensive imports, ultimately threatening our nation's well-being. The range of approaches used by countries in our study illustrates that accrual budgeting need not be viewed as a "one size fits all" choice. The experiences of countries in our study show that the switch to accrual budgeting was most beneficial for programs where cash- or obligations-based accounting did not recognize the full program cost up front. As we stated in 2000 and in other GAO reports, increased accrual information in certain areas of the budget—insurance, environmental liabilities, and federal employee pensions and retiree health—can help the Congress and the President better recognize the long-term budgetary consequences of today's operations and help prevent these areas from becoming long-term issues. However, accrual budgeting raises significant challenges for the management and oversight of capital purchases and noncash expenses, especially depreciation. Many of our case study countries implemented additional controls to maintain up-front control over resources within their accrual budget frameworks. Indeed, in the U.S. system of government where the Congress has the "power of the purse," maintaining control over resources is important.

While cost and performance information provided under accrual budgeting can be useful, this information must be reliable if budget decisions are to be based on it. We have reported that the financial management systems at the majority of federal agencies are still unable to routinely produce reliable, useful, and timely financial information. Until there is better financial information, a switch to full accrual budgeting may be premature. As we reported in a previous report on U.S. agencies' efforts to restructure their budgets to better capture the full cost of performance, the use of full-cost information in budget decisions may reflect rather than drive the development of good cost information in government.

Further, challenges exist in estimating accrual-based cost information for some areas, including veterans compensation, federal employee pensions and retiree health, insurance, and environmental liabilities, that require a significant amount of the government's future cash resources. For example, estimates of future outlays for pensions or veterans compensation depend on assumptions about future wages, inflation, and interest rates that are inherently uncertain and subject to volatility. Trends in health care costs and utilization underlying estimates of federal employee postretirement health benefits have also been volatile. The estimated cleanup costs of the government's hazardous waste are another area where the accrued expenses may not be based on reliable estimates. Not all environmental liabilities have been identified, and cleanup and disposal technologies are not currently available for all sites. However, in areas such as these, it may be preferable to be approximately right rather than exactly wrong. Failure to pay attention to programs that require future cash resources can further mortgage our children's future.

Although accrual budgeting can provide more information about annual operations that require future cash resources, it does not provide sufficient information to understand broader long-term fiscal sustainability.
An accrual budget does not include costs associated with future government operations and thus would not help recognize some of our greatest long-term fiscal challenges—those related to Social Security, Medicare, and Medicaid. A growing trend in other countries is to develop reports on fiscal sustainability that evaluate the fiscal condition not only of the key drivers of the nation's long-term fiscal outlook but of the government as a whole. Fiscal sustainability reports that show future revenue and outlays for social insurance programs and the interrelationship of these programs with all federal government programs would provide a comprehensive analysis of the nation's fiscal path and the extent to which future budgetary resources would be sufficient to sustain public services and meet obligations as they come due. By highlighting the trade-offs among all federal programs competing for federal resources, such a report would improve policymakers' understanding of the tough choices that will have to be made to ensure future generations do not bear an unfair tax or debt burden for services provided to current generations.

Most countries recognize the need for various measures of fiscal position, including projected debt-to-GDP ratios and fiscal gap measures. Since no single measure or concept can provide policymakers with all the information necessary to make prudent fiscal policy decisions, it is necessary to use a range of measures or concepts that show both the size of the problem and the timing of when action is needed.

This study and the deterioration of the nation's financial condition and fiscal outlook since 2000 confirm our view that the Congress should consider requiring increased information on the long-term budget implications of current and proposed policies on both the spending and tax sides of the budget. In addition, the selective use of accrual budgeting for programs that require future cash resources related to services provided during the year would provide increased information and incentives to manage these long-term commitments. While the countries in our study have found accrual-based information useful for improving managerial decision making, many continue to use cash-based information for broad fiscal policy decisions. This suggests that accrual measures may be useful supplements to, rather than substitutes for, our current cash- and obligations-based budget. Presenting accrual information alongside cash-based budget numbers, particularly in areas where it would enhance up-front control of budgetary resources, would put programs on a more level playing field and be useful to policymakers both when debating current programs and when considering new legislation.

Since accrual-based budgeting would not provide policymakers with information about our nation's largest fiscal challenges—Social Security, Medicare, and Medicaid—fiscal sustainability reporting could help fill this void. The reports could include both long-term cash-flow projections and summary fiscal gap measures for the whole of government that would show both the timing and the overall size of the nation's fiscal challenges. Accrual budgeting and fiscal sustainability reporting are only means to an end; neither can change decisions in and of itself. The change in measurement used in the budget provides policymakers and program managers with different information, but the political values and instincts of policymakers may not change.
While recognizing fuller costs could help inform policymakers of the need for reform, addressing those costs will require action on their part. Any expansion of accrual-based concepts in the budget or increased reporting requirements would need to be accompanied by a commitment to fiscal discipline and political will.

To increase awareness and understanding of the long-term budgetary implications of current and proposed policies, the Congress should require increased information on major tax and spending proposals. In addition, the Congress should consider requiring increased reporting of accrual-based cost information alongside cash-based budget numbers for both existing and proposed programs where accrual-based cost information includes significant future cash resource requirements that are not yet reflected in the cash-based budget. Such programs include veterans compensation, federal employee pensions and retiree health, insurance, and environmental liabilities. To ensure that the information affects incentives and budgetary decisions, the Congress could explore further use of accrual-based budgeting for these programs. Regardless of what is decided about the information and incentives for individual programs, the Congress should require periodic reports on fiscal sustainability for the government as a whole. Such reports would help increase awareness of the longer-term fiscal challenges facing the nation in light of our aging population and rising health care costs as well as the range of federal responsibilities, programs, and activities that may explicitly or implicitly commit the government to future spending.

We are sending copies of this report to interested parties. Copies will also be sent to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact Susan Irving at (202) 512-9142 or [email protected] if you have any questions about this report. Key contributors are listed in appendix II.

To update the findings of our 2000 report, we examined (1) where, how, and why accrual budgeting is used in select Organisation for Economic Co-operation and Development (OECD) countries and how it has changed since 2000; (2) what challenges and limitations were discovered and how select OECD countries responded to them; (3) what select OECD countries perceived the effect to have been on policy debates, program management, and the allocation of resources; (4) whether accrual budgeting has been used to increase awareness of long-term fiscal challenges and, if not, what is used instead; and (5) what the experience of select OECD countries and other GAO work tell us about where and how the increased use of accrual concepts in the budget would be useful and ways to increase the recognition of long-term budgetary implications of policy decisions.

To address these objectives, we primarily focused on the six countries in the 2000 GAO report: Australia, Canada, Iceland, the Netherlands, New Zealand, and the United Kingdom. We also did a limited review of two other nations—Denmark and Switzerland—that have recently expanded the use of accrual measures in the budget. Since these countries may not provide a complete picture of the potential limitations or the use of alternative ways to increase the focus on long-term fiscal challenges, we also looked at two countries—Norway and Sweden—that considered expanding the use of accrual measurement in the budget but decided against it, to understand why.
We reviewed budget publications and used a set of questions to gather information on how and why accrual concepts are used in the budget in the selected countries and how this has changed since 2000. For context, we also reviewed the results of a recent survey done by the OECD on budgeting practices in all OECD countries and compared them with older survey results to understand general trends in the use of accrual budgeting over time. To identify factors that facilitated accrual budgeting; strategies for addressing commonly cited implementation challenges; and how and where accrual budgeting has or has not changed the budget debate, we primarily focused on the six countries studied in 2000. We interviewed (by e-mail, telephone, and videoconferencing) officials from the budget and national audit offices in select countries and reviewed official budget documents and related literature to gather information on the challenges and limitations of accrual budgeting; how the use of accruals in the budget has affected policy debates, resource allocation decisions, and program management; and other approaches used to address long-term fiscal challenges. We did not interview parliamentary officials or staff or program managers. The information on foreign laws in this report does not reflect our independent legal analysis, but is based on interviews and secondary sources.

We identified key themes from the experience of other nations, reviewed past GAO work, and considered the differences between other nations and the United States to identify useful insights about how to use more accrual-based or other information to inform budget debates. The experience of any one OECD country is not generalizable to other countries. In analyzing other countries' experiences and identifying useful insights for the United States, it is important to consider the constitutional differences between Parliament in parliamentary systems of government and the Congress of the United States, especially in the role each legislature plays in the national budget process. The U.S. Congress is an independent and separate, but coequal, branch of the national government with the constitutional prerogative to control federal spending and resource allocation. Many important decisions that are debated during the annual budget and appropriations process in the United States occur in case study countries before the budget is presented to Parliament for approval. Also, most case study countries generally deal with the approval of obligations through agency or bureaucratic controls, whereas in the United States congressional approval (i.e., "budget authority") is required before federal agencies can obligate funds. Further, most case study countries used purely cash reporting for budgeting before adopting accrual budgeting. In contrast, the United States' obligation-based budgeting already captures many obligations not apparent in a purely cash system. These differences are likely to influence perspectives on the trade-offs associated with the use of accrual budgeting, particularly in terms of accountability and legislative control.

Key contributors to this assignment were Jay McTigue, Assistant Director; Melissa Wolf, Analyst-in-Charge; Michael O'Neill; and Margit Willems Whitaker. | The federal government's financial condition and fiscal outlook have deteriorated dramatically since 2000.
The federal budget has gone from surplus to deficit, and the nation's major reported long-term fiscal exposures--a wide range of programs, responsibilities, and activities that either explicitly or implicitly commit the government to future spending--have more than doubled. Current budget processes and measurements do not fully recognize these fiscal exposures until payments are made. Increased information and better incentives to address the long-term consequences of today's policy decisions can help put our nation on a more sound fiscal footing. Given its interest in accurate and timely information on the U.S. fiscal condition, the Senate Committee on the Budget asked us to update our study of other nations' experiences with accrual budgeting and look at other ways countries have increased attention to their long-term fiscal challenges. In 2000, GAO reviewed the use of accrual budgeting--or the recording of budgetary costs based on financial accounting concepts--in Australia, Canada, Iceland, the Netherlands, New Zealand, and the United Kingdom. These countries had adopted accrual budgeting more to increase transparency and improve government performance than to increase awareness of long-term fiscal challenges. Accrual budgeting continues to be used in all six countries; Canada and the Netherlands, which use accrual information selectively, considered expanding the use of accruals but thus far have made only limited changes. Since 2000, other countries have considered using accrual budgeting. For example, Denmark and Switzerland began using accrual budgeting on a selective basis. Norway and Sweden, however, rejected accrual budgeting primarily because they believed cash budgeting enables better control over resources. Countries have taken different approaches in the design of their accrual budgets. Regardless of the approach taken, cash information remains important in all the countries for evaluating the government's finances. Other countries' experiences show that accrual budgeting can be useful for recognizing the full costs of certain programs, such as public employee pensions and retiree health, insurance, veterans benefits, and environmental liabilities, that will require future cash resources. However, these other countries do not use accrual budgeting to recognize their long-term fiscal challenges, which are primarily driven by public health care and pension programs. Instead, many countries in GAO's study have begun preparing fiscal sustainability reports to help assess these programs in the context of the overall sustainability of government finances. European Union members also annually report on longer-term fiscal sustainability. Although no change in measurement or reporting can replace substantive action to meet our longer-term fiscal challenge, GAO believes that better and more complete information, both on the full-cost implications of individual decisions and on the fiscal sustainability of the government's finances, can help.
A number of events are important in the history of DOE's U.S. Plutonium Disposition program. In 1994, the United States declared 38.2 metric tons of weapons-grade plutonium as surplus to national security needs. In 1997, DOE announced a plan to dispose of surplus, weapons-grade plutonium through the following dual approach: (1) conversion into mixed oxide (MOX) fuel and (2) immobilization in glass or ceramic material. According to DOE, its approach would require the construction of three facilities—a pit disassembly and conversion facility (PDCF), a MOX fuel fabrication facility (MFFF), and an immobilization facility. In 2000, the United States and Russia entered into a Plutonium Management and Disposition Agreement, in which each country pledged to dispose of at least 34 metric tons of surplus, weapons-grade plutonium, including the disposition of no less than 2 metric tons of plutonium per year. In 2000, DOE announced in a record of decision that it would construct a pit disassembly and conversion facility, a MOX fuel fabrication facility, and an immobilization facility at the Savannah River Site (SRS). In 2002, the National Nuclear Security Administration (NNSA) canceled the immobilization portion of its surplus plutonium disposition strategy due to budgetary constraints. In addition, according to NNSA officials, NNSA canceled the immobilization portion because (1) Russia would not dispose of its plutonium if the United States adopted an immobilization-only approach and (2) the technology for MOX fuel fabrication had been in use in Europe for three decades, whereas immobilization of weapons-grade plutonium in glass or ceramic had never before been demonstrated. In 2003, NNSA announced that it was pursuing a MOX-only plutonium disposition program to dispose of 34 metric tons of surplus, weapons-grade plutonium.

The majority of the 34 metric tons of surplus, weapons-grade plutonium is in the form of pits, clean metal, and oxides. The remainder is in nonpit forms, such as contaminated metal, oxides, and residues from the nuclear weapons production process. While NNSA plans to build a pit disassembly and conversion facility to obtain plutonium from pits, it also plans to use the ARIES project—a technology development and demonstration project for pit disassembly and conversion located at Los Alamos National Laboratory (LANL)—to obtain a small amount of plutonium from pits. In addition, according to NNSA documents, NNSA plans to obtain plutonium from nonpit forms in two ways. First, the K-Area Facility at SRS is storing 4.1 metric tons of plutonium in nonpit form that is already suitable for use by the MFFF. Second, NNSA plans to prepare and process additional quantities of plutonium (3.7 metric tons) already at the K-Area Facility or planned for storage at the facility.

Prior work by GAO has identified persistent problems with cost overruns and schedule delays on the PDCF project. For example, in our March 2007 report on major DOE construction projects, we found that ineffective DOE project oversight, poor contractor management, and external factors were among the primary reasons for the cost increases and schedule delays associated with the PDCF project. In addition, according to a May 2005 DOE Inspector General report, NNSA officials attributed schedule delays for the PDCF to the disagreement between the United States and Russia about liability for work performed by U.S. contractor personnel working in Russia and to a change in funding priorities.

NNSA project directors are responsible for managing the MFFF, Waste Solidification Building (WSB), and PDCF projects and overseeing the contractors that design and construct these facilities.
In doing so, project directors follow specific DOE directives, policies, and guidance for project management. Among these is DOE Order 413.3A, which establishes protocols for planning and executing a project. The protocols require DOE projects to go through a series of five critical decisions as they enter each new phase of work. These decisions are as follows:
- Critical decision 0, which approves a mission-related need.
- Critical decision 1, which approves the selection of a preferred solution to meet a mission need and a preliminary estimate of project costs—an approval that is based on a review of a project's conceptual design.
- Critical decision 2, which approves that a project's cost and schedule estimates are accurate and complete—an approval that is based on a review of the project's completed preliminary design.
- Critical decision 3, which reaches agreement that a project's final design is sufficiently complete and that resources can be committed toward procurement and construction.
- Critical decision 4, which approves that a project has met its performance criteria for completion or that the facility is ready to start operations.

To oversee projects and approve these critical decisions, DOE conducts its own reviews, often with the help of independent technical experts. For example, for large projects (with a total project cost of greater than $100 million), DOE's Office of Engineering and Construction Management (OECM) validates the accuracy and completeness of the project's performance baseline as part of the critical decision 2 process.

DOE Order 413.3A also requires projects to use earned value management (EVM) to measure and report the progress of construction projects (with a total project cost of greater than or equal to $20 million). EVM measures the value of work accomplished in a given period and compares it with the planned value of work scheduled for that period and with the actual cost of work accomplished. Differences in these values are measured in both cost and schedule variances. EVM provides information necessary for understanding the health of a program and offers an objective view of program status. As a result, EVM can alert program managers to potential problems sooner than expenditures alone can, thereby reducing the chance and magnitude of cost overruns and schedule delays.

The following DOE offices and entities provide independent nuclear safety oversight:
- DOE's Office of Health, Safety and Security (HSS) is responsible for policy development, independent oversight, enforcement, and assistance in the areas of health, safety, the environment, and security across DOE. Among its functions are periodic appraisals of the environmental, safety, and health programs at DOE sites, including evaluation of a sample of high-hazard nuclear facilities at these sites to determine whether the program offices and their contractors are complying with DOE policies.
- The NNSA Central Technical Authority is responsible for maintaining operational awareness of nuclear safety on NNSA projects, especially with respect to complex, high-hazard nuclear operations, and for ensuring that DOE's nuclear safety policies and requirements are implemented adequately and properly.
- The Chief of Defense Nuclear Safety (CDNS) is responsible for evaluating nuclear safety issues and providing expert advice to the Central Technical Authority and other senior NNSA officials.
In particular, the CDNS is responsible for (1) validating that efforts to integrate safety into a project’s design include the use of a system engineering approach, (2) determining that nuclear facilities have incorporated the concept of defense-in-depth into the facility design process, and (3) validating that federal personnel assigned to an integrated project team as nuclear safety experts are appropriately qualified. Finally, DOE considers assessments and recommendations from external organizations, most prominently the Defense Nuclear Facilities Safety Board—an independent, external organization that reviews nuclear safety issues at DOE defense facilities and makes nonbinding recommendations to DOE. The MFFF and WSB construction projects both appear to be meeting their cost targets, but the MFFF project has experienced some delays over the past 2 years. In accordance with DOE project management requirements, both projects are using EVM to measure and report progress against their established cost and schedule estimates (also known as performance baselines) for construction. EVM provides a proven means for measuring such progress and thereby identifying potential cost overruns and schedule delays early, when their impact can be minimized. Differences from the performance baseline are measured in both cost and schedule variances. Positive variances indicate that activities are costing less or are completed ahead of schedule. Negative variances indicate that activities are costing more or are falling behind schedule. These cost and schedule variances can then be used in estimating the cost and time needed to complete the project. Figure 1 presents information on both cumulative cost and schedule variances for the MFFF project over the 2-year period ending November 2009. With respect to cost, the MFFF project has experienced fluctuating variances during this period. Overall, these cost variances are relatively small compared with the project’s average monthly expenditures of over $20 million. In addition, it is normal for variances to fluctuate during the course of a project. However, with respect to the project’s schedule, the MFFF project has experienced consistently negative variances for most of the past 2 years. Specifically, as shown in figure 1, these schedule variances were consistently negative for most of 2008, and, for much of 2009, the project had not completed almost $40 million in scheduled work. According to the data and project officials, delays during 2008 were due primarily to the delivery of reinforcing bars that did not meet nuclear quality standards. Specifically, in February 2008, NRC inspectors identified numerous pieces of reinforcing bars—steel rods that are used in reinforced concrete—that did not meet industry standards for nuclear facilities. At that point, NNSA’s contractor, Shaw AREVA MOX Services, LLC (MOX Services), had accepted delivery of about 10,000 tons of reinforcing bars on-site and had installed almost 4,000 tons. Although NRC and MOX Services officials determined that the error did not affect the safety of reinforcing bars already installed, this issue had a major effect on the overall schedule for pouring concrete and installing reinforcing bars in the structure during 2008. According to project officials, the project switched to a different supplier of reinforcing bars in September 2008 and by April 2009 had a sufficient supply of material to support the construction schedule. 
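The variance mechanics running through this discussion are simple to compute. Below is a minimal Python sketch; the numbers are illustrative (loosely scaled to the MFFF figures in the text, not actual project data). BCWS, BCWP, and ACWP are the standard earned value terms for planned value, earned value, and actual cost.

```python
# Minimal EVM sketch with illustrative numbers (not actual MFFF data).
# planned_value (BCWS): budgeted cost of work scheduled to date
# earned_value  (BCWP): budgeted cost of work actually performed
# actual_cost   (ACWP): actual cost of the work performed

planned_value = 520.0   # $ millions, cumulative
earned_value = 480.0
actual_cost = 485.0

cost_variance = earned_value - actual_cost          # negative -> over cost
schedule_variance = earned_value - planned_value    # negative -> behind schedule

cpi = earned_value / actual_cost      # cost performance index
spi = earned_value / planned_value    # schedule performance index

budget_at_completion = 4800.0         # illustrative total budget, $ millions
estimate_at_completion = budget_at_completion / cpi   # one common EAC formula

print(f"CV = {cost_variance:+.1f}M, SV = {schedule_variance:+.1f}M")
print(f"CPI = {cpi:.3f}, SPI = {spi:.3f}, EAC = {estimate_at_completion:.0f}M")
```

A negative schedule variance like the $40 million figure cited above means scheduled work has not yet been performed; dividing earned value by planned value (the SPI) expresses the same shortfall as a rate.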
Schedule delays in 2009 occurred primarily because project officials decided that they had not allocated sufficient time in the existing schedule to ensure the delivery of materials that would meet the stringent safety and design standards for nuclear facilities. For example, according to project officials, the project extended the amount of time needed to produce concrete for the MFFF to provide additional assurance that the concrete will meet nuclear quality standards. The rate of concrete production will be gradually increased beginning in early 2010, according to project officials. In addition, the project extended the amount of time needed to fabricate and deliver slab tanks, which are used to hold liquid fissile material, to provide additional assurance that these tanks meet stringent safety and design standards. In recent months, the MFFF project has improved its schedule performance, so that it faced roughly $25 million in uncompleted work by November 2009, compared with almost $40 million in uncompleted work earlier in the year. According to project officials, this amount of negative schedule variance is equivalent to about 2 to 3 weeks' worth of work on the project, and they expect to recover from this variance during 2010. In comparison, these officials stated that the project's schedule includes 16 months' worth of contingency to mitigate any risks from additional delays before the expected start of MFFF operations.

Figure 2 presents information on both cumulative cost and schedule variances for the WSB project over an 11-month period ending in November 2009. With respect to cost, the WSB project has experienced consistently positive cost variances. However, schedule variances have been consistently negative over the same period. By November 2009, the project had not completed over $4 million worth of scheduled work, compared with average monthly expenditures of roughly $2 million during fiscal year 2009. According to the NNSA federal project director, the schedule variances are due to a variety of factors, including delays in the procurement of cementation equipment and in the installation of piping due to inclement weather. However, the official said that he expects the project to recover from these delays and that none of these factors will affect the overall construction schedule for the project.

The reliability of a project's EVM system depends in large part on the reliability of its underlying schedule. A reliable schedule specifies when the project's work activities will occur, how long they will take, and how they relate to one another. We have previously identified nine key practices necessary for developing a reliable schedule. In a March 2009 testimony before this subcommittee, we identified several instances in which the MFFF project's schedule did not adhere to these practices. In particular, we found that MFFF project staff had not conducted a risk analysis on their current schedule. However, since our March 2009 testimony, MFFF project officials have taken a number of steps to address our concerns. For example, project officials conducted a risk analysis of the MFFF project schedule in the summer of 2009 and used the results to update their risk management plan. In addition, project officials stated that they have significantly reduced the number of scheduled activities with long durations—that is, activities with start-to-finish durations of over 200 days.
On the basis of these actions, we reevaluated the MFFF project's schedule against the nine key scheduling practices. We also evaluated the WSB project's schedule against these same practices. We found that both projects met most of the key practices to a satisfactory degree. For example, one key practice is to plan the schedule so that it can meet critical project dates. To do so, project officials must logically sequence all planned activities in the order that they are to be carried out. In particular, project officials must identify both predecessor activities—which must finish prior to the start of another activity—and successor activities—which cannot begin until other activities are completed. We found that the MFFF project had logically sequenced all scheduled activities, while the WSB project had logically sequenced the vast majority of its scheduled activities (a minimal sketch of this kind of sequencing check appears at the end of this passage). For the complete results of our analysis of the projects' schedules, see appendixes II and III.

NNSA recently announced that it is considering a new alternative for its pit disassembly and conversion mission. However, due to the amount of time and effort needed to reconsider alternatives and construct a facility, as well as the amount of uncertainty associated with the agency's new alternative, it seems unlikely that NNSA will be able to establish this capability in time to produce the plutonium oxide feedstock needed to operate the MFFF. As a result of the likely delay in establishing a pit disassembly and conversion capability, NNSA may need to expand the ARIES project at LANL to provide additional interim plutonium feedstock to the MFFF. However, NNSA has not sufficiently planned for such a contingency. In addition, NNSA has not sufficiently planned for the maturation of critical technologies to be used in pit disassembly and conversion operations.

In 1997, DOE decided to establish a pit disassembly and conversion capability as part of its strategy for plutonium disposition. Because about two-thirds of the plutonium slated for disposition is contained in nuclear weapon pit form, the ability to disassemble pits is critical to the success of the program. In 2000, DOE decided to construct and operate a PDCF at SRS. Through 2009, NNSA's strategy was to design and construct the PDCF as a new, stand-alone facility on a site adjacent to the current construction site of the MFFF. While NNSA has never established a definitive cost and schedule estimate for the PDCF project, a 2009 NNSA report estimated that the PDCF would cost $3.65 billion to construct and be operational by April 2021. However, DOE recently proposed a new alternative for establishing a pit disassembly and conversion capability at SRS. In September 2008, DOE authorized a study to review alternatives for siting the PDCF capability within existing facilities at SRS and thereby potentially improve its approach to the disposition of surplus plutonium at SRS. Specifically, the study looked at the feasibility of combining the capabilities of the PDCF project with the Plutonium Preparation project, another project at SRS being managed by DOE's Office of Environmental Management. The purpose of the Plutonium Preparation project, as approved by DOE in June 2008, was to prepare for the disposition of up to 13 metric tons of surplus, nonpit, plutonium-bearing materials that are either at the SRS K-Area Facility or planned for storage at the facility.
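The sequencing check referred to above can be sketched in a few lines of Python. The activity network here is invented for illustration (it is not drawn from the actual MFFF or WSB schedules); applied at full scale, the same idea is how scheduling tools flag dangling activities and confirm that the schedule logic contains no cycles.

```python
# Minimal sketch: checking the logical sequencing of schedule activities.
# The network below is illustrative, not from an actual project schedule.
from collections import deque

predecessors = {
    "install_rebar": [],
    "pour_basemat": ["install_rebar"],
    "erect_walls": ["pour_basemat"],
    "install_tanks": [],   # dangling: no predecessor defined
}

# Build the successor links implied by the predecessor links.
successors = {a: [] for a in predecessors}
for act, preds in predecessors.items():
    for p in preds:
        successors[p].append(act)

# Aside from true start and finish milestones, every activity should
# have both predecessor and successor links; flag the rest for review.
for act in predecessors:
    if not predecessors[act]:
        print(f"review: {act} has no predecessor")
    if not successors[act]:
        print(f"review: {act} has no successor")

# Kahn's algorithm: if every activity can be ordered, the logic is acyclic.
indegree = {a: len(p) for a, p in predecessors.items()}
ready = deque(a for a, d in indegree.items() if d == 0)
order = []
while ready:
    act = ready.popleft()
    order.append(act)
    for s in successors[act]:
        indegree[s] -= 1
        if indegree[s] == 0:
            ready.append(s)
assert len(order) == len(predecessors), "cycle detected in schedule logic"
print("logical order:", " -> ".join(order))
```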
According to DOE’s plans, the project would be installed in the K-Area Facility and would prepare the plutonium-bearing materials for disposition via two pathways: (1) converting some of the materials into plutonium oxide feedstock for the MFFF and (2) immobilizing the rest of the materials with high-level waste in glass using the Defense Waste Processing Facility at SRS. According to DOE’s 2008 preliminary estimate, this project would be operational in the 2013-2014 time frame at a cost of $340 million to $540 million. In November 2008, DOE issued a report stating that it would be feasible to combine the two projects at the K-Area Facility. According to NNSA’s preliminary estimates, the combined project would cost about $3.65 billion and would be constructed in two phases. The first phase would include the design and installation of equipment in one area of the K-Area Facility to provide the capability (formerly associated with the Plutonium Preparation project) to process 3.7 metric tons of surplus, nonpit plutonium, which would be used as an early source of plutonium oxide feedstock to the MFFF. The second phase would include the modification of a different area within the facility and the design and installation of equipment to provide the pit disassembly and conversion capability. In December 2008, NNSA suspended many of the activities associated with the PDCF project while it performed additional analyses, and DOE suspended activities associated with the Plutonium Preparation project. Finally, in November 2009, DOE approved the “pursuing” of the combined project approach, noting several potential benefits, such as greater funding flexibility, greater flexibility regarding DOE’s secure transportation system, the avoidance of expenditures associated with constructing a new facility, and the avoidance of costs associated with decontaminating and decommissioning two Category 1 nuclear facilities, among others. However, it appears unlikely that NNSA will be able to establish a pit disassembly and conversion capability in time to produce the plutonium feedstock needed to operate the MFFF beginning in 2021, due to the amount of time and effort needed to reconsider alternatives and construct a facility as well as the amount of uncertainty associated with the agency’s new proposal. First, according to NNSA officials, they do not expect to make a decision in the near future on which approach—either the PDCF as a stand-alone facility or the K-Area Facility combination project—they will ultimately approve. Specifically, officials told us that prior to making any decision, NNSA must first select its preferred alternative as part of the DOE critical decision 1 process. To prepare for critical decision 1, NNSA will need to develop and manage numerous details, including (1) the appropriate review and documentation pursuant to the National Environmental Policy Act; (2) a transfer by the Secretary of Energy from the Office of Environmental Management to NNSA of the necessary materials, functions, and facilities to carry out the preferred alternative; and (3) issues related to federal and contractor program management, contract management, project management, and budget/financial management. As a result, NNSA officials said that they are still developing plans and schedules for the combination project and cannot provide any specific project schedule dates at this time. 
In addition, they stated that once NNSA makes a final decision on its strategy for pit disassembly and conversion as part of the critical decision 1 process, it will take several additional years to develop definitive cost and schedule estimates for its final approach as part of the critical decision 2 process. Second, a number of issues with NNSA's new proposal raise doubts regarding whether the agency will be able to construct a facility in time to provide the plutonium feedstock necessary to operate the MFFF. For example:
- According to NNSA documents, the K-Area Facility combined project will require an aggressive, near-term acquisition strategy and project development effort to design, construct, and start a pit disassembly and conversion capability under the current time constraints. Phase 1 of the project is scheduled to be operational by 2014 to provide an early source of feedstock (from nonpit plutonium sources) to the MFFF, and Phase 2 must be operational by 2021 to provide the bulk of the plutonium oxide feedstock that the MFFF will require to meet its planned production schedule.
- According to NNSA documents, the existing schedule for the K-Area Facility combined project is at an early stage of development and lacks any quantified schedule contingency.
- The project will require construction within an existing, secure, operating facility. Specifically, the project will need to excavate material from existing walls and floors in numerous locations to install piping and utilities, among other things. According to NNSA, during these excavations, the project may encounter conditions that have not been documented in existing design drawings for the K-Area Facility. Construction of a new facility, the original plan for the PDCF project, carries fewer risks of encountering unknown conditions—such as undocumented electrical wiring or other physical interfaces.
- The project will require substantial coordination between NNSA and the Office of Environmental Management, as well as various contractor organizations, to address competing missions and out-year issues. As a result, according to NNSA, DOE may require additional federal resources and interface agreements between its various offices to ensure the proper integration and execution of the project.

NNSA's new alternative assumes that the K-Area Facility combined project will become operational by the 6th year of MFFF operations (2021). However, if the design and construction of the project are delayed, NNSA may have to rely on the ARIES project at LANL to provide additional plutonium oxide feedstock for the MFFF. The ARIES project includes (1) laboratory facility preparation activities, (2) the acquisition of gloveboxes, (3) the design and assembly of a control system to operate the demonstration modules, (4) the preparation of all system documentation requirements, (5) the demonstration of the disassembly and conversion of all types of surplus nuclear weapon pits, (6) material control and accountability, and (7) measurements of personnel radiation exposure from all surplus pit types. LANL conducts activities associated with the ARIES project at its Plutonium Facility 4 building, which was constructed in 1978 as a multiuse plutonium research and development facility. NNSA's current production mission for the ARIES project is to produce about 2 metric tons of plutonium oxide feedstock.
Specifically, LANL is to produce 50 kilograms of plutonium oxide by the end of fiscal year 2010, ramp up to a target rate of 300 kilograms per year in fiscal year 2012, and sustain this rate through fiscal year 2017. However, this material—along with additional quantities of plutonium in nonpit form currently stored at the K-Area Facility—will be enough for only the first 5 years of the MFFF production schedule. NNSA has examined the possibility of expanding the ARIES project at LANL to provide additional plutonium oxide feedstock to the MFFF. Specifically, in May 2008, NNSA published a report that estimated NNSA might need as much as 12 metric tons of plutonium oxide feedstock to bridge a time gap between the startup of operations at the MFFF and the PDCF. The report's authors evaluated several potential scenarios for increasing the amount of equipment and the number of work shifts at LANL and estimated that ARIES could produce up to 16.7 metric tons of plutonium oxide at a cost of over $700 million. In conducting its analysis, the report's authors made a number of assumptions, including that space would be available within the Plutonium Facility 4 building to accommodate an expanded ARIES mission and that LANL would be able to provide the necessary vault space for such a mission.

However, recent GAO work raises questions about the validity of these assumptions. Specifically, in May 2008, we assessed NNSA's plans to expand pit manufacturing operations within the Plutonium Facility 4 building. We found that NNSA would not be able to substantially increase its pit manufacturing capacity in the building for the foreseeable future because of several major constraints, including (1) limited vault space in the Plutonium Facility 4 building for storing pits and associated wastes and (2) competition for available floor space in the building due to the presence of other NNSA and DOE programs. For example, we found that vault space was one of the major limiting factors for pit production in fiscal year 2007, and that the vault was operating at 120 percent of its originally designed capacity.

In a more recent study, NNSA concluded that LANL would not be a viable option to perform the entire pit disassembly and conversion mission. Specifically, in a November 2009 report, NNSA stated that the ARIES project would be unable to sustain the annual output of plutonium oxide feedstock necessary to support MFFF operations for a number of reasons. For example, the report stated that because the Plutonium Facility 4 building is a one-of-a-kind, mission-critical facility for national defense, national defense missions in the facility will continue to take precedence over other programs—including the pit disassembly and conversion mission—for the foreseeable future. In addition, the report pointed out several of the same constraints on expanding operations in the Plutonium Facility 4 building that we described in our prior report on pit manufacturing. NNSA's November 2009 report also concluded that LANL continues to be a viable option to produce some additional plutonium oxide material to fill a potential gap if the PDCF project is delayed further. However, the report did not update the prior 2008 report to determine what additional amount of material it would be feasible for the ARIES project to produce. The report also did not provide estimates for how much an expanded ARIES mission would cost or when LANL would be able to produce additional plutonium oxide material.
Instead, the report noted that NNSA would need to prepare and validate a detailed, resource-loaded, integrated schedule for an expanded ARIES mission. As a result, it remains uncertain whether ARIES could fill a potential gap if NNSA's main pit disassembly and conversion operations are delayed. In March 2010, DOE stated that NNSA does not plan on expanding the current mission of the ARIES project until LANL demonstrates that it can sustain a production rate of 300 kilograms of plutonium oxide a year over an extended period of time. In addition, DOE stated that NNSA is evaluating other options to provide plutonium oxide feedstock to the MFFF prior to the start of pit disassembly and conversion operations. These options included (1) the use of 1.4 metric tons of fuel-grade plutonium—material originally not intended for use by the MFFF—already in storage at the K-Area Facility and (2) the startup of "limited but sufficient" pit disassembly processes.

NNSA's current strategy relies on a number of technologies that are critical to establishing a pit disassembly and conversion capability. These technologies include the following systems and components:
- Pit disassembly—includes a lathe, manipulators, and grippers to cut pits, extract the plutonium, and prepare it for oxidation.
- Hydride dehydride—includes two furnaces to separate plutonium from other pieces of material.
- Direct metal oxidation—includes a furnace to convert plutonium and uranium metal into plutonium and uranium oxide.
- Oxide product handling—includes mill rollers and a blender to size and blend the plutonium oxide product.
- Product canning—includes an automated bagless transfer system to package the final product.
- Sanitization—includes a microwave furnace to melt components that do not contain plutonium or uranium.

To demonstrate the viability of these technological components, DOE started the ARIES project at LANL in 1998. In addition, four other organizations are conducting testing and development activities in support of some of the critical technologies for pit disassembly and conversion: DOE's Savannah River National Laboratory, DOE's Pacific Northwest National Laboratory, the Clemson Engineering Technologies Laboratory, and a commercial vendor.

Assessing technology readiness is crucial at certain points in the life of a project. Within DOE's critical decision framework, such assessments are particularly important at critical decision 2—acceptance of the preliminary design and approval of the project's cost and schedule estimates as accurate and complete—and at critical decision 3—acceptance of the final design as sufficiently complete so that resources can be committed toward procurement and construction. Proceeding through these critical decision points without a credible and complete technology readiness assessment can lead to problems later in the project. Specifically, if DOE proceeds with a project when technologies are not yet ready, there is less certainty that the technologies specified in the preliminary or final designs will work as intended. Project managers may then need to modify or replace these technologies to make them work properly, which can result in costly and time-consuming redesign work. DOE has endorsed the use of the technology readiness level (TRL) process for measuring and communicating technology readiness in cases where technology elements or their applications are new or novel.
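Viewed mechanically, the TRL process amounts to a screening rule. The Python sketch below encodes the nine-point scale endpoints and the TRL 6 threshold that, as the guidance described next specifies, all critical technologies should meet by critical decision 2. The technology levels shown are illustrative placeholders, not the actual assessment results reported in table 1.

```python
# Minimal sketch of a TRL screen. The levels assigned below are
# illustrative placeholders, not the actual PDCF assessment results.

TRL_SCALE = {
    1: "basic principles observed",
    6: "system/subsystem model or prototype demonstration in relevant environment",
    9: "total system used successfully in project operations",
}
CD2_THRESHOLD = 6   # all critical technologies should reach TRL 6 by CD-2

assessed_trls = {          # hypothetical levels for the PDCF technology areas
    "pit disassembly": 6,
    "hydride dehydride": 5,
    "direct metal oxidation": 6,
    "oxide product handling": 4,
    "product canning": 6,
    "sanitization": 5,
}

print(f"CD-2 gate: TRL {CD2_THRESHOLD} ({TRL_SCALE[CD2_THRESHOLD]})")
for tech, trl in assessed_trls.items():
    if trl < CD2_THRESHOLD:
        print(f"{tech}: TRL {trl} - needs a technology maturation plan "
              "with preliminary cost and schedule estimates")
```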
In March 2008, DOE’s Office of Environmental Management published guidance on conducting technology readiness assessments and developing technology maturation plans. According to the guidance, staff should conduct technology readiness assessments using the TRL framework. Specifically, staff are to use a nine-point scale to measure TRLs. This scale ranges from a low of TRL 1 (basic principles observed) to a midlevel of TRL 6 (system/subsystem model or prototype demonstration in relevant environment) to a high of TRL 9 (total system used successfully in project operations). According to the guidance, for any critical technologies that did not receive a TRL of 6 or higher during such an assessment, staff should develop a technology maturation plan, which is supposed to describe planned technology development and engineering activities required to bring immature technologies up to the desired TRL of 6 or higher. This plan should include preliminary schedule and cost estimates to allow decision makers to determine the future course of technology development. In addition, the guidance stated that once a project reached the critical decision 2 stage, all critical technologies should have reached a TRL of 6. NNSA has undertaken a number of assessments of technological maturity and readiness for pit disassembly and conversion over the past decade as part of the ARIES project. For example, the PDCF project team carried out an evaluation of the maturity of ARIES equipment in 2003. According to project officials, the TRL framework was first used to assess the maturity of pit disassembly and conversion technologies in November 2008, in accordance with the Office of Environmental Management’s 2008 guidance. In addition, as part of an independent review of the PDCF project, NNSA issued a report in January 2009 that included a technology readiness assessment of the ARIES equipment and other critical technologies. The results of this assessment, as well as the earlier assessment conducted in 2008, are shown in table 1. As table 1 shows, there are a number of key technologies for pit disassembly and conversion that had not attained a TRL of 6. In accordance with the guidance on TRLs, NNSA should have a technology maturation plan in place to describe the planned technology development and engineering activities required to bring immature technologies up to the desired TRL of 6 or higher. According to NNSA officials, LANL had developed such a plan. However, we found that LANL’s plan lacked several key attributes of a technology maturation plan as described by DOE’s guidance. Specifically, we found the following problems with LANL’s plan: A technology maturation plan is supposed to be developed to bring all immature critical technologies up to an appropriate TRL. However, LANL’s plan only addressed the technologies under development at LANL as part of the ARIES project. The plan did not address technologies, such as the oxide product handling equipment, being tested by the four other organizations. For each technology assessed at less than TRL 6, a technology maturation plan should include preliminary schedule and cost estimates to allow decision makers to determine the future course of technology development. However, LANL’s plan did not include preliminary estimates of cost and schedule. LANL’s plan is dated November 2007. However, NNSA has conducted or sponsored two technology readiness assessments of the PDCF critical technologies since that date. 
As a result, LANL’s plan is out of date and does not take into account the current state of maturity of its critical technologies. NNSA officials told us that while they recognize some of the problems with the project’s existing technology maturation plan, they have already prepared budget and schedule estimates for technology development activities in a number of separate documents (including the overall PDCF project schedule). However, they still have not updated the current technology maturation plan in accordance with DOE guidance. Until such an update is completed, it is uncertain whether these technologies will be sufficiently mature in time to meet the current, aggressive schedule for establishing a PDCF capability. NNSA has offered several incentives to attract customers for its MOX fuel and is working toward a formal agreement for the Tennessee Valley Authority (TVA) to purchase most of this fuel. However, NNSA’s outreach to other utilities may not yet be sufficient to inform potential customers of incentives to use MOX fuel. NNSA and its contractor for the MFFF project, MOX Services, have established a production schedule for the fabrication of MOX fuel assemblies from surplus, weapons-grade plutonium. According to the current production schedule, the MFFF is to produce 8 MOX fuel assemblies in 2018, the initial year of production. The MFFF’s production rate is then to increase over the next 5 years up to a maximum rate of 151 fuel assemblies per year (see fig. 3). The MFFF is expected to produce 1,700 fuel assemblies during its production run. In addition, according to NNSA’s plans, these fuel assemblies will be designed for use in pressurized water nuclear reactors, which are the most common type of nuclear reactor in use in the United States. In June 2000, Duke Power (now Duke Energy Carolinas, LLC, or Duke), a power utility that operates seven pressurized water reactors in North Carolina and South Carolina, signed a subcontract with NNSA’s contractor for the MFFF project, MOX Services. According to NNSA officials, this subcontract gave the utility the option to purchase up to three-fourths of the MOX fuel produced by the MFFF at a discount relative to the price of normal reactor fuel, which uses low enriched uranium. According to the officials, the subcontract also obligated MOX Services to compensate Duke if the MOX fuel was not delivered by December 2007. However, as project delays continued to push back the start of construction, Duke, MOX Services, and NNSA began discussions in 2005 to renegotiate the subcontract. After nearly 3 years of discussions, Duke and MOX Services were unable to reach agreement by the negotiation deadline, and the subcontract automatically terminated on December 1, 2008. As negotiations with Duke came to an end, MOX Services, at NNSA’s direction, issued a request to nuclear utilities in October 2008 to express their interest in the MOX fuel program. The request outlined a number of possible incentives to mitigate the risks to utilities in using MOX fuel— risks that include the need to modify reactors and obtain an operating license amendment from NRC to use MOX fuel. For example, the request discussed the possibility of (1) selling MOX fuel at a discount relative to the price of uranium fuel and (2) paying for costs associated with modifying a reactor and obtaining an operating license amendment from NRC. 
Furthermore, in January 2009, DOE reserved 12.1 metric tons of highly enriched uranium from its stockpile and hired a contractor to downblend this amount into 155 to 170 metric tons of low enriched uranium to serve as a backup supply of fuel if MOX fuel deliveries to customers are delayed. As of December 2009, NNSA and MOX Services were still working on an agreement on liability if fuel is not delivered on time. According to NNSA officials, three utilities have responded to MOX Services' request and have expressed interest in the MOX fuel program. Notably, in February 2010, NNSA and TVA executed an interagency agreement to fund TVA studies on the use of MOX fuel in five of TVA's reactors. Under the agreement, TVA will perform work on core design, licensing, modifications, and other related activities to evaluate the use of MOX fuel in its reactors. According to an NNSA official, using MOX fuel in five of TVA's reactors could account for up to 85 percent of the MFFF's output. The official also stated that an agreement with TVA to become a customer could be signed by the fall of 2010. TVA officials stated that they believed that familiarity gained by working with DOE during the Blended Low Enriched Uranium program would help them work with DOE during the MOX program and cited this factor in their decision to begin discussions about becoming a customer for MOX fuel. Aside from TVA, NNSA officials characterized their contacts with two other utilities as being in the preliminary stages, and they could not estimate when or if they would secure them as customers for MOX fuel. Because utilities typically contract with fuel suppliers at least 5 years in advance, NNSA and MOX Services will need to secure customers several years before they deliver MOX fuel to them. NNSA officials said that their goal is to obtain at least one customer by the end of fiscal year 2010, in part because the 5-year period during which the MFFF will increase its production capacity will allow them additional time to secure more customers. Furthermore, if TVA agrees to be a customer and uses MOX fuel in five of its reactors, these officials said that NNSA may need only one additional utility to account for the remainder of the MFFF's planned production of MOX fuel assemblies. However, NNSA faces two main obstacles in obtaining TVA as its primary customer. First, some of TVA's reactors that would be candidates for using MOX fuel may not be permitted to use the fuel due to their status as backup reactors in DOE's tritium production program. According to NNSA officials, the 2000 U.S.-Russian plutonium disposition agreement could be interpreted as precluding reactors involved in weapons production from being used to dispose of MOX fuel. TVA officials told us that they are working with DOE to transfer tritium production responsibilities to another TVA reactor that is not presently a candidate for the MOX program. Second, although NNSA currently plans to produce MOX fuel assemblies for use in pressurized water reactors, three of TVA's reactors that are candidates for burning MOX fuel are boiling water reactors. NNSA officials told us that they are studying how the MFFF can be reconfigured to produce fuel assemblies for boiling water reactors. In particular, they stated that the MFFF's design is based on a French MOX Facility, which can switch production between fuel assemblies for pressurized water reactors and for boiling water reactors in about 10 to 20 days.
However, the officials also stated that they might need to conduct additional tests on using MOX fuel assemblies in boiling water reactors before producing the fuel assemblies in large quantities, and that it was unclear whether such tests would delay the MOX production schedule. In March 2010, DOE stated that NNSA is evaluating several options for providing alternative sources of plutonium oxide material to the MFFF prior to the start of pit disassembly and conversion operations. One option under consideration is to adjust the "quantity and timing in providing initial fuel deliveries" to potential customers. We interviewed fuel procurement officials at 22 of the nation's 26 nuclear utilities to determine the extent to which nuclear utilities are interested in participating in DOE's MOX fuel program and to evaluate what factors may influence their interest. The factors we asked about were based on input we received from industry experts, DOE officials, and former utility officials. (For a list of the structured interview questions that we asked utilities, see app. IV.) As shown in table 2, utility officials most often identified the following factors as very or extremely important when assessing their level of interest in participating in the MOX fuel program:

consistent congressional funding of the program,

DOE's ability to ensure timely delivery of MOX fuel,

DOE's ability to ensure the timely delivery of a backup supply of uranium fuel,

the cost of MOX fuel relative to the cost of reactor fuel, and

the opportunity to test MOX fuel in their reactors prior to full-scale use.

We then asked utilities about possible incentives—some of which have already been proposed by NNSA and DOE—that may affect their interest in becoming program participants. We also asked about scenarios in which DOE offered a discount of 15 percent and 25 percent for MOX fuel relative to the price of regular reactor fuel. As shown in table 3, DOE's payment for costs associated with reactor modifications and NRC licensing to use MOX fuel—two incentives DOE has actually proposed to utilities—resulted in the largest number of utilities expressing increased interest in participating in the MOX fuel program. However, despite the incentives offered, as of October 2009 the majority of the utilities that we interviewed expressed little or no interest in becoming MOX fuel customers. Specifically, 12 utilities reported they were either not interested or not very interested in becoming MOX fuel customers, 8 utilities were somewhat interested, and only 2 utilities indicated that they were currently very interested or extremely interested in the program. Three utilities indicated that they were currently interested enough to consider contacting DOE about becoming MOX fuel customers. When asked to consider the proposed incentives, however, 8 utilities expressed such interest. NNSA officials stated that they have communicated their willingness to provide incentives to potential customers. However, neither NNSA nor MOX Services has provided additional outreach or information to utilities in general since the October 2008 request for expression of interest. Furthermore, 11 utilities responded in our interviews that they had heard or read very little about the MOX fuel program, while 5 responded that they had received no information. In our view, the fact that so few utilities expressed sufficient interest in even contacting NNSA and MOX Services suggests that NNSA's outreach may not be sufficient.
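The distribution of interest reported above can be tabulated directly from the structured interview responses; the sketch below simply reproduces the counts for the 22 respondents so that the reported shares are easy to verify.

    # Tabulation of the utility interest levels reported in this section.
    from collections import Counter

    responses = (["not or not very interested"] * 12
                 + ["somewhat interested"] * 8
                 + ["very or extremely interested"] * 2)

    counts = Counter(responses)
    total = sum(counts.values())
    for level, n in counts.most_common():
        print(f"{level}: {n} of {total} utilities ({n / total:.0%})")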
NRC has primary regulatory responsibility for nuclear safety at the MFFF, and NRC's activities to date have included authorizing construction, identifying safety-related issues with construction, and reviewing the license application for the operation of the facility. DOE has primary regulatory responsibility for nuclear safety at the WSB and has looked at some aspects of nuclear safety for both the MFFF and the WSB as part of its management reviews. However, oversight by DOE's independent nuclear safety entities has been limited. NRC is responsible for licensing the MFFF to produce fuel for commercial nuclear reactors. To do so, NRC is using a two-stage review and approval process: the first stage is construction authorization, and the second stage is license application approval. The construction authorization stage began in February 2001, when the MFFF contractor submitted an application to begin construction. As part of the construction authorization review, NRC reviewed key documents, including the project's preliminary safety designs, environmental impact statement, and quality assurance plan. NRC approved the facility's construction authorization request in March 2005. NRC began its review of the MFFF project's application for a license to possess and use radioactive materials in December 2006. NRC has divided the license review into 16 areas, including criticality safety, chemical processing, and fire protection. NRC has issued requests for additional information for each of the 16 review areas. According to NRC officials, once NRC staff obtain all of the necessary information in a given area, they prepare a draft section for that area to be included in the draft Safety Evaluation Report for the facility. As shown in table 4, NRC had drafted sections for 6 of the 16 review areas as of January 2010. Once all of the draft sections are complete, NRC staff will prepare a draft safety evaluation report and, after concurrence from NRC management, will submit it to NRC's Advisory Committee on Reactor Safeguards—a committee of experts that is independent of the staff and that reports directly to NRC's commissioners—for review and comment. NRC staff are then to incorporate, at their discretion, the committee's comments into the license approval document and issue a final safety evaluation report for the facility, which NRC expects to occur in December 2010. Once NRC completes the licensing review and verifies that MOX Services has completed construction of the primary structures, systems, and components of the MFFF, it may issue the license. NRC officials stated that they could issue the license by 2014 or 2015, depending on the construction status of the facility. One issue that NRC raised during its review of the MFFF project is the design of safety controls to prevent a chemical reaction known as a "red oil excursion." Specifically, in January 2004, during the construction authorization stage, a senior NRC chemical safety reviewer stated that the MFFF's planned safety controls to prevent a red oil excursion differed from those recommended by DOE and the Defense Nuclear Facilities Safety Board. In response, NRC convened a panel in March 2005 to evaluate the reviewer's concerns.
The panel issued a report in February 2007 concluding that although NRC's construction authorization of the MFFF did not need to be revisited, there was wide agreement among NRC staff and the Advisory Committee on Reactor Safeguards that significant technical questions remained unanswered about the MFFF's planned safety controls. To address these technical questions, NRC has taken a number of actions, including the following:

NRC engaged the assistance of the Brookhaven National Laboratory to provide two independent assessments of the risk of a red oil excursion at the facility. Brookhaven National Laboratory issued an initial report in March 2007 and a follow-up report in August 2009 in which it examined updated safety information provided by MOX Services. The second of the two reports concluded that the risk of a red oil excursion at the facility is highly unlikely.

During the current licensing application stage, NRC officials have requested and received additional information from MOX Services related to planned safety controls to prevent a red oil excursion. However, at the time of our review, NRC staff had not completed their draft safety evaluation report for this area.

NRC's oversight responsibilities also include inspecting the construction of the MFFF as well as the implementation of the project's quality assurance program. NRC's Division of Construction Projects, based in NRC's Region II headquarters in Atlanta, conducts periodic inspections of the MFFF that assess the design and installation of the facility's principal structures, systems, and components and verify that the project's quality assurance program is adequately implemented. These inspections involve document reviews and site inspections over several weeks and can include specialty reviews in welding, concrete, and other construction subject areas. NRC evaluates the MFFF's construction against standards set by the American Concrete Institute and the American Society of Mechanical Engineers, among others. In addition to the Region II inspections, NRC maintains one resident inspector at the construction site who conducts day-to-day inspection activities, such as walk-throughs. NRC also plans to hire an additional full-time resident inspector for the MFFF in fiscal year 2010. As part of its ongoing inspection of the construction of the MFFF, NRC has issued 16 notices of violation against MOX Services since the start of construction in August 2007 related to various subjects, including quality assurance and control over design changes. (See app. V for a complete list and description of NRC notices of violation.) Although NRC has classified all of the violations to date as severity level IV, the lowest safety-significant designation in its four-category scale, the violations have had an effect on the project's schedule. In addition to its regular construction reviews, NRC issues periodic assessments of the contractor's performance. In its latest assessment, released in November 2009, NRC concluded that MOX Services had conducted its overall construction activities at the MFFF in an acceptable manner. However, NRC also determined that MOX Services must improve its control over changes to the MFFF's design and increase its attention to its quality assurance oversight of vendors. NRC identified several examples of deficiencies associated with performing, verifying, and documenting design changes and noted failures on the part of MOX Services to adequately translate requirements into design and construction documents.
In addition, NRC concluded that its finding of a violation related to MOX Services' vendor oversight indicates "a challenge to [MOX Services'] quality assurance staff to provide effective oversight of vendors that perform work on, fabricate, or supply components and equipment for use at the MFFF." In its assessment, NRC stated that it will conduct additional inspections to assess the effectiveness of MOX Services' corrective actions. In response to NRC's assessment, MOX Services stated that it is taking steps to strengthen its design control process, such as increasing training for quality control supervisors; introducing quality control checklists into its subcontractor and construction procedures; and conducting oversight visits to vendors. Although DOE has incorporated elements of nuclear safety in management reviews of the MFFF and the WSB projects that were conducted as part of its critical decision review process, DOE's independent nuclear safety entities were minimally involved. As part of the critical decisions 2 and 3 review process for the MFFF project, OECM conducted a review of the MFFF project during April and May 2006, which included nuclear safety as one of several review areas. A review team comprising independent consultants and former DOE officials evaluated, among other things, the integration of nuclear safety into the project's environmental, safety, and health programs, as well as the contractor's process for addressing issues found by NRC. The review identified one finding related to safety, noting that the ongoing revision of the project contract could introduce conflicts with NRC regulations. NNSA accepted the review's recommendation to develop a memorandum of understanding with NRC to resolve this issue. Regarding the WSB project, OECM conducted a review during September 2008—as part of the critical decision 2 process—that included nuclear safety as one of several review areas. The review team examined key WSB documents related to nuclear safety, including the facility's safety evaluation report, preliminary documented safety analysis, and the design hazard analysis report. The review team recommended that an additional hazard analysis for one system be performed but determined that overall, the hazard analyses and safety assessments for the WSB were comprehensive and complete. In addition, NNSA's Office of Project Management and Systems Support conducted another review of the WSB project during September 2008 as part of the critical decision 3 process. Because it was almost simultaneous with OECM's review, NNSA's review was less comprehensive and focused specifically on the WSB's ability to protect against a red oil excursion. This review resulted in a single recommendation: that additional justification be provided for the inclusion of certain equipment in the facility's design. In response to the recommendation, the WSB project team submitted a revised safety evaluation report justifying the equipment. HSS is responsible for policy development, enforcement, and independent oversight in the areas of health, safety, the environment, and security across DOE. To carry out this responsibility, the office performs appraisals to verify, among other things, that the department's employees, contractors, the public, and the environment are protected from hazardous operations and materials. However, these appraisals are designed to complement, not duplicate, program office oversight and self-assessments.
In particular, HSS conducts visits to DOE sites and reviews a sample of facilities at those sites, including construction activities for new facilities. In addition, according to HSS officials, the office assists DOE's program offices by conducting reviews of documents supporting the safety basis—which is a technical analysis that helps ensure the safe design and operation of DOE's nuclear facilities—of a sample of high hazard nuclear facilities at a DOE site. For example, in response to our October 2008 report, which found that HSS was not conducting reviews of the safety basis of new, high-hazard nuclear facilities, HSS issued a new appraisal process guide in July 2009 that emphasized increased focus on the safety basis at such facilities. Finally, HSS has other oversight and advisory responsibilities related to nuclear safety during critical decision reviews for major DOE facilities. These responsibilities are spelled out in DOE's Order 413.3A, which provides direction on program and project management for the acquisition of capital assets, and include the following actions:

participating on the Energy Systems Acquisition Advisory Board—a body comprising senior DOE officials who advise DOE's Secretarial Acquisition Executive in critical decisions regarding major projects and facilities;

advising the DOE Secretarial Acquisition Executive on environmental, safety, and security matters related to all critical decision approvals;

serving on independent project reviews as a team member at the request of the Secretarial Acquisition Executive or program officials; and

participating on external independent reviews as an observer at OECM's request.

Regarding the MFFF project, HSS has provided limited oversight. According to HSS officials, a more limited amount of oversight is appropriate for the MFFF because of the National Defense Authorization Act of 1999, which gave NRC responsibility for regulating nuclear safety at the MFFF. HSS has conducted some inspection activities at the MFFF, including reviewing reinforced concrete and structural steel at the facility during site visits to SRS in August and September 2009. However, HSS officials said that these activities did not include a review of documents supporting the MFFF's safety basis. In addition, while HSS officials stated that personnel from HSS's predecessor office participated in the critical decisions 2 and 3 reviews for the MFFF project during 2006, HSS was unable to provide any documentation to substantiate this statement. According to department officials, HSS had limited resources for conducting reviews and needed to focus its resources on facilities that were not subject to external regulation. Regarding nuclear safety oversight of the WSB project, which is solely regulated by DOE, we found that HSS had not conducted any oversight activities or participated in any critical decision reviews. Specifically, HSS officials told us that they have not reviewed any documents supporting the WSB's safety basis, nor have they conducted any inspection activities at the WSB construction site. Despite the issuance of HSS's new appraisal process guide, which contains inspection protocols for new and unfinished high-hazard nuclear facilities, an HSS official told us that the office has yet to determine when it will inspect the WSB. An HSS official told us that he was uncertain whether a WSB inspection would occur in 2010 because an ongoing internal DOE review has delayed the development of the office's 2010 inspection schedule.
However, if HSS’s initial visit occurs later than 2010, NNSA will have already completed at least half of the WSB’s construction, according to the project’s schedule. Additionally, HSS did not participate in any of the critical decision reviews for the WSB project because of existing DOE guidelines. Specifically, although the WSB is considered a category 2 (high-hazard) nuclear facility, it is categorized as a nonmajor project. According to DOE’s order, HSS is not required to participate on the review board for a nonmajor project. In addition, neither OECM nor NNSA requested HSS to participate on the project reviews conducted for critical decisions 2 and 3. DOE’s Order 413.3A calls for the NNSA Central Technical Authority to maintain operational awareness regarding complex, high-hazard nuclear operations, and to ensure that DOE’s nuclear safety policies and requirements are implemented adequately and properly. The order also directs the CDNS to support the Central Technical Authority in this effort by participating as part of the Energy Systems Acquisition Advisory Board for major facilities, or similar advisory boards for minor facilities; providing support to both the Central Technical Authority and the Acquisition Executive regarding the effectiveness of efforts to integrate safety into design at each of the critical decisions, and as requested during other project reviews; determining that nuclear facilities have incorporated the concept of defense-in-depth into the facility design process; validating that the integration of design and safety basis activities includes the use of a system engineering approach tailored to the specific needs and requirements of the project; and validating that federal personnel assigned to projects as nuclear safety experts are appropriately qualified. The CDNS’s manual for implementing DOE Order 413.3A provides additional guidance, such as establishing the responsibilities of CDNS staff for evaluating safety activities at nuclear facilities. The manual also directs the head of the CDNS to participate in relevant staff meetings for NNSA projects that are requesting a decision from the Energy Systems Acquisition Advisory Board, an activity that may not be delegated for major projects. However, according to the head of the CDNS, his office has not participated in any safety review activities at the MFFF because NRC is regulating nuclear safety at the facility. The head of the CDNS acknowledged that his office’s approach to overseeing nuclear safety for the MFFF project does not follow the guidance set out in DOE orders and related manuals and has not been formally adopted by NNSA. He stated this approach is necessary to make more efficient use of CDNS resources by focusing oversight activities on facilities regulated entirely by DOE. According to NNSA officials, DOE Order 413.3A does not explicitly exempt the CDNS from overseeing facilities regulated by NRC. Agency officials stated that NNSA is working with the Department to have that exemption inserted into the order during an upcoming revision of the order. NNSA officials stated that, historically, there was never an intention that the CDNS would have responsibilities for facilities regulated by NRC, and that this needs to be clarified in the order. The CDNS has provided some oversight of the WSB project, but according to the head of the CDNS, this oversight has been limited, due in part to difficulty in applying DOE’s guidance to the WSB and staffing issues. 
The CDNS participated as an observer on the advisory board for the WSB project during the project’s critical decisions 2 and 3 processes. However, the head of the CDNS said that he had no record of whether his office participated in or evaluated the results of OECM’s review during the critical decision 2 process, which included several lines of inquiry related to nuclear safety. During the critical decision 3 process for the WSB project, CDNS staff reviewed key project safety documents to determine how the facility would protect against a red oil excursion and determine the qualifications of the federal staff person assigned to the project as a nuclear safety expert. Despite these efforts, the head of the CDNS told us that during the critical decision 3 review, his office experienced some difficulty in implementing the guidance established in DOE orders for the WSB project. The office’s current policy is to review a project’s safety documentation early in the design process and determine whether it conforms to DOE’s relevant safety standard for integrating safety into design and incorporating defense-in-depth. The WSB project had completed its design work before DOE issued its current standard, and before the CDNS implemented a systematic approach to fulfilling its functions. Consequently, the CDNS did not perform a systematic review of WSB safety documentation. The CDNS head characterized the WSB review as being an ad hoc, qualitative assessment of some of the project’s safety documentation. Additionally, the CDNS has not evaluated the qualifications of the nuclear safety expert that replaced the one evaluated as part of the critical decision 3 review. However, according to the head of the CDNS, his office only plans to evaluate the qualifications of new staff during technical reviews of the project, not after every change to the project team’s composition. The head of the CDNS told us that his office has begun developing a more systematic approach to evaluating the design safety of DOE facilities. In addition, he stated that he would like to conduct additional safety reviews of facilities currently in design and construction. However, he said that these efforts have been hampered, in part due to staffing shortages. For example, the CDNS had a staff of 13 people in 2007. As of December 2009, however, only 4 people remained on the CDNS staff due to attrition and NNSA’s decision to transfer some of the personnel into other program offices. The head of the CDNS stated that current staffing levels have led the CDNS to focus its attention on projects that are still in the design phase. He said that it was doubtful that the CDNS would return to the WSB to ensure that safety basis controls are fully integrated during its construction. Concerns over CDNS staffing issues also were raised by the Defense Nuclear Facilities Safety Board. Specifically, in its March 2009 letter to the Secretary of Energy, the safety board noted that reduced staff levels and the transfer of CDNS personnel into NNSA’s program offices have reduced the effectiveness of the office. NNSA is already over 2 years into its construction schedule for the MFFF and expects the facility to become operational by 2016. It has also established a production schedule for fabricating up to 151 MOX fuel assemblies per year at full production. 
However, the agency faces uncertainty as to (1) its ability to supply the MFFF with sufficient quantities of plutonium oxide feedstock to meet its planned production schedule of MOX fuel and (2) the demand for MOX fuel assemblies from potential customers. Regarding the supply of plutonium oxide feedstock, NNSA has only a limited quantity of feedstock on hand to supply the MFFF prior to the start of pit disassembly operations. However, NNSA has not established a definitive strategy for pit disassembly operations, nor does it expect to do so in the near future. As a result, it appears unrealistic that NNSA will be able to meet its current production schedule for MOX fuel without obtaining additional sources of plutonium oxide. NNSA has stated that while it does not plan on expanding the current mission of the ARIES project until LANL demonstrates a sustained production rate over an extended period of time, it is evaluating other options to address this potential shortfall of plutonium oxide. These options include (1) the use of 1.4 metric tons of fuel-grade plutonium already in storage at the K-Area Facility, (2) starting up "limited but sufficient" pit disassembly processes, and (3) adjusting the "quantity and timing" in delivering MOX fuel to potential customers. We have concerns with these options, including the following:

NNSA's use of a "wait-and-see" approach to the ARIES project, and the implications this may have on the ability of the ARIES project to meet its current and future production goals;

the implications of the use of fuel-grade plutonium on the design and safety of the MFFF, and the extent to which DOE has adequately determined how much additional material throughout the DOE complex may be suitable and available for use by the MFFF;

how DOE plans to establish limited pit disassembly processes given the current lack of a definitive strategy for pit disassembly operations; and

how DOE plans to adjust the MOX fuel production schedule, and the implications this may have on the cost and schedule for operating the MFFF and DOE's ability to attract potential MOX fuel customers.

In addition to these concerns, while NNSA's strategy relies on critical technologies currently under development at LANL and other sites for pit disassembly and conversion operations, its current technology maturation plan does not meet DOE's current guidance because the plan is outdated and incomplete. Without a plan that provides more details on the options DOE has mentioned to increase the supply of plutonium oxide, or a comprehensive technology maturation plan, it is uncertain whether NNSA will be able to meet the MFFF's planned production schedule. Regarding obtaining customers for MOX fuel assemblies, our survey of utilities indicated that some utilities might be interested in becoming customers but appear unaware of the incentives NNSA and DOE are offering. Without additional outreach, NNSA may not be able to obtain sufficient customers for the MOX fuel it plans to produce, which would leave the agency with nuclear material it cannot dispose of and the U.S. Treasury with a forgone opportunity for revenue. Although DOE incorporated some aspects of nuclear safety oversight in its management reviews of the MFFF and WSB projects, oversight by HSS and the CDNS has been limited. Specifically, HSS has conducted limited oversight activities at the MFFF but has played no role in the WSB project because of its designation as a nonmajor project.
Conversely, the CDNS has played no role in the MFFF project and has provided some elements of nuclear safety oversight for the WSB project. However, it has not fully met the responsibilities laid out for it by DOE order, in part due to a lack of a formal, standardized approach for reviewing project safety documents. We believe that HSS's exclusion from the WSB project reviews, as well as the limited involvement of the CDNS in the WSB project reviews, creates a gap in oversight of the WSB and similar facilities. We are making the following five recommendations. To address uncertainties associated with NNSA's plans to establish a pit disassembly and conversion capability, we recommend that the Administrator of the National Nuclear Security Administration take the following three actions:

Develop a plan to mitigate the likely shortfall in plutonium oxide feedstock for the MFFF prior to the start of pit disassembly operations. This plan should include, at a minimum, the following five items: (1) the actions needed to ensure that the ARIES project will meet its existing production goals, and the cost and schedule associated with any needed expansion of the project; (2) an assessment of how much additional plutonium material, including fuel-grade plutonium, is available within the DOE complex for use as feedstock for the MFFF; (3) an assessment of the effect on the design and safety of the MFFF from the use of fuel-grade plutonium as feedstock; (4) an assessment of potential changes to the MOX fuel production schedule and the effect of these changes on the cost and schedule for operating the MFFF; and (5) an assessment of the cost and schedule associated with obtaining a limited but sufficient pit disassembly process to produce feedstock for the MFFF.

Develop a technology maturation plan for the pit disassembly and conversion mission that (1) includes all critical technologies to be used in pit disassembly and conversion operations and (2) provides details (including preliminary cost and schedule estimates) on planned testing and development activities to bring each critical technology up to a sufficient level of maturity.

Conduct additional outreach activities to better inform utilities about the MOX fuel program and related incentives.

To ensure that the WSB and similar projects receive consistent nuclear safety oversight that is independent from the DOE program offices, we make the following two recommendations:

The Secretary of Energy should revise DOE Order 413.3A to provide that HSS participate in key project reviews for the WSB and similar high-hazard facilities prior to the beginning of construction activities regardless of their status as nonmajor projects.

The Administrator of NNSA should ensure that the CDNS conducts oversight activities to the extent called for by DOE Order 413.3A and establishes a formal, standardized approach to reviewing safety documentation.

We provided the Department of Energy, the National Nuclear Security Administration, and the Nuclear Regulatory Commission with a draft of this report for their review and comment. In commenting on the draft report, the NNSA Associate Administrator for Management and Administration said that DOE agreed with the report and its recommendations. However, we have concerns about DOE's response to one of our recommendations.
Specifically, in commenting on our recommendation in a draft report that NNSA should develop a plan for expanding the ARIES project to produce additional quantities of plutonium oxide feedstock for the MFFF, DOE stated that NNSA is also evaluating other options for producing additional feedstock material for the MFFF, including (1) the use of 1.4 metric tons of fuel-grade plutonium already in storage at the K-Area Facility, (2) starting up "limited but sufficient" pit disassembly processes, and (3) adjusting the "quantity and timing" in delivering MOX fuel to potential customers. This information was not disclosed to us during our review, and we have a number of concerns about these options. For example, regarding the option to process fuel-grade plutonium, the MFFF was designed to process weapons-grade plutonium, not fuel-grade plutonium. As a result, we are concerned about the implications of this option on the design and safety of the MFFF. We are also concerned about the extent to which DOE has adequately determined how much additional material might be available throughout the DOE complex for use as an alternative source of feedstock for the MFFF. To address these concerns, we revised our conclusions and expanded our original recommendation to ensure that NNSA establishes a plan to more clearly explain its strategy for mitigating the likely shortfall in plutonium oxide feedstock for the MFFF prior to the start of pit disassembly operations. DOE's written comments are reprinted in appendix VI, and NRC's written comments are reprinted in appendix VII. In addition, DOE and NRC provided detailed technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of NNSA, and other interested parties. We will also make copies available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VIII. To assess the cost and schedule performance of the MOX Fuel Fabrication Facility (MFFF) and the Waste Solidification Building (WSB) construction projects, we requested and analyzed earned value management (EVM) data contained in the projects' monthly reports and variance reports, as well as EVM data for the MFFF project contained in Excel spreadsheets. We assessed the adequacy of the MFFF project's use of EVM reporting by using a set of analysis tasks developed by GAO. In addition, we assessed the reliability of the EVM data by evaluating each project's schedule against GAO's scheduling best practices. We have previously identified nine key practices necessary for developing a reliable schedule. These practices are (1) capturing all activities, (2) sequencing activities, (3) establishing the duration of activities, (4) assigning resources to activities, (5) integrating activities horizontally and vertically, (6) establishing the critical path for activities, (7) identifying the float time between activities, (8) performing a schedule risk analysis, and (9) monitoring and updating the schedule. To assist us in these efforts, we contracted with Technomics, Inc., to perform an in-depth analysis of data used in the MFFF's integrated master schedule and the WSB's current schedule.
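The EVM analysis described above rests on a few standard calculations. The following sketch computes the basic cost and schedule variances and performance indices from cumulative reporting data; the input values are hypothetical and do not represent actual MFFF or WSB figures.

    # Standard earned value management (EVM) calculations of the kind applied
    # to the projects' monthly reports. All input values are hypothetical.
    def evm_metrics(bcws, bcwp, acwp, bac):
        """bcws: budgeted cost of work scheduled (planned value)
        bcwp: budgeted cost of work performed (earned value)
        acwp: actual cost of work performed
        bac:  budget at completion"""
        return {
            "cost variance (CV)": bcwp - acwp,        # negative means over cost
            "schedule variance (SV)": bcwp - bcws,    # negative means behind schedule
            "cost performance index (CPI)": bcwp / acwp,
            "schedule performance index (SPI)": bcwp / bcws,
            "estimate at completion (EAC)": bac * acwp / bcwp,  # simple CPI-based estimate
        }

    # Hypothetical cumulative figures, in millions of dollars.
    for name, value in evm_metrics(bcws=850.0, bcwp=800.0, acwp=880.0, bac=4800.0).items():
        print(f"{name}: {value:,.2f}")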
For the MFFF project, we also conducted a review of the project's schedule risk analysis, which was performed during the summer of 2009. We also interviewed officials from the Department of Energy's (DOE) National Nuclear Security Administration (NNSA) and MOX Services regarding their use of EVM data, scheduling practices, and schedule risk analyses for the two projects. Finally, we conducted tours of the MFFF construction project at DOE's Savannah River Site (SRS) and met with officials from the MFFF's contractor, MOX Services, Inc., and from DOE's NNSA and Office of Engineering and Construction Management (OECM). To assess the status of NNSA's plan to establish a pit disassembly and conversion capability to supply plutonium to the MFFF, we reviewed documentation provided by NNSA and its contractors for the Pit Disassembly and Conversion Facility (PDCF), Plutonium Preparation Project, K-Area Complex, and MFFF projects, including project execution plans, project status reports, EVM data, and independent project reviews. We also requested information from NNSA on risks associated with the development of technology used in pit disassembly and conversion. We analyzed these risks using DOE guidance on assessing technology readiness. We also reviewed project plans, testing and development data, and feasibility studies related to the Advanced Recovery and Integrated Extraction System (ARIES) project. We also toured the ARIES facility at DOE's Los Alamos National Laboratory (LANL) in New Mexico and interviewed officials involved in the project. To assess the status of NNSA's plans to obtain customers for mixed-oxide (MOX) fuel from the MFFF, we reviewed project documents, including interest requests communicated to utilities, descriptions of possible incentives for participating in the MOX program, and analyses on the expected return to the government from the sale of MOX fuel. We also interviewed officials from NNSA and the Tennessee Valley Authority (TVA) on current efforts to secure TVA as a customer for MOX fuel, as well as officials from Duke on factors that caused the utility to end its agreement with NNSA's contractor to purchase MOX fuel. To further identify factors affecting utilities' interest in the MOX fuel program, we conducted structured telephone interviews of U.S. nuclear utilities. We chose to interview fuel procurement officers because they would be the most knowledgeable respondents about factors affecting fuel purchasing decisions, including considerations for MOX fuel. We asked fuel procurement officers to provide information on their current interest in MOX fuel, important factors in the consideration of using MOX fuel, and possible incentives for the adoption of MOX fuel. To develop the structured interview questionnaire, GAO social science survey specialists and GAO staff developed a draft of the questionnaire on the basis of survey design principles and information obtained in interviews with DOE and nuclear utility officials. The draft questionnaire underwent a blind review by an additional social science survey specialist and was edited to ensure consistency among questions and clearly defined terms. The revised draft questionnaire was then pretested on three respondents, all of whom were familiar with the nuclear fuel procurement process. During the pretests, respondents were asked about their understanding of the questions, how they would approach constructing their answers, and any editorial concerns.
The draft questionnaire underwent a final revision before being used to conduct the structured telephone interviews. Structured interviews were completed by fuel procurement officials from 22 of the 26 nuclear utilities in the United States, for an overall response rate of 85 percent. All of the interviews were conducted during September and October 2009. Respondents were contacted in advance to schedule a time to complete the interview. One of the 22 responding utilities elected not to answer three of the interview questions, but the other 21 completed the entire questionnaire. Data from the interviews were recorded and entered by the interviewer. A social science analyst performed a 100 percent check of that data entry by comparing the entries with the corresponding questionnaires to ensure that there were no errors. To examine the actions that NRC and DOE have taken to provide independent nuclear safety oversight of the MFFF and WSB construction projects, we reviewed oversight documentation and reports and interviewed oversight officials from both agencies. In relation to NRC's oversight activities, we examined documents related to NRC's approval of the MFFF's construction authorization request; information requests submitted by NRC to MOX Services in support of NRC's ongoing review of the facility's operating license application; and technical analyses conducted by Brookhaven National Laboratory on behalf of NRC examining the likelihood of a red oil excursion at the facility. We also reviewed documents related to NRC's construction inspection program, including inspection guidance and procedures, inspection reports, periodic assessments of MOX Services' performance, and MOX Services' responses to inspection findings. We also interviewed officials from the Nuclear Regulatory Commission's Office of Nuclear Materials Safety and Safeguards and the Region II Division of Construction Projects. In relation to DOE's inspection activities, we reviewed DOE project management and nuclear safety oversight guidance, protocols for conducting facility inspections, inspection reports, and records of decision related to reviews conducted by DOE's Office of Health, Safety, and Security (HSS) and the Chief of Defense Nuclear Safety. We also reviewed reports by the Defense Nuclear Facilities Safety Board on DOE oversight and interviewed Safety Board officials. We interviewed officials from NNSA's Office of Fissile Materials Disposition, HSS's Office of Independent Oversight, and the Chief of Defense Nuclear Safety. We conducted this performance audit from January 2009 to March 2010, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following summarizes our assessment of the MFFF project's integrated master schedule against GAO's nine scheduling best practices.

Capturing all activities: The schedule should reflect all activities, as defined in the program's work breakdown structure, including activities to be performed by both the government and its contractors. The project has provided evidence that the schedule reflects both government and contractor activities, such as the building and testing of software components, as well as key milestones for measuring progress.

Sequencing activities: The schedule should be planned so that it can meet critical program dates. To meet this objective, activities need to be logically sequenced in the order that they are to be carried out.
In particular, activities that must finish prior to the start of other activities (i.e., predecessor activities), as well as activities that cannot begin until other activities are completed (i.e., successor activities), should be identified. By doing so, interdependencies among activities that collectively lead to the accomplishment of events or milestones can be established and used as a basis for guiding work and measuring progress. The schedule should avoid logic overrides and artificial constraint dates that are chosen to create a certain result. Of the approximately 22,000 normal activities, all are logically sequenced—that is, the schedule identifies interdependencies among work activities that form the basis for guiding work and measuring progress.

Establishing the duration of activities: The schedule should realistically reflect how long each activity will take to execute. In determining the duration of each activity, the same rationale, historical data, and assumptions used for cost estimating should be used. Durations should be as short as possible and have specific start and end dates. In particular, durations of longer than 200 days should be minimized. Of the 22,000 normal activities, only 569 have durations of over 200 days. In addition, the schedule includes 38 activities with a remaining duration over 500 days and 10 activities with a remaining duration over 1,000 days (3.9 years).

Assigning resources to activities: The schedule should reflect what resources (e.g., labor, material, and overhead) are needed to do the work, whether all required resources will be available when needed, and whether any funding or time constraints exist. Of the 22,000 normal activities, resources are assigned to 3,124, and 13,988 have no resources assigned. However, the program does have all resources captured in an alternate software package. According to DOE, the current baseline reflects $2.2 billion.

Integrating activities horizontally and vertically: The schedule should be horizontally integrated, meaning that it should link the products and outcomes associated with other sequenced activities. These links are commonly referred to as "handoffs" and serve to verify that activities are arranged in the right order to achieve aggregated products or outcomes. The schedule should also be vertically integrated, meaning that traceability exists among varying levels of activities and supporting tasks and subtasks. Such mapping or alignment among levels enables different groups to work to the same master schedule. Due to concerns about total float values discussed below under "identifying the float time between activities," the schedule has not fully integrated key activities horizontally. The schedule has sufficiently integrated key activities vertically.

Establishing the critical path for activities: Using scheduling software, the critical path—the longest duration path through the sequenced list of key activities—should be identified. The establishment of a program's critical path is necessary for examining the effects of any activity slipping along this path. Potential problems that might occur along or near the critical path should also be identified and reflected in the scheduling of the time for high-risk activities. The project has established a number of critical paths by using the scheduling software to identify activities with low or zero float, as well as by identifying high-risk activities. Project officials said that they conduct weekly meetings to keep track of critical path activities.
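The following minimal sketch shows how scheduling software identifies a critical path on a small invented activity network: a forward pass computes each activity's earliest dates, a backward pass computes its latest dates, and activities with zero total float lie on the critical path. The activity names and durations are illustrative only; the same float calculation underlies the practice discussed next.

    # Critical path method (CPM) sketch on an invented activity network.
    # Durations are in days; predecessor lists define the logic links.
    activities = {  # insertion order below is a valid topological order
        "excavation":   (20, []),
        "foundations":  (45, ["excavation"]),
        "structure":    (90, ["foundations"]),
        "gloveboxes":   (60, ["foundations"]),
        "installation": (30, ["structure", "gloveboxes"]),
    }

    # Forward pass: earliest start (ES) and earliest finish (EF).
    es, ef = {}, {}
    for act, (dur, preds) in activities.items():
        es[act] = max((ef[p] for p in preds), default=0)
        ef[act] = es[act] + dur
    project_finish = max(ef.values())

    # Backward pass: latest finish (LF), propagated from successors to predecessors.
    lf = {act: project_finish for act in activities}
    for act in reversed(list(activities)):
        dur, preds = activities[act]
        for p in preds:
            lf[p] = min(lf[p], lf[act] - dur)

    # Total float = latest start - earliest start; zero float marks the critical path.
    for act, (dur, _) in activities.items():
        total_float = (lf[act] - dur) - es[act]
        marker = "  <-- critical" if total_float == 0 else ""
        print(f"{act}: ES={es[act]}, EF={ef[act]}, total float={total_float}{marker}")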
Identifying the float time between activities: The schedule should identify float time—the time that a predecessor activity can slip before the delay affects successor activities—so that schedule flexibility can be determined. As a general rule, activities along the critical path typically have the least amount of float time. Total float time is the amount of time flexibility an activity has that will not delay the project's completion (if everything else goes according to plan). Total float that exceeds a year is unrealistic and should be minimized. Assessment: Partially. The schedule contains 8,600 activities with total float exceeding 400 days (1.5 years) and 669 activities with total float exceeding 1,000 days (3.9 years). Many of the activities with large total float values are tied to completion milestones, rather than to an intermediate successor.

Performing a schedule risk analysis: A schedule risk analysis should be performed using statistical techniques to predict the level of confidence in meeting a program's completion date. This analysis focuses not only on critical path activities but also on activities near the critical path, since they can potentially affect program status. Project officials conducted a schedule risk analysis during the summer of 2009. This analysis was performed using statistical techniques and focused on critical path and near-the-critical-path activities. Officials said that this analysis has provided important overall project risk information to management.

Monitoring and updating the schedule: The schedule should be continually monitored to determine when forecasted completion dates differ from the planned dates, which can be used to determine whether schedule variances will affect downstream work. Individuals trained in critical path method scheduling should be responsible for ensuring that the schedule is properly updated. Maintaining the integrity of the schedule logic is not only necessary to reflect true status, but is also required before conducting a schedule risk analysis. Project officials said that they update the schedule on a weekly basis. In particular, project controls staff are associated with each engineering group and provide a status update on a weekly basis.

The following summarizes our assessment of the WSB project's current schedule against the same nine practices.

Capturing all activities: The schedule should reflect all activities, as defined in the program's work breakdown structure, including activities to be performed by both the government and its contractors. The project's schedule reflects both government and contractor activities, such as the building and testing of cementation equipment, as well as key milestones for measuring progress.

Sequencing activities: The schedule should be planned so that it can meet critical program dates. To meet this objective, activities need to be logically sequenced in the order that they are to be carried out. In particular, activities that must finish prior to the start of other activities (i.e., predecessor activities), as well as activities that cannot begin until other activities are completed (i.e., successor activities), should be identified. By doing so, interdependencies among activities that collectively lead to the accomplishment of events or milestones can be established and used as a basis for guiding work and measuring progress. The schedule should avoid logic overrides and artificial constraint dates that are chosen to create a certain result. Assessment: Mostly. Of 2,066 activities that are currently in progress or have not yet started, 80 are not logically sequenced—that is, the schedule does not identify interdependencies among work activities that form the basis for guiding work and measuring progress.

Establishing the duration of activities: The schedule should realistically reflect how long each activity will take to execute.
In determining the duration of each activity, the same rationale, historical data, and assumptions used for cost estimating should be used. Durations should be as short as possible and have specific start and end dates. In particular, durations of longer than 200 days should be minimized. Assessment: Mostly. Ninety-eight of the 2,066 activities that are currently in progress or have not yet started have durations of 100 days or more. While durations should be as short as possible and have specific start and end dates to objectively measure progress, project officials provided a valid rationale for the duration of these activities.

Assigning resources to activities: The schedule should reflect what resources (e.g., labor, material, and overhead) are needed to do the work, whether all required resources will be available when needed, and whether any funding or time constraints exist. The schedule reflects $336 million in resource costs. The project's cost baseline is $344 million. According to project officials, they are aware of this discrepancy. They stated that while all of the project resources are reflected in the schedule, a software problem has caused some of these resources to not show up. Project officials are working to correct this software problem.

Integrating activities horizontally and vertically: The schedule should be horizontally integrated, meaning that it should link the products and outcomes associated with other sequenced activities. These links are commonly referred to as "handoffs" and serve to verify that activities are arranged in the right order to achieve aggregated products or outcomes. The schedule should also be vertically integrated, meaning that traceability exists among varying levels of activities and supporting tasks and subtasks. Such mapping or alignment among levels enables different groups to work to the same master schedule. Project officials provided evidence that the schedule is sufficiently integrated.

Establishing the critical path for activities: Using scheduling software, the critical path—the longest duration path through the sequenced list of key activities—should be identified. The establishment of a program's critical path is necessary for examining the effects of any activity slipping along this path. Potential problems that might occur along or near the critical path should also be identified and reflected in the scheduling of the time for high-risk activities. A critical path has been established. The critical path dates are driven by the logic of the schedule.

Identifying the float time between activities: The schedule should identify float time—the time that a predecessor activity can slip before the delay affects successor activities—so that schedule flexibility can be determined. As a general rule, activities along the critical path typically have the least amount of float time. Total float time is the amount of time flexibility an activity has that will not delay the project's completion (if everything else goes according to plan). Total float that exceeds a year is unrealistic and should be minimized. Assessment: Mostly. The schedule contains 1,482 activities that have a float time of over 100 days. However, project officials provided a valid rationale for having activities with large float times.

Performing a schedule risk analysis: A schedule risk analysis should be performed using statistical techniques to predict the level of confidence in meeting a program's completion date. This analysis focuses not only on critical path activities but also on activities near the critical path, since they can potentially affect program status. Project officials stated that they conducted a schedule risk analysis using statistical techniques in July 2008 on the baseline schedule.
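Such a schedule risk analysis is typically performed by Monte Carlo simulation: activity durations are sampled repeatedly from assumed distributions, and the resulting spread of completion dates yields a confidence level for any target date. The sketch below uses an invented serial chain of activities with triangular duration distributions; a real analysis would sample the full logic network, including near-critical paths.

    # Illustrative Monte Carlo schedule risk analysis on an invented serial
    # chain of activities. Durations are (optimistic, most likely, pessimistic) days.
    import random

    ACTIVITIES = [
        ("design",       (40, 60, 100)),
        ("procurement",  (80, 120, 200)),
        ("construction", (300, 365, 500)),
    ]
    TARGET_FINISH = 600   # hypothetical target completion, in days
    TRIALS = 20_000

    random.seed(1)  # fixed seed so the illustration is repeatable
    finishes = sorted(
        sum(random.triangular(low, high, mode) for _, (low, mode, high) in ACTIVITIES)
        for _ in range(TRIALS)
    )

    confidence = sum(f <= TARGET_FINISH for f in finishes) / TRIALS
    p80 = finishes[int(0.8 * TRIALS)]
    print(f"Probability of finishing by day {TARGET_FINISH}: {confidence:.0%}")
    print(f"80 percent confidence finish: day {p80:.0f}")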
The schedule should be continually monitored to determine when forecasted completion dates differ from the planned dates, which can be used to determine whether schedule variances will affect downstream work. Individuals trained in critical path method scheduling should be responsible for ensuring that the schedule is properly updated. Maintaining the integrity of the schedule logic is not only necessary to reflect true status, but is also required before conducting a schedule risk analysis. Project officials conduct weekly meetings to review and update the project schedule.
Appendix IV: Summary Results of Interviews with 22 Utilities
1. How much information have you heard or read about DOE’s MOX fuel program? A great deal of information
2. Does your utility own any reactors that are compatible with AREVA fuel designs?
3. Taking into account your current reactor fleet, what is your utility’s current level of interest in participating in the MOX fuel program? (Choose one)
4. What kinds of reactors owned by your utility do you think would be the most likely candidates for MOX fuel if your utility decided to participate in the MOX fuel program? Please choose only one answer.
5. How important is this factor in your assessment of your utility’s current level of interest in participating in the MOX fuel program?
6. If DOE would sell MOX fuel to your utility at a 15% discounted price relative to the market price for uranium fuel, what do you think your utility’s level of interest in participating in the MOX program would be?
7. If DOE would sell MOX fuel to your utility at a 25% discounted price relative to the market price for uranium fuel, what do you think your utility’s level of interest in participating in the MOX program would be?
8. How important is this factor in your assessment of your utility’s current level of interest in participating in the MOX fuel program?
9. If DOE would cover the costs associated with reactor modifications for compatibility with MOX fuel, what do you think your utility’s level of interest in participating in the MOX program would be?
10. How important are the costs associated with NRC licensing requirements, in terms of monetary outlays and staff time, to your utility’s current level of interest in participating in the MOX fuel program?
11. If DOE would cover the costs associated with obtaining NRC licenses, what do you think your utility’s level of interest in participating in the MOX program would be?
12. Another factor that may affect your level of interest is the ability to test the quality and safety of MOX fuel at your reactor. How important is this factor in your assessment of your utility’s current level of interest in participating in the MOX fuel program?
13. If DOE offered to fund a demonstration program of MOX fuel at your reactor, what do you think your utility’s level of interest in participating in the MOX program would be?
14. Another factor that may affect your level of interest is DOE’s ability to ensure the timely delivery of MOX fuel (i.e., delivery occurs at an interval that meets a reactor’s needed timeline to prepare prior to a refueling outage). How important is this factor in your assessment of your utility’s current level of interest in participating in the MOX fuel program?
15. Another factor that may affect your level of interest is DOE’s ability to provide a compatible backup supply of uranium fuel as assurance in case of delays in the delivery of MOX fuel.
How important is this factor in your assessment of your utility’s current level of interest in participating in the MOX fuel program?
16. Another factor that may affect your level of interest is the storage of MOX fuel at your reactor site for longer than the interval that meets a reactor’s needed timeline prior to a refueling outage. How important is this factor in your assessment of your utility’s current level of interest in participating in the MOX fuel program?
17. Another factor that may affect your level of interest is public opinion regarding the use of MOX fuel. How important is this factor in your assessment of your utility’s current level of interest in participating in the MOX fuel program?
18. DOE’s MOX fuel program relies on annual Congressional appropriations. Another factor that may affect your level of interest is the consistency of funding for the program through 2033. How important is this factor in your assessment of your utility’s current level of interest in participating in the MOX fuel program?
19. In addition to the factors described above, are there any other factors or issues that we have not discussed that affected your assessment of your utility’s current interest in participating in the MOX fuel program? Open-ended responses are not presented in this appendix.
20. How interested in participating do you think your utility would have to be to actually submit such an expression of interest?
21. The MOX Fuel Fabrication Facility is expected to begin delivery of MOX fuel in 2018 and continue supplying fuel through 2032. How confident are you in DOE’s ability to deliver MOX fuel on time throughout this period?
22. How confident are you in DOE’s ability to ensure that a compatible backup supply of uranium fuel is delivered on time in the case of MOX fuel delays?
To be determined. MOX Services’ design control procedures did not require that the method of design verification, or the results, be adequately documented when design verifications were performed.
To be determined. MOX Services failed to provide a technical justification for an engineering change request.
To be determined. MOX Services failed to include a sequential description of work to be performed in implementing documents.
September 11, 2009: MOX Services failed to promptly identify, evaluate, correct, and document conditions adverse to quality, including incorrect placement of a floor and failure to document a rebar deficiency in the corrective action program. MOX Services conducted a root cause analysis for the conditions that led to each of the findings in NRC’s September 11, 2009, inspection report and instituted actions, including improving communications between engineering, construction, and quality control personnel; adopting checklists for changes; and adding additional training for engineering personnel. NRC stated that the actions appeared adequate, and that it will verify implementation during later inspections.
September 11, 2009: MOX Services failed to perform quality-affecting activities in accordance with approved drawings and specifications. MOX Services conducted a root cause analysis for the conditions that led to each of the findings in NRC’s September 11, 2009, inspection report and instituted actions, including improving communications between engineering, construction, and quality control personnel; adopting checklists for changes; and adding additional training for engineering personnel.
NRC stated that the actions appeared adequate, and that it will verify implementation during later inspections.
September 11, 2009: MOX Services failed to provide an adequate documented justification for changes to final designs. MOX Services conducted a root cause analysis for the conditions that led to each of the findings in NRC’s September 11, 2009, inspection report and instituted actions, including improving communications between engineering, construction, and quality control personnel; adopting checklists for changes; and adding additional training for engineering personnel. NRC stated that the actions appeared adequate, and that it will verify implementation during later inspections.
MOX Services failed to correctly translate applicable requirements into design documents. MOX Services initiated corrective actions to address these issues.
Suppliers were found to fail to meet a basic NQA-1 requirement, indicating that MOX Services failed to ensure that services were controlled to conform with specified technical and quality assurance (QA) requirements. NRC determined that MOX Services’ oversight of its contractors was acceptable, despite numerous examples of failures to meet the QA requirements.
Testing documentation for two separate tests did not include the required information. MOX Services revised its documentation procedures to include the necessary information.
On two separate occasions, the contractor failed to incorporate an approved design change in project documents, and later did not verify a field drawing, which resulted in failure to identify that the drawing did not implement design requirements. MOX Services took steps to ensure that documentation was appropriately revised, and added the design change into the corrective action plan to initiate correction before concrete placement.
NRC found that some design reviews did not ensure that design inputs were correctly incorporated into field drawings. MOX Services revised the design drawings to match the as-built drawings after completing an analysis of the structure.
The contractor failed to identify certain conditions adverse to quality assurance plan requirements, including those related to incorrectly poured concrete. MOX Services placed the matter into its corrective action program and took steps to ensure adequate pouring of concrete.
The contractor failed to take corrective action for conditions adverse to quality, including providing adequate resolution to justify the use of reinforcing steel splices that did not meet industry standards. NRC reviewers concluded that MOX Services implemented appropriate actions to control purchase of items from the reinforcing bar vendor.
The contractor failed to ensure that numerous pieces of reinforcing bar met industry standards for bend radius. NRC reviewers concluded that MOX Services implemented appropriate actions to control purchase of items from the reinforcing bar vendor.
NRC found that MOX Services had not followed quality assurance procedures, including, for example, ensuring that a vendor provided clear instructions for operating a concrete batch plant, which resulted in improperly mixed concrete. MOX Services took over concrete testing and took corrective actions, including revising procedures and bringing in independent experts to make recommendations for improvement.
In addition to the individual named above, Daniel Feehan, Assistant Director; Steve Carter; Antoinette Capaccio; Tisha Derricotte; Jennifer Echard; Jason Holliday; and Ben Shouse made key contributions to this report.
| The end of the Cold War left the United States with a surplus of weapons-grade plutonium, which poses proliferation and safety risks. Much of this material is found in a key nuclear weapon component known as a pit. The Department of Energy (DOE) plans to dispose of at least 34 metric tons of plutonium by fabricating it into mixed oxide (MOX) fuel for domestic nuclear reactors. To do so, DOE's National Nuclear Security Administration (NNSA) is constructing two facilities--a MOX Fuel Fabrication Facility (MFFF) and a Waste Solidification Building (WSB)--at the Savannah River Site in South Carolina. GAO was asked to assess the (1) cost and schedule status of the MFFF and WSB construction projects, (2) status of NNSA's plans for pit disassembly and conversion, (3) status of NNSA's plans to obtain customers for MOX fuel from the MFFF, and (4) actions that the Nuclear Regulatory Commission (NRC) and DOE have taken to provide independent nuclear safety oversight. GAO reviewed NNSA documents and project data, toured DOE facilities, and interviewed officials from DOE, NRC, and nuclear utilities. The MFFF and WSB projects both appear to be meeting their cost targets for construction, but the MFFF project has experienced schedule delays. Specifically, the MFFF and WSB projects are on track to meet their respective construction cost estimates of $4.9 billion and $344 million. However, the MFFF project has experienced some delays over the past 2 years, due in part to the delivery of reinforcing bars that did not meet nuclear quality standards. Project officials said that they expect to recover from these delays by the end of 2010 and plan for the start of MFFF operations on schedule in 2016. The WSB project appears to be on schedule. NNSA is reconsidering its alternatives for establishing a pit disassembly and conversion capability. However, it seems unlikely that NNSA will be able to establish this capability in time to produce the plutonium feedstock needed to operate the MFFF, due to the amount of time and effort needed to reconsider alternatives and construct a facility as well as the amount of uncertainty associated with NNSA's current plans. NNSA had previously planned to build a stand-alone facility near the MFFF construction site to disassemble pits and convert the plutonium into a form suitable for use by the MFFF. However, NNSA is now considering a plan to combine this capability with another project at an existing facility at the Savannah River Site. NNSA officials could not estimate when the agency will reach a final decision or establish more definitive cost and schedule estimates for the project. However, NNSA's new alternative depends on an aggressive, potentially unrealistic schedule. In addition, NNSA has not sufficiently planned for the maturation of critical technologies to be used in pit disassembly and conversion operations, some of which are being tested at the Los Alamos National Laboratory in New Mexico. NNSA has one potential customer for most of its MOX fuel, but outreach to other utilities may be insufficient. NNSA is in discussions with the Tennessee Valley Authority to provide MOX fuel for five reactors. NNSA plans to offer several incentives to potential customers, including offering to sell MOX fuel at a discount relative to the price of uranium fuel. 
In interviews with the nation's nuclear utilities, GAO found that while many of the utilities expressed interest in NNSA's proposed incentives, the majority of utilities also expressed little interest in becoming MOX fuel customers. This suggests that NNSA's outreach to utilities may not be sufficient. NRC is currently reviewing the MFFF's license application and has identified several issues related to construction. However, oversight of the MFFF and the WSB by DOE's independent nuclear safety entities has been limited. For example, DOE's Office of Health, Safety, and Security has not conducted any oversight activities or participated in any project reviews of the WSB, despite the WSB's status as a high-hazard nuclear facility. In addition, NNSA's Chief of Defense Nuclear Safety has not conducted any nuclear safety oversight activities for the MFFF project and has not conducted all oversight activities for the WSB project that are required by DOE order. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
DHS serves as the sector-specific agency for 10 of the sectors: information technology; communications; transportation systems; chemical; emergency services; nuclear reactors, material, and waste; postal and shipping; dams; government facilities; and commercial facilities. Other sector-specific agencies are the departments of Agriculture, Defense, Energy, Health and Human Services, the Interior, the Treasury, and the Environmental Protection Agency. (See table 1 for a list of sector-specific agencies and a brief description of each sector). The nine sector-specific plans we reviewed generally met NIPP requirements and DHS’s sector-specific plan guidance; however, the extent to which the plans met this guidance, and therefore their usefulness in enabling DHS to identify gaps and interdependencies across the sectors, varied depending on the maturity of the sector and on how the sector defines its assets, systems, and functions. As required by the NIPP risk management framework (see fig. 1), sector-specific plans are to promote the protection of physical, cyber, and human assets by focusing activities on efforts to (1) set security goals; (2) identify assets, systems, networks, and functions; (3) assess risk based on consequences, vulnerabilities, and threats; (4) establish priorities based on risk assessments; (5) implement protective programs; and (6) measure effectiveness. In addition to these NIPP risk management plan elements outlined above and according to DHS’s sector-specific plan guidance, the plans are also to address the sectors’ efforts to (1) implement a research and development program for critical infrastructure protection and (2) establish a structure for managing and coordinating the responsibilities of the federal departments and agencies—otherwise known as sector-specific agencies—identified in HSPD-7 as responsible for critical-infrastructure protection activities specified for the 17 sectors. Most of the plans included the required elements of the NIPP risk management framework, such as security goals and the methods the sectors expect to use to prioritize infrastructure, as well as to develop and implement protective programs. However, the plans varied in the extent to which they included key information required for each plan element. For example, all of the plans described the threat analyses that the sector conducts, but only one of the plans described any incentives used to encourage voluntary risk assessments, as required by the NIPP. Such incentives are important because a number of the industries in the sectors are privately owned and not regulated, and the government must rely on voluntary compliance with the NIPP. Additionally, although the NIPP called for each sector to identify key protective programs, three of the nine plans did not address this requirement. DHS officials told us that this variance in the plans can, in large part, be attributed to the levels of maturity and cultures of the sectors, with the more mature sectors generally having more comprehensive and complete plans than sectors without similar prior working relationships. For example, the banking and finance and energy sector plans included most of the key information required for each plan element. According to DHS officials, this is a result of these sectors having a history and culture of working with the government to plan and accomplish many of the same activities that are being required for the sector-specific plans. 
Therefore, these sectors were able to create plans that were more comprehensive and developed than those of less mature sectors, such as the public health and health care and agriculture and food sectors. The plans also varied in how comprehensively they addressed their physical, human, and cyber assets, systems, and functions because sectors reported having differing views on the extent to which they were dependent on each of these assets, systems, and functions. According to DHS’s sector-specific plan guidance, a comprehensive identification of such assets is important because it provides the foundation on which to conduct risk analysis and identify the appropriate mix of protective programs and actions that will most effectively reduce the risk to the nation’s infrastructure. Yet, only one of the plans—drinking water and water treatment—specifically included all three categories of assets. For example, because the communications sector limited its definition of assets to networks, systems, and functions, it did not, as required by DHS’s plan guidance, include human assets in its existing security projects and the gaps it needs to fill related to these assets to support the sector’s goals. In addition, the national monuments and icons plan defined the sector as consisting of physical structures with minimal cyber and telecommunications assets because these assets are not sufficiently critical that damaging or destroying them would interfere with the continued operation of the physical assets. In contrast, the energy sector placed a greater emphasis on cyber attributes because it heavily depends on these cyber assets to monitor and control its energy systems. DHS officials also attributed the difference in the extent to which the plans addressed required elements to the manner in which the sectors define their assets and functions. The plans, according to DHS’s Office of Infrastructure Protection officials, are a first step in developing future protective measures. In addition, these officials said that the plans should not be considered reports of actual implementation of such measures. Given the disparity in the plans, it is unclear to what extent DHS will be able to use them to identify gaps and interdependencies across the sectors in order to plan future protective measures. It is also unclear, from reviewing the plans, how far along each sector actually is in identifying assets, setting priorities, and protecting key assets. DHS officials said that to make this determination, they will need to review the sectors’ annual progress reports, due this month, that are to provide additional information on plan implementation as well as identify sector priorities. Representatives of 10 of 32 councils said the plans were valuable because they gave their sectors a common language and framework to bring the disparate members of the sector together to better collaborate as they move forward with protection efforts. For example, the government facilities council representative said that the plan was useful because relationships across the sector were established during its development that have resulted in bringing previously disjointed security efforts together in a coordinated way.
The banking and finance sector’s coordinating council representative said that the plan was a helpful way of documenting the history, the present state, and the future of the sector in a way that had not been done before and that the plan will be a working document to guide the sector in coordinating efforts. Similarly, an energy sector representative said that the plan provides a common format so that all participants can speak a common language, thus enabling them to better collaborate on the overall security of the sector. The representative also said that the plan brought the issue of interdependencies between the energy sector and other sectors to light and provided a forum for the various sectors to collaborate. DHS’s Office of Infrastructure Protection officials agreed that the main benefit of these plans was that the process of developing them helped the sectors to establish relationships between the private sector and the government and among private sector stakeholders that are key to the success of protection efforts. However, representatives of 8 of the 32 councils said the plans were not useful to their sectors because (1) the plans did not represent a true partnership between the federal and private sectors or were not meaningful to all the industries represented by the sector, or (2) the sector had already taken significant protection actions, and thus developing the plan did not add value. The remaining council representatives did not offer views on this issue. Sector representatives for three transportation modes—rail, maritime, and aviation—reported that their sector’s plan was written by the government and that the private sector did not participate fully in the development of the plan or the review process. As a result, the representatives did not believe that the plan was of value to the transportation sector as a whole because it does not represent the interests of the private sector. Similarly, agriculture and food representatives said writing the plan proved to be difficult because of the sector’s diversity and size—more than 2,000,000 farms, one million restaurants, and 150,000 meat processing plants. They said that one of the sector’s biggest challenges was developing a meaningful document that could be used by all of the industries represented. As a result of these challenges, the sector submitted two plans in December 2006 that represented a best effort at the time, but the sector council said it intends to use the remainder of the 2007 calendar year to create a single plan that better represents the sector. In contrast, the coordinating council representative for the nuclear reactors, materials, and waste sector said that because the sector’s security has been robust for a long time, the plan only casts the security of the sector in a different light, and the drinking water and water treatment systems sector representative said that the plan is a “snapshot in time” document for a sector that already has a 30-year history of protection, and thus the plan did not provide added value for the sector. Officials at DHS’s Office of Infrastructure Protection acknowledged that these sectors have a long history of working together and in some cases have undertaken similar planning efforts. However, the officials said that the effort was of value to the government because it now has plans for all 17 sectors and it can begin to use the plans to address the NIPP risk management framework. Representatives of 11 of 32 councils said the review process associated with the plans was lengthy.
They commented that they had submitted their plans in advance of the December 31, 2006, deadline, but had to wait 5 months for the plan to be approved. Eight of them also commented that while they were required to respond within several days to comments from DHS on the draft plans, they had to wait considerably longer during the continuing review process for the next iteration of the draft. For example, a representative of the drinking water and water treatment sector said that the time the sector had to incorporate DHS’s comments into a draft of the plan was too short—a few days—and this led the sector to question whether its members were valued partners to DHS. DHS’s Infrastructure Protection officials agreed that the review process had been lengthy and that the comment periods given to sector officials were too short. DHS officials said this occurred because of the volume of work DHS had to undertake and because some of the sector-specific agencies were still learning to operate effectively with the private sector under a partnership model in which the private sector is an equal partner. The officials said that they plan to refine the process as the sector-specific agencies gain more experience working with the private sector. Conversely, representatives from eight of 32 councils said the review process for the plans worked well, and five of these council representatives were complimentary of the support they received from DHS. The remaining council representatives did not offer views on this topic. For example, an information technology (IT) sector coordinating council representative said that the review and feedback process on their plan worked well and that the Office of Infrastructure Protection has helped tremendously in bringing the plans to fruition. However, sector coordinating council representatives for six sectors also voiced concern that the trusted relationships established between the sectors and DHS might not continue if there were additional turnover in DHS, as has occurred in the past. For example, the representative of one council said they had established productive working relationships with officials in the Offices of Infrastructure Protection and Cyber Security and Communications, but were concerned that these relationships were dependent on the individuals in these positions and that the relationships may not continue without the same individuals in charge at DHS. As we have reported in the past, developing trusted partnerships between the federal government and the private sector is critical to ensure the protection of critical infrastructure. Nine of 32 sector representatives said that their preexisting relationships with stakeholders helped in establishing and maintaining their sector councils, and two noted that establishing the councils had improved relationships. Such participation is critical to well-functioning councils. For example, representatives from the dams, energy, and banking and finance sectors, among others, said that existing relationships continue to help in maintaining their councils. In addition, the defense industrial base representatives said the organizational infrastructure provided by the sector councils is valuable because it allows for collaboration. Representatives from the national monuments and icons sector said that establishing the government sector council has facilitated communication within the sector.
We also reported previously that long-standing relationships were a facilitating factor in council formation and that 10 sectors had formed either a government council or sector council that addressed critical infrastructure protection issues prior to DHS’s development of the NIPP. As a result, these 10 sectors were more easily able to establish government coordinating councils and sector coordinating councils under the NIPP model. Several councils also noted that the Critical Infrastructure Partnership Advisory Council (CIPAC), created by DHS in March 2006 to facilitate communication and information sharing between the government and the private sector, has helped facilitate collaboration because it allows the government and industry to interact without being open to public scrutiny under the Federal Advisory Committee Act. This is important because previously, meetings between the private sector and the government had to be open to the public, hampering the private sector’s willingness to share information. Conversely, seven sector council representatives reported difficulty in achieving and maintaining sector council membership, thus limiting the ability of the councils to effectively represent the sector. For example, the public health and health care sector representative said that getting the numerous sector members to participate is a challenge, and the government representative noted that because of this, the first step in implementing the sector-specific plan is to increase awareness about the effort among sector members to encourage participation. Similarly, due to the size of the commercial facilities sector, participation, while critical, varies among its industries, according to the government council representative. Meanwhile, the banking and finance sector representatives said that the time commitment for private sector members and council leaders makes participation difficult for smaller stakeholders, but getting them involved is critical to an effective partnership. Likewise, the IT sector representatives said engaging some government members in joint council meetings is a continuing challenge because of the members’ competing responsibilities. Without such involvement, the officials said, it is difficult to convince the private sector representatives of the value of spending their time participating on the council. Additionally, obtaining state and local government participation in government sector councils remains a challenge for five sectors. Achieving such participation is critical because these officials are often the first responders in case of an incident. Several government council representatives said that a lack of funding for representatives from these entities to travel to key meetings has limited state and local government participation. Others stated that determining which officials to include was a challenge because of the sheer volume of state and local stakeholders. DHS Infrastructure Protection officials said that the agency is trying to address this issue by providing funding for state and local participation in quarterly sector council meetings and has created a State, Local, Tribal, and Territorial Government Coordinating Council (SLTTGCC)—composed of state, local, tribal, and territorial homeland security advisers—that serves as a forum for coordination across these jurisdictions on protection guidance, strategies, and programs.
Eleven of the 32 council representatives reported continuing challenges with sharing information between the federal government and the private sector. For example, six council representatives expressed concerns about the viability of two of DHS’s main information-sharing tools—the Homeland Security Information Network (HSIN) and the Protected Critical Infrastructure Information (PCII) program. We reported in April 2007 that the HSIN system was built without appropriate coordination with other information-sharing initiatives. In addition, in a strategic review of HSIN, DHS reported in April 2007 that it has not clearly defined the purpose and scope of HSIN and that HSIN has been developed without sufficient planning and program management. According to DHS Infrastructure Protection officials, although they encouraged the sectors to use HSIN, the system does not provide the capabilities that were promised, including providing the level of security expected by some sectors. As a result, they said the Office of Infrastructure Protection is exploring an alternative that would better meet the needs of the sectors. In addition, three council representatives expressed concerns about whether information shared under the PCII program would be protected. Although this program was specifically designed to establish procedures for the receipt, care, and storage of critical infrastructure information submitted voluntarily to the government, the representatives said potential submitters continue to fear that the information could be inadequately protected, used for future legal or regulatory action, or inadvertently released. In April 2006, we reported that DHS faced challenges implementing the program, including assuring the private sector that submitted information will be protected, specifying who will be authorized to have access to the information, and demonstrating to critical infrastructure owners the benefits of sharing the information, in order to encourage program participation. We recommended, among other things, that DHS better (1) define its critical-infrastructure information needs and (2) explain how this information will be used to attract more users. DHS concurred with our recommendations. In September 2006, DHS issued a final rule that established procedures governing the receipt, validation, handling, storage, marking, and use of critical infrastructure information voluntarily submitted to DHS. DHS is in the process of implementing our additional recommendations that it define its critical-infrastructure information needs under the PCII program and better explain how this information will be used to build the private sector’s trust and attract more users. To date, DHS has issued a national plan aimed at providing a consistent approach to critical infrastructure protection, ensured that all 17 sectors have organized to collaborate on protection efforts, and worked with government and private sector partners to complete all 17 sector-specific plans. Nevertheless, our work has shown that sectors vary in terms of how complete and comprehensive their plans are. Furthermore, DHS recognizes that the sectors, their councils, and their plans must continue to evolve. As they do and as the plans are updated and annual implementation reports are provided that begin to show the level of protection achieved, it will be important that the plans and reports add value, both to the sectors themselves and to the government as a whole.
This is critical because DHS is dependent on these plans and reports to meet its mandate to evaluate whether gaps exist in the protection of the nation’s most critical infrastructure and key resources and, if gaps exist, to work with the sectors to address them. Likewise, DHS must depend on the private sector to voluntarily put protective measures in place for many assets. It will also be important that sector councils have representative members and that the sector-specific agencies have buy-in from these members on protection plans and implementation steps. One step DHS could take to implement our past recommendations to strengthen the sharing of information is for the PCII program to better define its critical infrastructure information needs and better explain how this information will be used to build the private sector’s trust and attract more users. As we have previously reported, such sharing of information and the building of trusted relationships are crucial to the protection of the nation’s critical infrastructure. Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other members of the subcommittee may have at any time. For further information on this testimony, please contact Eileen Larence at (202) 512-8777 or by e-mail at [email protected]. Individuals making key contributions to this testimony include Susan Quinlan, Assistant Director; R. E. Canjar; Landis Lindsey; E. Jerry Seigler; and Edith Sohna.
We assessed the sector-specific plans (SSPs) using 8 criteria, consisting of 40 key information requirements. We drew these requirements from the NIPP and from the detailed sector-specific plan guidance issued by DHS. Each criterion reflects a component DHS required for the completion of the SSP. The 8 criteria we used are listed below along with the corresponding 40 key information requirements.
Section 1: Sector Profile and Goals
1. Did the sector include physical and human assets as part of its sector profile?
2. Does the SSP identify any regulations or key authorities relevant to the sector that affect physical and human assets and protection?
3. Does the SSP show the relationships between the sector-specific agency and the private sector, other federal departments and agencies, and state and local agencies that are either owner/operators of assets or provide a supporting role in securing key resources?
4. Does the SSP contain sector-specific goals?
5. Does the SSP communicate the value of the plan to the private sector, other owners, and operators?
Section 2: Identify Assets, Systems, Networks, and Functions
6. Does the SSP include a process for identifying the sector’s assets and functions, both now and in the future?
7. Does the SSP include a process to identify physical and human asset dependencies and interdependencies?
8. Does the SSP describe the criteria being used to determine which assets, systems, and networks are and are not of potential concern?
9. Does the SSP describe how the infrastructure information being collected will be verified for accuracy and completeness?
Section 3: Assess Risks
10. Does the SSP discuss the risk assessment process, including whether the sector’s assessments are mandated by regulation or are primarily voluntary in nature?
11. Does the SSP address whether a screening process (a process to determine whether a full assessment is required) for assets would be beneficial for the sector, and if so, does it discuss the methodologies or tools that would be used to do so?
12. Does the SSP identify how potential consequences of incidents, including worst-case scenarios, would be assessed?
13. Does the SSP describe the relevant processes and methodologies used to perform vulnerability assessments?
14. Does the SSP describe any threat analyses that the sector conducts?
15. Does the SSP describe any incentives used to encourage voluntary performance of risk assessments?
Section 4: Prioritize Infrastructure
16. Does the SSP identify the party responsible for conducting a risk-based prioritizing of the assets?
17. Does the SSP describe the process, current criteria, and frequency for prioritizing sector assets?
18. Does the SSP provide a common methodology for comparing both physical and human assets when prioritizing a sector’s infrastructure?
Section 5: Develop and Implement Protective Programs
19. Does the SSP describe the process that the SSA will use to work with asset owners to develop effective long-term protective plans for the sector’s assets?
20. Does the SSP identify key protective programs (and their role) in the sector’s overall risk management approach?
21. Does the SSP describe the process used to identify and validate specific program needs?
22. Does the SSP include the minimum requirements necessary for the sector to prevent, protect against, respond to, and recover from an attack?
23. Does the SSP address implementation and maintenance of protective programs for assets once they are prioritized?
24. Does the SSP address how the performance of protective programs is monitored by the sector-specific agencies and security partners to determine their effectiveness?
Section 6: Measure Progress
25. Does the SSP explain how the SSA will collect, verify, and report the information necessary to measure progress in critical infrastructure/key resources protection?
26. Does the SSP describe how the SSA will report the results of its performance assessments to the Secretary of Homeland Security?
27. Does the SSP call for the development and use of metrics that will allow the SSA to measure the results of activities related to assets?
28. Does the SSP describe how performance metrics will be used to guide future decisions on projects?
29. Does the SSP list relevant sector-level implementation actions that the SSA and its security partners deem appropriate?
Section 7: Research and Development for Critical Infrastructure/Key Resources Protection
30. Does the SSP describe how technology development is related to the sector’s goals?
31. Does the SSP identify those sector capability requirements that can be supported by technology development?
32. Does the SSP describe the process used to identify physical and human sector-related research requirements?
33. Does the SSP identify existing security projects and the gaps it needs to fill to support the sector’s goals?
34. Does the SSP identify which sector governance structures will be responsible for R&D?
35. Does the SSP describe the criteria that are used to select new and existing initiatives?
Section 8: Manage and Coordinate SSA Responsibilities
36. Does the SSP describe how the SSA intends to staff and manage its NIPP responsibilities (e.g., creation of a program management office)?
37. Does the SSP describe the processes and responsibilities of updating, reporting, budgeting, and training?
38. Does the SSP describe the sector’s coordinating mechanisms and structures?
39. Does the SSP describe the process for developing the sector-specific investment priorities and requirements for critical infrastructure/key resource protection?
40. Does the SSP describe the process for information sharing and protection? | As Hurricane Katrina so forcefully demonstrated, the nation's critical infrastructures--both physical and cyber--have been vulnerable to a wide variety of threats. Because about 85 percent of the nation's critical infrastructure is privately owned, it is vital that public and private stakeholders work together to protect these assets. The Department of Homeland Security (DHS) is responsible for coordinating a national protection strategy and has promoted the formation of government and private councils for the 17 infrastructure sectors as a collaborating tool. The councils, among other things, are to identify their most critical assets, assess the risks they face, and identify protective measures in sector-specific plans that comply with DHS's National Infrastructure Protection Plan (NIPP). This testimony is based primarily on GAO's July 2007 report on the sector-specific plans and the sector councils. Specifically, it addresses (1) the extent to which the sector-specific plans meet requirements, (2) the council members' views on the value of the plans and DHS's review process, and (3) the key success factors and challenges that the representatives encountered in establishing and maintaining their councils. In conducting the previous work, GAO reviewed 9 of the 17 draft plans and conducted interviews with government and private sector representatives of the 32 councils, 17 government and 15 private sector. Although the nine sector-specific plans GAO reviewed generally met NIPP requirements and DHS's sector-specific plan guidance, eight did not describe any incentives the sector would use to encourage owners to conduct voluntary risk assessments, as required by the NIPP. Most of the plans included the required elements of the NIPP risk management framework. However, the plans varied in how comprehensively they addressed not only their physical assets, systems, and functions, but also their human and cyber assets, systems, and functions, a requirement in the NIPP, because the sectors had differing views on the extent to which they were dependent on each of these assets. A comprehensive identification of all three categories of assets is important, according to DHS plan guidance, because it provides the foundation on which to conduct risk analyses and identify appropriate protective actions. Given the disparity in the plans, it is unclear to what extent DHS will be able to use them to identify security gaps and critical interdependencies across the sectors. DHS officials said that to determine this, they will need to review the sectors' annual reports. Representatives of the government and sector coordinating councils had differing views regarding the value of sector-specific plans and DHS's review of those plans.
While 10 of the 32 council representatives GAO interviewed reported that they saw the plans as being useful for their sectors, representatives of eight councils disagreed because they believed the plans either did not represent a partnership among the necessary key stakeholders, especially the private sector, or were not valuable because the sector had already progressed beyond the plan. In addition, representatives of 11 of the 32 councils felt the review process was too lengthy, but 8 thought the review process worked well. The remaining council representatives did not offer views on these issues. As GAO reported previously, representatives continued to report that preexisting relationships helped them establish and maintain their sector councils. However, seven of the 32 representatives reported continuing difficulty achieving and maintaining sector council membership, thus limiting the ability of the councils to effectively represent the sector. Eleven council representatives reported continuing difficulties sharing information between the public and private sectors, and six council representatives expressed concerns about the viability of the information system DHS intends to rely on to share information about critical infrastructure issues with the sectors or the effectiveness of the Protected Critical Infrastructure Information program--a program that established procedures for the receipt, care, and storage of information submitted to DHS. GAO has outstanding recommendations addressing this issue; DHS generally agreed with them and is in the process of implementing them. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
During World War I, Public Health Service hospitals treated returning veterans and, at the end of the war, several military hospitals were transferred to the Public Health Service to enable it to continue treating injured soldiers. In 1921, those hospitals were transferred to the newly established Veterans’ Bureau. By the early 1990s, the veterans’ health care system had grown into one of our nation’s largest direct providers of health care, comprising more than 172 hospitals. In October 1995, VA began to transform its health care system from a hospital-dominated model to one that provides a full range of health care services. A key feature of this transformation was the development of community-based, integrated networks of VA and non-VA providers that could deliver health care closer to where veterans live. At that time, about half of all veterans lived more than 25 miles from a VA hospital; about 44 percent of those admitted to VA hospitals lived more than 25 miles away. In making care more proximate to veterans’ homes, VA also began shifting the delivery of health care from high-cost hospital settings to lower-cost outpatient settings. To facilitate VA’s transformation, the Congress passed the Veterans’ Health Care Eligibility Reform Act of 1996, which furnishes tools that VA said were key to a successful transformation, including: new eligibility rules that allow VA to treat veterans in the most appropriate setting; a uniform benefits package to provide a continuum of services; and an expanded ability to purchase services from private providers. Today, VA operates over 800 delivery locations nationwide, including over 600 community-based outpatient clinics and 162 hospitals. VA’s delivery locations are organized into 21 geographic areas, commonly referred to as networks. Each network includes a management office responsible for making basic budgetary, planning, and operating decisions concerning the delivery of health care to its veterans. Each office oversees between 5 and 11 hospitals, as well as many community-based outpatient clinics. To promote more cost-effective use of resources, VA is authorized to share resources with other federal agencies to avoid unnecessary duplication and overlap of activities. VA and the Department of Defense (DOD) have entered into agreements to exchange inpatient, outpatient, and specialty care services as well as support services. Local facilities also have arranged to jointly purchase pharmaceuticals, laboratory services, medical supplies, and equipment. Also, VA has been authorized to enter into agreements with medical schools and their teaching hospitals. Under these agreements, VA hospitals provide training for medical residents, and appoint medical school faculty as VA staff physicians to supervise resident education and patient care. Currently, about 120 medical schools and teaching hospitals have affiliation agreements with VA. About 28,000 medical residents receive some of their training in VA facilities every year. Veterans’ eligibility for health care also has evolved over time. Before 1924, VA health care was available only to veterans who had wounds or diseases incurred during military service. Eligibility for hospital care was gradually extended to wartime veterans with lower incomes and, in 1973, to peacetime veterans with lower incomes. By 1986, all veterans were eligible for hospital and outpatient care for service-connected conditions as well as for conditions unrelated to military service.
VA implemented an enrollment process in 1998 that was established primarily as a means of prioritizing care if sufficient resources were not available to serve all veterans seeking care. About 6.2 million veterans had enrolled by the end of fiscal year 2002. In contrast, the overall veteran population is estimated to be about 25 million. VA projects a decline in the total veteran population over the next 20 years while the enrolled population is expected to decline more slowly as shown in table 1. In addition to health care, VA provides disability benefits to those veterans with service-connected conditions. Also, VA provides pension benefits to low-income wartime veterans with permanent and total disabilities unrelated to military service. Further, VA provides compensation to survivors of service members who died while on active duty. Disabled veterans are entitled to cash benefits whether or not employed and regardless of the amount of income earned. The cash benefit level is based on the percentage evaluation, commonly called the “disability rating,” that represents the average loss in earning capacity associated with the severity of physical and mental conditions. VA uses its Schedule for Rating Disabilities to determine which disability rating to assign to a veteran’s particular condition. VA’s ratings are in 10 percent increments, from 0 to 100 percent. Although VA generally does not pay disability compensation for disabilities rated at 0 percent, such a rating would make veterans eligible for other benefits, including health care. About 65 percent of veterans receiving disability compensation have disabilities rated at 30 percent or lower; about 8 percent are 100 percent disabled. Basic monthly payments range from $104 for a 10 percent disability to $2,193 for a 100 percent disability. To process claims for these benefits, VA operates 57 regional offices. These offices made almost 800,000 rating-related decisions in fiscal year 2002. Regional office personnel develop claims, obtain the necessary information to evaluate claims, and determine whether to grant benefits. In doing so, they consider veterans’ military service records, medical examination and treatment records from VA health care facilities, and treatment records from private providers. Once claims are developed, the claimed disabilities are evaluated, and ratings are assigned based on degree of disability. Veterans with multiple disabilities receive a single, composite rating. For veterans claiming pension eligibility, the regional office also determines if the veteran served in a period of war, is permanently and totally disabled for reasons unrelated to military service, and meets the income thresholds for eligibility. Over the past several years, VA has done much to ensure that veterans have greater access to health care. Despite this, travel times and waiting times are still problems. Another problem faced by aging veterans is potentially inequitable access to nursing home care. The substantial increase in VA health care delivery locations has enhanced access for enrolled veterans in need of primary care, although many still travel long distances for primary care. In addition, many who need to consult with specialists or require hospitalization often travel long distances to receive care. Nationwide, for example, more than 25 percent of veterans enrolled in VA health care—over 1.7 million—live over 60 minutes driving time from a VA hospital. 
These veterans would have to travel a long distance if they require admissions or consultations with specialists, such as urologists or cardiologists, located at the closest VA hospitals. In October 2000, VA established the Capital Asset Realignment for Enhanced Services (CARES) program, which has a goal of improving veterans’ access to acute inpatient care, primary care, and specialty care. CARES is intended to identify how well the geographic distribution of VA health care resources matches projected needs and the shifts necessary to better align resources and needs. Toward that end, VA has divided, for analytical purposes, its 21 networks into 76 geographic areas—groups of counties—in order to determine the extent to which enrollees’ travel times exceed VA’s access standards. For example, as part of CARES, VA has mandated that the 21 network directors identify ways to ensure that at least 65 percent of the veterans in their areas are within VA’s access standards for hospital care—60 minutes for veterans residing in urban counties, 90 minutes for those in rural counties, and 120 minutes for those in highly rural counties. VA has identified 25 areas that do not meet this 65 percent target. In these areas, over 900,000 enrolled veterans have travel times that exceed VA’s access standards. In addition, as part of CARES, VA identified 51 other areas where access enhancements may be addressed at the discretion of network directors, given that at least 65 percent of all enrolled veterans in those areas have travel times that meet VA’s standard. In these areas, about 875,000 enrolled veterans have travel times that exceed VA’s standards. By contrast, VA has not mandated that network directors enhance access for veterans who travel long distances to consult with specialists. Unlike hospital care, VA has not established standards for acceptable travel times for specialty care. Currently, nearly 2 million enrolled veterans live more than 60 minutes driving time from specialists located at the closest VA hospital. When considering ways to enhance access for veterans, VA network directors may consider three basic options: construct a new VA-owned and operated delivery location; negotiate a sharing agreement with another federal entity, such as a DOD facility; or contract with nonfederal health care providers. Shifting the delivery of health care closer to where veterans live may have significant ramifications for other stakeholders, such as medical schools. For example, within the 76 areas, there are smaller geographic areas that contain large concentrations of enrollees outside VA’s access standards—10,000 or more—who live closer to non- VA hospitals than they do to the nearest VA hospitals. Such enrolled veterans could account for significant portions of the hospital workload at the nearest VA delivery locations. Therefore, a shifting of this workload closer to veterans’ residences could reduce the size of residency training opportunities at existing VA delivery locations. Enhancing veterans’ access can also have significant ramifications regarding the use of VA’s existing resources. Currently, VA has most of its resources dedicated to costs associated with its existing hospitals and other infrastructure, including clinical and support staff, at its major health care delivery locations. Reducing veterans’ travel times through contracting with providers in local communities or other options could reduce demand for services at VA’s existing, more distant delivery locations. 
Efficient operation of those locations could become more difficult given the smaller workloads in relation to the operating costs of existing hospitals. We also have found that excessive waiting times for VA outpatient care persist—a situation that we have reported on for the last decade. For example, in August 2001, we reported that veterans frequently wait longer than 30 days—VA’s access standard—for appointments with specialists at VA delivery locations in Florida and other areas of the country. More recently, a Presidential task force reported in its July 2002 interim report that veterans are finding it increasingly difficult to gain access to VA care in selected geographic regions. For example, the task force found that the average waiting time for a first outpatient appointment in Florida, which has a large and growing veteran population, is over a year. Although there is general consensus that waiting times are excessive, we reported, and VA agreed, that its data did not reliably measure the scope of the problem. To improve its data, VA is in the process of developing an automated system to more systematically measure waiting times. VA has also taken several actions to mitigate the impact of long waiting times, including limiting enrollment of lower-priority veterans and granting priority for appointments to certain veterans with service-connected disabilities. VA faces an impending challenge, however, in reducing the time veterans wait for appointments. Specifically, VA’s current projections of acute health care workload indicate a surge in demand for acute health care services over the next 10 years. For example, specialty outpatient demand nationwide is expected to almost double by fiscal year 2012. VA’s long-term care infrastructure, including the nursing homes it operates, was developed when the veteran population was distributed differently across regions. Consequently, the location of VA’s current infrastructure may not provide equitable access across the country. In addition, when VA developed its long-term care infrastructure, it relied more on nursing home care and less on home and community-based services than is current practice. To help update VA’s long-term care policy, the Federal Advisory Committee on the Future of VA Long-Term Care recommended in 1998 that VA maintain its nursing home capacity at the level of that time but meet the growing veteran demand for long-term care by greatly expanding home and community-based service capacity. The House Committee on Veterans’ Affairs has expressed concern that VA needs to maintain its nursing home capacity workload at 1998 levels. VA currently operates its own nursing home care units in 131 locations, according to VA headquarters officials. In addition, it pays for nursing home care under contract in community nursing homes. VA also pays part of the cost of care for veterans at state veterans’ nursing homes and a portion of the construction costs for some state veterans’ nursing homes. In all these settings combined, VA’s nursing home workload—average daily census—has declined by more than 1,800 since 1998. See table 2. The biggest decline has been in community nursing home care, where the average daily census was 31 percent lower in 2002 than in 1998. Average daily census in VA-operated nursing homes also declined by 11 percent during this period.
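Average daily census, the workload measure used in table 2 and in the figures that follow, is total patient-days in a period divided by the number of days in that period. A minimal sketch of the arithmetic, using hypothetical patient-day counts rather than VA data:

```python
# Illustrative only: average daily census (ADC) = patient-days / days in the
# period, plus the percent change between two fiscal years. The patient-day
# counts below are hypothetical, not VA data.
def average_daily_census(patient_days, days_in_period=365):
    return patient_days / days_in_period

adc_fy1998 = average_daily_census(4_745_000)  # hypothetical FY 1998 patient-days
adc_fy2002 = average_daily_census(4_223_000)  # hypothetical FY 2002 patient-days
pct_change = (adc_fy2002 - adc_fy1998) / adc_fy1998
print(f"FY 1998 ADC: {adc_fy1998:,.0f}; FY 2002 ADC: {adc_fy2002:,.0f}; "
      f"change: {pct_change:.0%}")
```

The same arithmetic underlies the state veterans’ nursing home figures discussed next.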
A 9 percent increase in state veterans’ nursing homes’ average daily census offsets some of the decline in average daily census in community and VA-operated nursing homes. VA headquarters officials told us that the decline in nursing home average daily census could be the result of a number of factors. These factors include greater emphasis on shorter-term care for post-acute rehabilitation; more home and community-based services that obviate the need for nursing home care; assistance for veterans in obtaining placement in community nursing homes where care is financed by other payers, such as Medicaid, when appropriate; and difficulty recruiting enough nursing staff to operate all beds in some VA-operated nursing homes. VA policy provides networks broad discretion in deciding what nursing home care to offer patients to whom VA is not required to provide such care under the provisions of the Veterans Millennium Health Care and Benefits Act of 1999. Networks’ use of this discretion appears to result in inequitable access to nursing home care. For example, some networks have policies to provide long-term nursing home care to these veterans if resources allow, while other networks do not have such policies. As a result, veterans who need long-term nursing home care may have access to that care in some networks but not in others. This is significant because about two-thirds of VA’s current nursing home users are recipients of discretionary nursing home care. VA intended to address veterans’ access to nursing home care as part of its larger CARES initiative to project future health care needs and determine how to ensure equitable access. However, initial projections of nursing home need exceeded VA’s current nursing home capacity. VA said that the projections did not reflect its long-term care policy and decided not to include nursing home care in its CARES initiative. Instead, VA officials told us that they have developed a separate process to provide projections of needs for nursing home care and for home and community-based services. These officials expect that new projections will be developed for consideration by the Under Secretary for Health by July 2003. VA officials also told us that VA will use this information in its strategic planning initiatives to address nursing home and other long-term care issues at the same time that VA implements its CARES initiatives. Because VA has not systematically examined its nursing home policies and access to care, veterans have no assurance that VA’s $2 billion nursing home program is providing equitable access to care to those who need it. This is particularly important given the aging of the veteran population. The veteran population most in need of nursing home care—veterans 85 years old or older—is expected to increase from almost 640,000 to over 1 million by 2012 and remain at about that level through 2023. Until VA develops a long-term care projection model consistent with its policy, VA will not be able to determine whether its nursing home care units in 131 locations and the other nursing home care services it pays for provide equitable access to veterans now or in the future. In recent years, VA has made an effort to realign its capital assets, primarily buildings, to better serve veterans’ needs as well as to institute other needed efficiencies. Despite this, many of VA’s buildings remain underutilized, and patient support services are not always provided efficiently.
VA could make better use of its resources by taking steps to partner with other public and private providers, purchase care from such providers, replace obsolete assets with modern ones, consolidate duplicative care provided by multiple locations serving the same geographic areas where it would be cost-effective to do so, and assess various management options to improve the efficiency of patient support services. VA has a large and aged infrastructure, which is not well aligned to efficiently meet veterans’ needs. In recent years, as a result of new technology and treatment methods, VA has shifted delivery from inpatient to outpatient settings in many instances and shortened lengths of stay when hospitalization was required. Consequently, VA has excess inpatient capacity at many locations. For example, in August 1999, we reported that VA owned about 4,700 buildings, over 40 percent of which had operated for more than 50 years, and almost 200 of which were built before 1900. Many organizations in the facilities management environment consider 40 to 50 years to be the useful life of a building. Moreover, VA used fewer than 1,200 of these buildings (about one-fourth of the total) to deliver health care services to veterans. The rest were used primarily to support health care activities, although many had tenants or were vacant. In addition, most delivery locations had mission-critical buildings that VA considered functionally obsolete. These included, for example, inpatient rooms not up to industry standards concerning patient privacy; outpatient clinics with undersized examination rooms; and buildings with safety concerns, such as vulnerability to earthquakes. As part of VA’s transformation, begun in 1995, its networks implemented hundreds of management initiatives that significantly enhanced their overall efficiency and effectiveness. The success of these strategies—shifting inpatient care to more appropriate settings, establishing primary care in community clinics, and consolidating services in order to achieve economies of scale—significantly reduced utilization at most of VA’s inpatient delivery locations. For example, VA operated about 73,000 hospital beds in fiscal year 1995. In 1998, veterans used on average fewer than 40,000 hospital beds per day, and by 2001 usage had further declined to about 16,000 hospital beds per day. In 1999, we concluded that VA’s existing infrastructure could be the biggest obstacle confronting VA’s ongoing transformation efforts. During a hearing in 1999 before this Committee’s Subcommittee on Health, we pointed out that, although VA was addressing some realignment issues, it did not have a plan in place to identify buildings that are no longer needed to meet veterans’ health care needs. We recommended that VA develop a market-based plan for restructuring its delivery of health care in order to reduce funds spent on underutilized or inefficient buildings. In turn, those funds could be reinvested to better serve veterans’ needs by placing health care resources closer to where they live. To do so, we recommended that VA comply with guidance from the Office of Management and Budget. The guidance suggested that market-based assessments include (1) assessing a target population’s needs, (2) evaluating the capacity of existing assets, (3) identifying any performance gaps (excesses or deficiencies), (4) estimating assets’ life cycle costs, and (5) comparing such costs to other alternatives for meeting the target population’s needs.
Alternatives include (1) partnering with other public or private providers, (2) purchasing care from such providers, (3) replacing obsolete assets with modern ones, or (4) consolidating services duplicated at multiple locations serving the same market. During the 1999 hearing, the subcommittee chairman urged VA to implement our recommendations, and VA agreed to do so. In August 2002, VA announced the results of a pilot study in its Great Lakes network, which includes Chicago and other locations. VA selected three realignment strategies in this network—consolidation of services at existing locations, opening of new outpatient clinics, and closure of one inpatient location. Currently, VA is analyzing ways to realign health care delivery in its 20 remaining networks. VA expects to issue its plans by the end of 2003. To date, VA has projected veterans’ demand for acute health care services through fiscal year 2022, evaluated available capacity at its existing delivery locations, and targeted geographic areas where alternative delivery strategies could allow VA to operate more efficiently and effectively while ensuring access consistent with its standards for travel time. For example, VA has the opportunity to achieve efficiencies through economies of scale in 30 geographic areas where two or more major health care delivery locations that are in close proximity provide duplicative inpatient and outpatient health care services. VA may also achieve similar efficiencies in 38 geographic areas where two or more tertiary care delivery locations are in close proximity. VA considers delivery locations to be in close proximity if they are within 60 miles of one another for acute care and within 120 miles for tertiary care. VA may achieve further efficiencies in 28 geographic areas where existing delivery locations have low acute medicine workloads, which VA has defined as serving fewer than 40 hospital patients per day. VA also identified more than 60 opportunities for partnering with DOD to better align the infrastructure of both agencies. VA faces difficult challenges when attempting to improve service delivery efficiencies. For example, service consolidations can have significant ramifications for stakeholders, such as medical schools and unions, primarily due to shifting of workload among locations and workforce reductions. Understandably, medical schools are reluctant to change long-standing business relationships involving, among other things, the training of medical residents. For example, VA tried for 5 years to reach agreement on how to consolidate clinical services at two of Chicago’s four major health care delivery locations before succeeding in August 2002. This is because such restructuring required two medical schools to use the same location to train residents, a situation that neither supported. Unions, too, have been reluctant to support planning decisions that result in a restructuring of services. This is because operating efficiencies that result from the consolidation of clinical services into a single location could also result in staffing reductions for such support services as grounds maintenance, food preparation, and housekeeping. For example, as part of its ongoing transformation, VA proposed to consolidate food preparation services of 9 delivery locations into a single location in New York City in order to operate more efficiently.
Two unions’ objections, however, slowed VA’s restructuring, although VA and the unions subsequently agreed on a way to complete it. VA also faces difficult decisions concerning the need for and sizing of capital investments, especially in locations where future workload may increase over the short term before steadily declining. In large part, such declines are attributable to the expected nationwide decrease in the overall veteran population by more than one-third by 2030; in some areas, veteran population declines are expected to be steeper. It may be in VA’s best interests to partner with other public or private providers for services to meet veterans’ demands rather than risk making a major capital investment that would be underutilized in the latter stages of its useful life. In cases when VA’s realignment results in buildings that are no longer needed to meet veterans’ health care needs, VA faces other difficult decisions regarding whether to retain or dispose of these buildings. VA has several options, including leasing, demolishing, or transferring buildings to the General Services Administration (GSA), which has the authority to dispose of excess or surplus federal property. When there is no leasing potential, VA faces potentially high demolition costs as well as uncertain site preparation costs associated with the transfer of buildings to GSA. Given that such costs involve the use of health care resources, ensuring that disposal decisions are based on systematic analyses of costs and benefits to veterans poses another realignment challenge. The challenge of dealing with a misaligned infrastructure is not unique to VA. In fact, we identified federal real property management as a high-risk area in January 2003. For the federal government overall and VA in particular, technological advancements, changing public needs, opportunities for resource sharing, and security concerns will call for a new way of thinking about real property needs. In VA’s case, it has recognized the critical need to better manage its buildings and land and is in the process of implementing CARES to do so. VA has the opportunity to lead other federal agencies with similar real property challenges. However, VA and other agencies have in common persistent problems, including competing stakeholder interests in real property decisions. Resolving these problems will require high-level attention and effective leadership. As VA continues to transform itself from an inpatient- to an outpatient-based health care system, it must find more efficient, systemwide ways of providing patient care support services, such as consolidation of services and the use of competitive sourcing. For example, VA’s shift in emphasis from inpatient to outpatient health care delivery has significantly reduced the need for inpatient care support services, such as food and laundry services. To make better use of resources, some VA inpatient facilities have consolidated food production locations, used lower-cost Veterans Canteen Service (VCS) workers instead of higher-paid Nutrition and Food Service workers to provide inpatient food services, or contracted out for the provision of these services. Some VA facilities have also consolidated two or more laundries into a single location, contracted for labor to operate VA laundries, or contracted out laundry services to commercial organizations. VA needs to systematically explore the further use of such options across its health care system.
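At bottom, each such location-level assessment compares the estimated annual cost of the feasible options and flags the cheapest. The sketch below illustrates only that comparison; the options and dollar figures are hypothetical, and actual studies also weigh service quality, transition costs, and contract risk:

```python
# Illustrative only: picks the lowest-cost way to provide a support service
# (e.g., food or laundry) at one location. Options and annual costs below
# are hypothetical, not VA figures.
def most_cost_effective(options):
    """Return the (option, annual cost) pair with the lowest estimated cost."""
    return min(options.items(), key=lambda item: item[1])

laundry_options = {
    "keep in-house": 2_400_000,                    # current annual operating cost
    "consolidate with nearby location": 1_750_000,
    "contract out after competitive bid": 1_900_000,
}

option, cost = most_cost_effective(laundry_options)
print(f"Lowest-cost option: {option} at ${cost:,.0f} per year")
```

Repeated across every food and laundry location, comparisons of this kind underpin the consolidation and competitive sourcing savings estimates discussed next.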
In November 2000, we recommended that VA conduct studies at all of its food and laundry service locations to identify and implement the most cost-effective way to provide these services at each location. At that time, we identified 63 food production locations that could be consolidated into 29, and we estimated that VA could save millions of dollars annually by consolidating both food and laundry production locations. VA may also be able to reduce its food and laundry service costs at some facilities through competitive sourcing—through which VA would determine whether it would be more cost-effective to contract out these services or provide them in-house. VA must ensure, however, that if it decides to contract for services, contract terms on payments and service quality standards will continue to be met. For example, we found that weaknesses in the monitoring of VA’s Albany, New York, laundry contract appear to have resulted in overpayments, reducing potential savings. In August 2002, VA issued a directive establishing policy and responsibilities for its networks to follow in implementing a competitive sourcing analysis to compare the cost of contracting and the cost of in-house performance to determine who can do the work most cost-effectively. VA has announced that, as part of the President’s Management Agenda, it will complete studies of competitive sourcing of 55,000 positions by 2008. VA plans to complete studies of competitive sourcing for all its laundry positions by the end of calendar year 2003. Similar initiatives for food services and other support services are in the planning stages at VA. Overall, VA’s plan for competitive sourcing shows promise. However, VA has not yet established a timeline for implementing an assessment of competitive sourcing and the other options we recommended for all its inpatient food service locations. Until VA completes these assessments and takes action to reduce costs, it may be paying more for inpatient food services than required and as a result have fewer resources available for the provision of health care to veterans. We recognize that one of the options we recommended that VA assess, the competitive sourcing process set forth in the Office of Management and Budget (OMB) Circular A-76, historically has been difficult to implement. Specifically, there are concerns in both the public and private sectors regarding the fairness of the competitive sourcing process and the extent to which there is a “level playing field” for conducting public-private competitions. It was against this backdrop that the Congress in 2001 mandated that the Comptroller General establish a panel of experts to study the process used by the government to make sourcing decisions. The Commercial Activities Panel that the Comptroller General convened conducted a yearlong study and heard repeatedly about the importance of competition and its central role in fostering economy, efficiency, and continuous performance improvement. The panel made a number of recommendations for improving sourcing policies and processes. As part of the administration’s efforts to implement the recommendations of the Commercial Activities Panel, OMB published proposed changes to Circular A-76 for public comment in November 2002.
In our comments on the proposal to the Director of OMB this past January, we noted the absence of a link between sourcing policy and agency missions, unnecessarily complicated source selection procedures, certain unrealistic time frames, and insufficient guidance on calculating savings. The administration is now considering those and other comments as it finalizes the revisions to the Circular. Significant program design and management challenges hinder VA’s ability to provide meaningful and timely support to disabled veterans and their families. VA relies on outmoded medical and economic disability criteria. VA also has difficulty providing veterans with accurate, consistent, and timely benefit decisions, although recent actions have improved timeliness. In assessing veterans’ disabilities, VA remains mired in concepts from the past. VA’s disability programs base eligibility assessments on the presence of medically determinable physical and mental impairments. However, these assessments do not always reflect recent medical and technological advances or their impact on medical conditions that affect the ability to work. VA’s disability programs remain grounded in an approach that equates certain medical impairments with the incapacity to work. Moreover, advances in medicine and technology have reduced the severity of some medical conditions and allowed individuals to live with greater independence and function more effectively in work settings. Also, VA’s rating schedule updates have not incorporated advances in assistive technologies—such as advanced wheelchair design, a new generation of prosthetic devices, and voice recognition systems—that afford some disabled veterans greater capabilities to work. VA has made some progress in updating its rating schedule to reflect medical advances. Revisions generally consist of (1) adding, deleting, and reorganizing medical conditions in the Schedule for Rating Disabilities, (2) revising the criteria for certain qualifying conditions, and (3) changing wording for clarity or to reflect current medical terminology. However, VA’s effort to update its disability criteria within the context of current program design has been slow and is insufficient to provide the up-to-date criteria VA needs to ensure meaningful and equitable benefit decisions. Completing an update of the schedule for one body system has generally taken 5 years or more; the schedule for the ear and other sense organs took 8 years. In August 2002, we recommended that VA use its annual performance plan to delineate strategies for and progress in updating its disability rating schedule. VA did not concur with our recommendation because it believes that developing timetables for future updates to the rating schedule is inappropriate while the initial review is ongoing. In addition, VA’s disability criteria have not kept pace with changes in the labor market. The nature of work has changed in recent decades as the national economy has moved away from manufacturing-based jobs to service- and knowledge-based employment. These changes have affected the skills needed to perform work and the settings in which work occurs. For example, advancements in computers and automated equipment have reduced the need for physical labor.
However, the percentage ratings used in VA’s Schedule for Rating Disabilities are primarily based on physicians’ and lawyers’ estimates made in 1945 about the effects that service-connected impairments have on the average individual’s ability to perform jobs requiring manual or physical labor. VA’s use of a disability schedule that has not been modernized to account for labor market changes raises questions about the equity of VA’s benefit entitlement decisions; VA could be overcompensating some veterans while undercompensating others or denying them compensation entirely. In January 1997, we suggested that the Congress consider directing VA to determine whether the ratings for conditions in the schedule correspond to veterans’ average loss in earnings due to these conditions and adjust disability ratings accordingly. Our work demonstrated that there were generally accepted and widely used approaches to statistically estimate the effect of specific service-connected conditions on potential earnings. These estimates could be used to set disability ratings in the schedule that are appropriate in today’s socio-economic environment. In August 2002, we recommended that VA use its annual performance plan to delineate strategies for and progress in periodically updating labor market data used in its disability determination process. VA did not concur with our recommendation because it does not plan to perform an economic validation of its disability rating schedule, or to revise the schedule based on economic factors. According to VA, the schedule is medically based; represents a consensus among stakeholders in the Congress, VA, and the veteran community; and has been a valid basis for equitably compensating disabled veterans for many years. Even if VA’s schedule updates were completed more quickly, they would not be enough to overcome program design limitations in evaluating disabilities. Because of the limited role of treatment in VA disability programs’ statutory and regulatory design, VA’s efforts to update the rating schedule would not fully capture the benefits afforded by treatment advances and assistive technologies. Current program design limits VA’s ability to assess veterans’ disabilities under corrected conditions, such as the impact of medications on a veteran’s ability to work despite a severe mental illness. In August 2002, we recommended that VA study and report to the Congress on the effects that a comprehensive consideration of medical treatment and assistive technologies would have on its disability programs’ eligibility criteria and benefit package. This study would include estimates of the effects on the size, cost, and management of VA’s disability programs and other relevant VA programs, and would identify any legislative actions needed to initiate and fund such changes. VA did not concur with our recommendation because it believes this would represent a radical change from the current programs, and it questioned whether stakeholders in the Congress and the veterans’ community would accept such a change. VA’s disability program challenges are not unique. For example, the Social Security Administration’s (SSA) disability programs remain grounded in outmoded concepts of disability. Like VA, SSA has not updated its disability criteria to reflect the current state of science, medicine, technology, and labor market conditions. Thus, SSA also needs to reexamine the medical and vocational criteria it uses to determine whether individuals are eligible for benefits.
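The statistical approach we suggested in January 1997, estimating the average earnings loss associated with specific service-connected conditions and aligning schedule ratings to that loss, can be sketched simply. The earnings samples, condition label, and rounding rule below are hypothetical; an actual validation study would use large survey or administrative datasets and control for age, education, and other earnings determinants:

```python
# Illustrative only: estimates average earnings loss for a condition as the
# gap between mean earnings of unaffected and affected veterans, then maps
# that loss to a rating in 10 percent increments. All data are hypothetical.
def implied_rating(earnings_with_condition, earnings_without_condition):
    mean_with = sum(earnings_with_condition) / len(earnings_with_condition)
    mean_without = sum(earnings_without_condition) / len(earnings_without_condition)
    loss_share = max(0.0, (mean_without - mean_with) / mean_without)
    return min(100, round(loss_share * 10) * 10)  # nearest 10 percent increment

# Hypothetical annual earnings samples for veterans with and without condition X
unaffected = [42_000, 38_500, 45_000, 40_200]
with_condition_x = [30_000, 28_700, 33_100, 29_500]

print(f"Implied rating for condition X: "
      f"{implied_rating(with_condition_x, unaffected)} percent")
```

Ratings derived this way would reflect earnings losses in today's labor market rather than 1945-era judgments about manual and physical labor.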
Even if VA brought its disability criteria up to date, it would continue to face challenges in ensuring quality and timely decisions, including ensuring that veterans get consistent decisions—that is, comparable decisions on benefit entitlement and rating percentage—regardless of the regional office making the decisions. VA has made some progress in improving disability program administration, but much remains to be done before VA has a system that can sustain production of accurate, consistent, and timely decisions. VA is making changes that will allow it to better identify accuracy problems at the national, regional office, and individual employee levels. In turn, this will allow VA to identify underlying causes of inaccuracies and target corrective actions, such as additional training. In response to our March 1999 recommendation, VA has centralized accuracy reviews under its Systematic Technical Accuracy Review (STAR) program to meet generally applicable government standards on segregation of duties and organizational independence. Also, the STAR program began reviewing more decisions in fiscal year 2002, with the intent of obtaining statistically valid accuracy data at the regional office level; regional office-level accuracy goals have been incorporated into regional directors’ performance standards. Further, VA is developing a system to measure the accuracy of individual employees’ work; this measurement is tied to employee performance evaluations. While VA has made changes to improve accuracy, it continues to face challenges in ensuring consistent claims decisions. In August 2002, we recommended that VA establish a system to regularly assess and measure the degree of consistency across all levels of VA claims adjudication. While VA agreed that consistency is an important goal, it did not fully respond to our recommendation regarding consistency because it did not describe how it would measure consistency and evaluate progress in reducing any inconsistencies it may find. Instead, VA said that consistency is best achieved through comprehensive training and communication among VA components involved in the adjudication process. We continue to believe that VA will be unable to determine the extent to which such efforts actually improve consistency of decision-making across all levels of VA adjudication now and over time. VA’s major focus over the past 2 years has been on producing more timely decisions for veterans, and it has made significant progress in improving timeliness and reducing the backlog of claims. The Secretary established the VA Claims Processing Task Force, which in October 2001 made specific recommendations to relieve the veterans’ claims backlog and make claims processing more timely. The task force observed that the work management system in many regional offices contributed to inefficiency and an increased number of errors. The task force attributed these problems primarily to the broad scope of duties performed by regional office staff—in particular, veterans service representatives (VSR). For example, VSRs were responsible for both collecting evidence to support claims and answering claimants’ inquiries. Based on the task force’s recommendations, VA implemented its claims process improvement (CPI) initiative in fiscal year 2002. Under this initiative, regional office claims processing operations were reorganized around specialized teams to handle specific stages of the claims process. 
For example, regional offices have teams devoted specifically to claims development, that is, obtaining evidence needed to evaluate claims. Also, VA focused on increasing production of rating-related decisions to help reduce inventory and, in turn, improve timeliness. In fiscal years 2001 and 2002, VA hired and trained hundreds of new claims processing staff. VA also set monthly production goals for fiscal year 2002 for each of its regional offices, incorporating these goals into regional office directors’ performance standards. VA completed almost as many decisions in the first half of fiscal year 2003 (404,000) as in all of fiscal year 2001 (481,000). This increase in production has contributed to a significant inventory reduction; on March 31, 2003, the rating-related inventory was about 301,000 claims, down from about 421,000 at the end of fiscal year 2001. Meanwhile, the timeliness of rating-related decisions has been improving; decisions averaged 199 days in the first half of fiscal year 2003, down from an average of 223 days in fiscal year 2002. While VA has made progress in getting its workload under control and improving timeliness, it will be challenged to sustain this performance. Moreover, it will be difficult to cope with future workload increases due to factors beyond its control, such as future military conflicts, court decisions, legislative mandates, and changes in the filing behavior of veterans. VA is not alone in facing these challenges; SSA is also challenged to improve its ability to provide accurate, consistent, and timely disability decisions to program applicants. For example, after failing in its attempts since 1994 to redesign a more comprehensive quality assurance system, SSA has recently begun a new quality management initiative. Also, SSA has taken steps to provide training and enhance communication to improve the consistency of decisions, but variations in allowance rates continue and a significant number of denied claims are still awarded on appeal. SSA has recently implemented several short-term initiatives not requiring statutory or regulatory changes to reduce processing times but is still evaluating strategies for longer-term solutions. More dramatic gains in timeliness and inventory reduction might require program design changes. For example, in 1996, the Veterans’ Claims Adjudication Commission noted that most disability compensation claims are repeat claims—such as claims for increased disability percentage—and most repeat claims were from veterans with less severe disabilities. The Commission questioned whether concentrating processing resources on these claims, rather than on claims by more severely disabled veterans, was consistent with program intent. Another possible program design change might involve assigning priorities to the processing of claims. For example, claims from veterans with the most severe disabilities and combat-disabled veterans could receive the highest priority attention. Program design changes, including those to address the Commission’s concerns, might require legislative actions. In addition to program design changes, outside studies of VA’s disability claims process identified potential advantages to restructuring VA’s system of 57 regional offices. In its January 1999 report, the Congressional Commission on Servicemembers and Veterans Transition Assistance stated that some regional offices might be so small that their disproportionately large supervisory overhead unnecessarily consumes personnel resources.
Similarly, in its 1997 report, the National Academy of Public Administration stated VA should be able to close a large number of regional offices and achieve significant savings in administrative overhead costs. Apart from the issue of closing regional offices, the Commission highlighted a need to consolidate disability claims processing into fewer locations. VA has consolidated its education assistance and housing loan guaranty programs into fewer than 10 locations, and the Commission encouraged VA to take similar action in the disability programs. VA proposed such a consolidation in 1995 and in that proposal enumerated several potential benefits, such as allowing VA to assign the most experienced and productive adjudication officers and directors to the consolidated offices; facilitating increased specialization and as-needed expert consultation in deciding complex cases; improving the completeness of claims development, the accuracy and consistency of rating decisions, and the clarity of decision explanations; improving overall adjudication quality by increasing the pool of experience and expertise in critical technical areas; and facilitating consistency in decisionmaking through fewer consolidated claims-processing centers. VA has already consolidated some of its pension workload (specifically, income and eligibility verifications) at three regional offices. Also, VA has consolidated at its Philadelphia regional office dependency and indemnity compensation claims by survivors of servicemembers who died on active duty, including those who died during Operation Enduring Freedom and Operation Iraqi Freedom. Mr. Chairman, this concludes my prepared statement. I will be happy to answer any questions that you or Members of the Committee may have. For further information, please contact me at (202) 512-7101. Individuals making key contributions to this testimony include Paul R. Reynolds, James C. Musselwhite, Jr., Irene P. Chu, Pamela A. Dooley, Cherie’ M. Starck, William R. Simerl, Richard J. Wade, Thomas A. Walke, Cheryl A. Brand, Kristin M. Wilson, Greg Whitney, and Daniel Montinez. VA Health Care: Improved Planning Needed for Management of Excess Real Property. GAO-03-326. Washington, D.C.: January 29, 2003. High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 1, 2003. High-Risk Series: Federal Real Property. GAO-03-122. Washington, D.C.: January 1, 2003. Major Management Challenges and Program Risks: Department of Veterans Affairs. GAO-03-110. Washington, D.C.: January 1, 2003. Veterans’ Benefits: Quality Assurance for Disability Claims and Appeals Processing Can Be Further Improved. GAO-02-806. Washington, D.C.: August 16, 2002. SSA and VA Disability Programs: Re-Examination of Disability Criteria Needed to Help Ensure Program Integrity. GAO-02-597. Washington, D.C.: August 9, 2002. VA Long-Term Care: The Availability of Noninstitutional Services Is Uneven. GAO-02-652T. Washington, D.C.: April 25, 2002. VA Long-Term Care: Implementation of Certain Millennium Act Provisions Is Incomplete, and Availability of Noninstitutional Services Is Uneven. GAO-02-510R. Washington, D.C.: March 29, 2002. VA Health Care: More National Action Needed to Reduce Waiting Times, but Some Clinics Have Made Progress. GAO-01-953. Washington, D.C.: August 31, 2001. VA Health Care: Community-Based Clinics Improve Primary Care Access. GAO-01-678T. Washington, D.C.: May 2, 2001. Inadequate Oversight of Laundry Facility at the Department of Veterans Affairs Albany, New York, Medical Center. 
GAO-01-207R. Washington, D.C.: November 30, 2000. VA Health Care: Expanding Food Service Initiatives Could Save Millions. GAO-01-64. Washington, D.C.: November 30, 2000. VA Laundry Service: Consolidations and Competitive Sourcing Could Save Millions. GAO-01-61. Washington, D.C.: November 30, 2000. Veterans’ Health Care: VA Needs Better Data on Extent and Causes of Waiting Times. GAO/HEHS-00-90. Washington, D.C.: May 31, 2000. VA and Defense Health Care: Evolving Health Care Systems Require Rethinking of Resource Sharing Strategies. GAO/HEHS-00-52. Washington, D.C.: May 17, 2000. VA Health Care: VA Is Struggling to Address Asset Realignment Challenges. GAO/T-HEHS-00-88. Washington, D.C.: April 5, 2000. VA Health Care: Improvements Needed in Capital Asset Planning and Budgeting. GAO/HEHS-99-145. Washington, D.C.: August 13, 1999. VA Health Care: Challenges Facing VA in Developing an Asset Realignment Process. GAO/T-HEHS-99-173. Washington, D.C.: July 22, 1999. Veterans’ Affairs: Observations on Selected Features of the Proposed Veterans’ Millennium Health Care Act. GAO/T-HEHS-99-125. Washington, D.C.: May 19, 1999. Veterans’ Affairs: Progress and Challenges in Transforming Health Care. GAO/T-HEHS-99-109. Washington, D.C.: April 15, 1999. VA Health Care: Capital Asset Planning and Budgeting Need Improvement. GAO/T-HEHS-99-83. Washington, D.C.: March 10, 1999. Veterans’ Benefits Claims: Further Improvements Needed in Claims-Processing Accuracy. GAO/HEHS-99-35. Washington, D.C.: March 1, 1999. VA Hospitals: Issues and Challenges for the Future. GAO/HEHS-98-32. Washington, D.C.: April 30, 1998. VA Health Care: Closing a Chicago Hospital Would Save Millions and Enhance Access to Services. GAO/HEHS-98-64. Washington, D.C.: April 16, 1998. VA Health Care: Status of Efforts to Improve Efficiency and Access. GAO/HEHS-98-48. Washington, D.C.: February 6, 1998. VA Disability Compensation: Disability Ratings May Not Reflect Veterans’ Economic Losses. GAO/HEHS-97-9. Washington, D.C.: January 7, 1997. VA Health Care: Issues Affecting Eligibility Reform Efforts. GAO/HEHS-96-160. Washington, D.C.: September 11, 1996. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In previous GAO reports and testimonies on the Department of Veterans Affairs (VA), and in its ongoing reviews, GAO identified major management challenges related to enhancing access to health care, improving the efficiency of health care delivery, and improving the effectiveness of disability programs. This testimony underscores the importance of continuing to make progress in addressing these challenges and ultimately overcoming them. VA has taken actions to address key challenges in its health care and disability programs. However, growing demand for health care and a potentially larger and more complex disability workload may make VA's challenges in these areas more complex. Enhancing access to health care: VA is challenged to deliver timely, convenient health care to its enrolled veteran population. Too many veterans continue to travel too far and wait too long for care. However, shifting care closer to where veterans live is complicated by stakeholder interests.
In addition, VA's efforts to reduce waiting times may be complicated by an anticipated short-term surge in demand for specialty outpatient care. VA also faces difficult challenges in providing equitable access to nursing home care services to a growing elderly veteran population. Improving the efficiency of health care delivery: VA is challenged to find more efficient ways to meet veterans' demand for health care. VA operates a large portfolio of aged buildings that is not well aligned to efficiently meet veterans' needs. As a result, VA faces difficult realignment decisions involving capital investments, consolidations, closures, and contracting with local providers. VA also faces challenges in implementing management changes to improve the efficiency of patient support services, such as food and laundry services. Improving the effectiveness of disability programs: VA is challenged to find more effective ways to compensate veterans with disabilities. VA's outdated disability determination process does not reflect a current view of the relationship between impairments and work capacity. Advances in medicine and technology have allowed some individuals with disabilities to live more independently and work more effectively. VA also faces continuing challenges to improve the timeliness, quality, and consistency of claims processing. Major improvements may require fundamental program changes. GAO designated federal real property, including VA health care infrastructure, and federal disability programs, including VA disability benefits, as high-risk areas in January 2003. GAO did this to draw attention to the need for broad-based transformation in these areas, which is critical to improving the government's performance and ensuring accountability within expected resource limits. |
DOD and the military services classify TWVs by weight or payload capacity into three categories—light, medium, and heavy—although the definitions of each class vary among the services. Each class generally includes multiple variants or models built on a common chassis. For example, the Army’s FMTV consists of 2.5- and 5-ton capacity trucks, each with the same chassis, and includes cargo, tractor, van, wrecker, and dump truck variants. Table 1 lists the TWVs acquired by the military services over five fiscal years, fiscal years 2007 through 2011. Requirements for TWVs have evolved over the last decade, in part, due to the operational threats encountered in Afghanistan and Iraq. TWVs were traditionally viewed as utility vehicles that required little armor because the vehicles operated behind the front lines. However, the tactics used against forces in these countries dictated that vehicles needed more protection. For example, the HMMWV was conceived and designed to support operations in relatively benign environments behind the front line, but it proved to be highly vulnerable to attacks from improvised explosive devices, rocket-propelled grenades, and small arms fire when it was required to operate in urban environments. As a result, DOD identified an urgent operational need for armored tactical vehicles to increase crew protection and mobility of soldiers. Although the initial solution—the Up-Armored HMMWV—provided greater protection, the enemy responded by increasing the size and explosive force of improvised explosive devices and employing new types, which were capable of penetrating even the most heavily armored vehicles. Consequently, the Mine Resistant Ambush Protected (MRAP) vehicle was approved in 2007 as a rapid acquisition capability. DOD recognized that no single manufacturer could provide all of the vehicles needed to meet requirements quickly enough, so it awarded contracts to multiple manufacturers. The Arms Export Control Act (AECA) authorizes the President to control the export of arms, such as TWVs. The authority to promulgate regulations on these items has been delegated to the Secretary of State. State administers arms transfer controls through the International Traffic in Arms Regulations and designates, with the concurrence of DOD, the articles and services deemed to be arms. These arms constitute the United States Munitions List (USML). DOD’s TWVs are generally designated as Category VII (Tanks and Military Vehicles) items on the USML. Arms, including TWVs, can be sold and exported to foreign governments through the Foreign Military Sales (FMS) program or direct commercial sales (DCS). Under the FMS program, the U.S. government procures items on behalf of eligible foreign governments using the same acquisition process used for its own military needs. While State has overall regulatory responsibility for the FMS program and approves such sales of arms to eligible foreign governments, DOD’s Defense Security Cooperation Agency administers the program. Alternatively, the DCS process allows foreign governments to directly negotiate with and purchase arms from U.S. manufacturers. For TWVs controlled on the USML, manufacturers must generally apply for an export license to State’s Directorate of Defense Trade Controls, which authorizes the export of arms to foreign governments. State officials assess all arms export requests through the FMS program and DCS license applications against 12 criteria specified in the Conventional Arms Transfer Policy, as summarized in table 2.
DOD officials assess the technical risks of the sensitive or classified electronic equipment associated with the sale of TWVs to foreign governments, including the type of armor, sensors or weapons attached to the vehicle, and any signature information. Aside from these technologies, State and DOD officials said the departments generally consider the technology associated with TWVs comparable to commercially available trucks and do not have any additional policies pertaining to the sale of TWVs to foreign governments. In accordance with the AECA, recipient countries of arms, including TWVs, must generally agree to a set of U.S. arms transfer conditions, regardless of whether the items are sold through the FMS program or DCS. The conditions include agreeing to use the items only for intended purposes without modification, not to transfer possession to anyone not an agent of the recipient country without prior written consent of the U.S. government, and to maintain the security of any defense article with substantially the same degree of protection afforded to it by the U.S. government. To ensure compliance with these conditions, recipient countries must permit observation and review by U.S. government representatives of the use and possession of U.S. TWVs and other arms. While the majority of TWVs that DOD purchases are regulated on the USML, a small number that lack armor, weapons, or equipment that would allow armor or weapons to be mounted are considered to be dual-use items—having both commercial and military applications. These items are controlled under the Export Administration Act of 1979, which established Commerce's authority to control these items through its Export Administration Regulations and Commerce Control List. On the Commerce Control List, DOD's TWVs are generally designated as Category 9 (Propulsion Systems, Space Vehicles, and Related Equipment) items. For DCS of such items, U.S. manufacturers must comply with the Export Administration Regulations to determine if an export license from Commerce's Bureau of Industry and Security is required. The U.S. TWV industrial base includes seven vehicle manufacturers, over 90 major subsystem suppliers, and potentially thousands of parts and component suppliers. Four of the seven manufacturers provided approximately 92 percent of all TWVs purchased by DOD in fiscal years 2007 through 2011. Figure 1 identifies the manufacturers, the vehicles they produced, and the percent of all vehicles purchased by DOD from each manufacturer in fiscal years 2007 through 2011. Although these manufacturers produced 11 different families of TWVs, which included over 50 vehicle variants, they generally relied on common suppliers for major subsystem components. For example, the manufacturers relied on six or fewer suppliers to provide components, such as engines or tires. In contrast, the manufacturers relied on more than 25 armor suppliers, in part, because there was a shortage of vehicle armor during initial MRAP production. DOD reported that armor requirements arising from the conflicts in Iraq and Afghanistan provided an opportunity for several suppliers to begin producing armor, which eventually resolved the shortage. In addition to these suppliers, manufacturers we met with reported that there were potentially thousands of other companies that produced parts for these vehicles. See figure 2 for more information on the number of suppliers that produced major subsystems on DOD's TWVs.
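One simple way to screen a supply base like the one summarized in figure 2 is to flag subsystems supported by very few qualified suppliers, the kind of single-point-of-failure check DOD's ongoing industrial base studies (discussed below) are intended to perform. A minimal sketch; the subsystem names and supplier counts below are hypothetical, not DOD data:

```python
# Illustrative only: flags subsystems whose supplier count falls at or below
# a risk threshold. Subsystems and counts are hypothetical, not DOD data.
RISK_THRESHOLD = 2  # two or fewer qualified suppliers

subsystem_suppliers = {
    "engines": 4,
    "transmissions": 3,
    "axles": 2,
    "tires": 5,
    "armor": 25,
    "central tire inflation systems": 1,
}

at_risk = {name: count for name, count in subsystem_suppliers.items()
           if count <= RISK_THRESHOLD}
for name, count in sorted(at_risk.items(), key=lambda item: item[1]):
    print(f"{name}: {count} supplier(s) -- potential single point of failure")
```

A screen like this only flags candidates; judging actual risk requires the kind of supplier surveys and site visits described below.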
DOD purchased over 158,000 TWVs in fiscal years 2007 through 2011 but plans to buy significantly fewer from now through fiscal year 2017. DOD demand for TWVs increased dramatically in response to the operational demands and threats experienced by U.S. forces during Operation Enduring Freedom and Operation Iraqi Freedom. For example, in fiscal years 1998 through 2001, before these two wars began, Army budget documents indicated plans to purchase approximately 5,000 HMMWVs. After the start of Operation Enduring Freedom, Army budget documents in 2003 reflected an increased requirement for HMMWVs and, at the time, the Army planned to purchase approximately 23,000 through fiscal year 2009. However, after Operation Iraqi Freedom began, the need for HMMWVs increased further, and the Army reported that it ultimately purchased approximately 64,000 between 2003 and 2009. As U.S. forces began to draw down from the conflicts in Iraq and Afghanistan, DOD's operational requirements for TWVs declined. For example, while DOD bought over 100,000 TWVs in fiscal years 2007 and 2008, DOD plans to purchase fewer than 1,000 TWVs in fiscal years 2015 and 2016. In all, DOD plans to purchase approximately 8,000 TWVs in fiscal years 2012 through 2017, as shown in figure 3. Future defense budgets will likely constrain new vehicle purchases and the size of a fleet the military services will be able to sustain. Army officials told us that it would cost approximately $2.5 billion per year to sustain the Army's current fleet of approximately 260,000 TWVs and meet any new TWV requirements. Officials stated, however, that the Army can no longer afford, and does not need, a fleet of that size, in part due to budget cuts and potential force structure changes. The Army is re-evaluating how many TWVs it needs and can afford, which will be outlined in a revised TWV strategy. In developing this revised strategy, Army officials recognize that the Army has a relatively young fleet of TWVs, averaging 9 years of age, many of which will be part of its fleet through 2040. While this revised strategy has not been completed, the Army has already made changes to reduce its TWV costs. For example, in February 2012 the Army reduced the number of FMTVs it planned to purchase by approximately 7,400 vehicles. At that time, the Army also terminated a HMMWV modernization effort, known as the Modernized Expanded Capability Vehicle, which was intended to improve vehicle performance and crew protection on over 5,700 HMMWVs. Officials stated that this effort was terminated, in part, because of DOD-wide funding constraints. Army officials estimate that these actions will result in a total savings of approximately $2.7 billion in fiscal years 2013 through 2017. Furthermore, Army officials stated that the Army plans to reduce the size of its TWV fleet to match force structure requirements. They also stated that, as of July 2012, the Army plans to reduce its total fleet by over 42,000 vehicles. Officials added that more vehicles could be divested depending on any future force structure changes and budget constraints. Despite budget constraints, the industrial base will have some opportunities over the next several years to produce a new TWV for DOD. The Joint Light Tactical Vehicle (JLTV) is a new DOD program designed to fill the gap between the HMMWV and MRAP by providing near-MRAP level protection while maintaining all-terrain mobility.
As we previously reported, the Army and Marine Corps are pursuing a revised developmental approach for JLTV and awarded technology development contracts to three industry teams. The program completed the technology development phase in January 2012. Last month, the Army awarded three contracts for the JLTV's engineering and manufacturing development phase. While production contracts will not be awarded for some time, DOD reports that it plans to purchase approximately 55,000 JLTVs over a 25-year period with full-rate production beginning in fiscal year 2018. With production of other TWVs for DOD largely coming to an end in fiscal year 2014, DOD considers the JLTV program to be critical in maintaining an industrial base to supply TWVs to the military. In addition to new production, the Army and Marine Corps also plan to invest in sustainment efforts that could be completed by the U.S. TWV industrial base. These efforts include restoring or enhancing the combat capability of vehicles that were destroyed or damaged due to combat operations. For example, Marine Corps officials reported plans to recapitalize approximately 8,000 HMMWVs beginning in fiscal year 2013. In addition, the Army is in the process of resetting, through at least fiscal year 2017, the portion of its FMTV fleet that was deployed, as well as recapitalizing some of its heavy TWVs. Despite the significant decrease in DOD TWV purchases, the four manufacturers we met with generally reported that these sales remain an important part of their revenue stream. However, there is a wide range in the degree to which the manufacturers were reliant on DOD in a given year. For example, as shown in table 3, one manufacturer reported that for 2007 its revenue from sales to DOD accounted for 4 percent of its total revenue, while another manufacturer reported that such revenue was as high as 88 percent, with the other two manufacturers falling within that range. Among the four manufacturers, the extent of reliance on revenue from DOD sales varied, in part, because some manufacturers also sold vehicles in the commercial truck and automotive sectors. Aside from producing TWVs, manufacturers produced or assembled commercial vehicles, such as wreckers, fire trucks, school buses, and handicap-accessible taxis, as well as vehicle components, such as engines, transmissions, and suspensions. According to the four manufacturers, their suppliers of TWV major subsystem components generally produced items in the commercial automotive and truck industries. For example, according to manufacturers, suppliers generally produced parts such as engines, transmissions, axles, and tires for commercial vehicles in addition to supplying parts for TWVs. However, vehicle armor, a major TWV component, is primarily a defense-unique item, and armor suppliers' products were not typically used in the manufacturers' commercial vehicles. DOD currently has several studies under way to better understand the U.S. TWV industrial base, its capabilities, and how declining DOD sales may affect it. In 2011, DOD's Office of Manufacturing and Industrial Base Policy began a multifaceted review of the U.S. TWV industrial base that includes surveying suppliers, conducting site visits, and convening expert panels. The Army's TACOM Life Cycle Management Command also has ongoing studies, including a review to assess the health of the industrial base and others intended to identify its supplier base and any risks associated with sustaining DOD's TWV fleet.
Some of the goals of these different studies are to better understand how different vehicle supply chains affect one another, identify single points of failure in the supply chain, and provide DOD leadership with improved information so that it may better tailor future acquisition policies. U.S. manufacturers sold relatively few TWVs for use by foreign governments in fiscal years 2007 through 2011 when compared to the 158,000 vehicles sold to DOD over that same period. However, most of the manufacturers we met with stated that while sales of TWVs to foreign governments have not equaled those sold to DOD, such sales are becoming an increasingly important source of revenue as DOD purchases fewer vehicles. According to data provided by DOD and the four manufacturers, foreign governments purchased approximately 28,000 TWVs, either through the FMS program or through DCS, in fiscal years 2007 through 2011. In addition to these sales to foreign governments, manufacturers reported that they exported approximately 5,000 other TWVs that differed from the vehicles DOD purchased during that time period. Nearly all TWVs sold to foreign governments were sold through the FMS program rather than through DCS. DOD reports that about 27,000 TWVs were sold through the FMS program, while the four manufacturers we met with reported that about 700 vehicles were sold through DCS in fiscal years 2007 through 2011. See figure 4 for a comparison of TWVs sold to DOD and to foreign governments through the FMS program and DCS in fiscal years 2007 through 2011. Approximately 95 percent of TWVs purchased through the FMS program in fiscal years 2007 through 2011 were paid for using U.S. government funding through different security and military assistance programs. The U.S. Congress authorizes and appropriates funds for assistance programs that support activities, such as security, economic, and governance assistance in foreign countries. Examples of such assistance programs include the Afghanistan Security Forces Fund and the Iraq Security Assistance Fund, which were sources of funding for TWVs purchased for Afghanistan and Iraq through the FMS program. While Afghanistan and Iraq were the largest recipients of U.S.-manufactured TWVs through such assistance programs, DOD officials informed us that as the war efforts there conclude, U.S. funding for TWVs for these two countries’ security forces has declined and is not planned to continue. In addition, a smaller number of TWVs were sold through the FMS program to countries using their own funds. Figure 5 identifies the countries that purchased the most U.S.-manufactured TWVs with U.S. or their own funds through the FMS program. U.S. manufacturers of TWVs and foreign government officials we met with identified a number of interrelated factors that they perceive as affecting whether a foreign government decides to purchase U.S.-manufactured TWVs. These included potential future competition from transfers of excess (used) U.S. military TWVs, competition from foreign manufacturers, and differing foreign requirements for TWVs. In addition, these U.S. manufacturers and foreign government officials expressed mixed views on the effect the U.S. arms transfer control regimes may have on foreign governments’ decisions to buy U.S. vehicles. These officials said that processing delays and end-use restrictions can influence foreign governments’ decisions to buy U.S. TWVs. Despite these issues, foreign government officials said the U.S. 
arms transfer control regimes would not adversely affect their decisions to purchase a U.S.-manufactured TWV that best meets their governments’ requirements. The U.S. manufacturers we met with regard the Army’s intent to reduce its TWV fleet size as a risk to their future sales of TWVs to foreign governments. Army officials said the Army is still assessing its TWV requirements and potential plans to divest over 42,000 vehicles, but they acknowledge that a number of these TWVs could be transferred through the FMS program. The four U.S. manufacturers consider these used vehicles to be a risk to their future sales of U.S. TWVs to foreign governments because foreign governments could be less likely to purchase new vehicles from U.S. manufacturers if the U.S. Army transfers these used vehicles through foreign assistance programs. U.S. manufacturers told us they would like more involvement in DOD’s decisions on its plans for these divested vehicles so they may provide input on potential effects on the industrial base. Commerce’s Bureau of Industry and Security reviews proposed FMS of divested items to identify effects on the relevant industry. During this review, Commerce provides industry with the opportunity to identify any impacts of the potential FMS on marketing or ongoing sales to the recipient country. When approving these transfers, State and Defense Security Cooperation Agency officials said, the U.S. government must also weigh national security and foreign policy concerns, which could outweigh industrial base concerns with transfers of used DOD TWVs to foreign countries. While concerned about the potential for competition from the FMS of these retired vehicles, U.S. manufacturers also view these planned divestitures as a potential source of repair or upgrade business that could help sustain their production capabilities during a period of low DOD demand. Some manufacturers we met with stated that they would like to purchase DOD’s used TWVs, before they are made available to foreign governments, so they may repair or upgrade them and then sell them to foreign governments. DOD is currently reviewing its policies to determine which vehicles, if any, could be sold back to manufacturers. Another manufacturer, while not interested in purchasing the vehicles, expressed interest in providing repair or upgrade services on the used TWVs before they are sold to foreign governments. Defense Security Cooperation Agency officials stated that excess defense articles, such as the used TWVs, are generally made available to foreign governments in “as is” condition and recipient countries are responsible for the cost of any repairs or upgrades they may want to make. They added that in such instances, it could be possible for U.S. manufacturers to perform such services, but it would be at the direction of the purchasing country, not the U.S. government. Foreign government and manufacturer officials that we interviewed identified a number of TWV manufacturers that compete with U.S. manufacturers for international sales. Examples of foreign manufacturers are shown in table 4. Officials from two countries that had not purchased U.S.-manufactured TWVs explained that their countries have a well-established automotive industrial base capable of producing TWVs that meet their governments’ needs. While all of the foreign officials we interviewed reported that their countries had no policies that favor their domestic manufacturers, governments that have not purchased U.S. 
TWVs generally purchased vehicles from domestic manufacturers. For example, foreign officials from one country said that all of their government’s TWVs are assembled within its borders. While not all of the competitors to U.S. TWV manufacturers are headquartered in the purchasing countries, foreign officials reported that many of these companies have established dealer and supplier networks within their countries. Foreign officials reported that these domestic dealer and supplier networks make vehicle sustainment less expensive and more manageable, in part, because it is easier and quicker to obtain replacement parts or have vehicles repaired. In contrast, foreign officials said that U.S. TWV manufacturers do not generally have the same dealer and supplier networks within their countries. They added that this can make maintenance of the U.S. vehicles more expensive, in part, due to the added cost of shipping. In addition to the number of competing TWV manufacturers, foreign officials also reported that there is limited foreign demand for TWVs. Foreign officials reported that their governments purchase relatively few TWVs compared to the U.S. government, in part, because their fleet size requirements are much smaller. Foreign officials we interviewed reported TWV fleets that ranged in size from 2 to 9 percent of the size of the U.S. Army’s fleet. For example, foreign officials from one country stated that their military was in the process of upgrading its entire fleet of approximately 7,500 vehicles, which is less than 3 percent of the size of the U.S. Army’s TWV fleet. Foreign government officials also explained that U.S. manufacturers can generally produce TWVs to meet their governments’ requirements, but the vehicles U.S. TWV manufacturers are producing for DOD do not necessarily align with these requirements. Foreign government officials identified the following areas where their governments’ requirements differ from those of DOD: DOD’s TWVs are generally larger than what their governments can support. For example, officials from one foreign government reported that its military considered purchasing U.S.-manufactured MRAP vehicles but did not have the cargo planes required to transport a vehicle the size and weight of DOD’s MRAP vehicles. Instead, according to these officials, the country purchased a mine and ambush protected vehicle developed by one of its domestic manufacturers that is smaller and lighter than DOD’s MRAP vehicles and better aligned with its transportation capabilities. Their governments do not always require the same level of capabilities afforded by DOD’s TWVs and, in some cases, requirements may be met by commercially available vehicles. For example, foreign government officials identified a number of vehicles in their governments’ tactical fleets that are based on commercial products from automobile companies such as Jeep and Land Rover. Their governments have different automotive or design standards for military vehicles that do not always align with those produced for DOD by U.S. manufacturers. For example, officials from one country said that their military is required to purchase right-side drive vehicles, which are not always supported by U.S. manufacturers. While their military can obtain a waiver to purchase a left-side drive vehicle, this presents training challenges as the majority of the vehicles in its fleet are right-side drive vehicles. Foreign officials said that while U.S. 
manufacturers are capable of meeting these requirements, foreign competitors may be more familiar with them. Manufacturers that we interviewed said they produce or are developing TWVs to better meet foreign customers’ requirements. For example, one U.S. manufacturer said it was developing a right-side drive variant of one of its vehicles, and another manufacturer said that it has a line of TWVs for its international customers that better meets those requirements. U.S. manufacturers and foreign officials expressed mixed views on the effect the U.S. arms transfer control regimes may have on the sale of U.S.-manufactured TWVs to foreign customers. Officials we met with reported that, generally, the U.S. arms transfer control regimes do not inhibit foreign governments from purchasing U.S.-manufactured TWVs. Accordingly, we found that once the FMS or DCS process was initiated, no eligible foreign sales or licenses for U.S. TWVs were denied. For example, State officials reported that no countries eligible to participate in the FMS program were denied requests to purchase TWVs in fiscal years 2007 through 2011. Similarly, State DCS license data indicated that no licenses for vehicle purchases were denied from fiscal years 2008 through 2011. While sales of TWVs to foreign governments are generally approved by the U.S. government once initiated, U.S. manufacturers and foreign officials said that foreign governments may prefer to purchase vehicles manufactured outside the United States, in part, due to the amount of time needed to process sales and license requests and the end-use restrictions associated with the U.S. arms transfer control regimes. Specifically, manufacturers said the congressional notification process can result in lengthy delays during the FMS and DCS approval process. The AECA requires notification to Congress between 15 and 45 days in advance of the government’s intent to approve certain DCS licenses or FMS agreements. Preceding the submission of this required statutory notification to the U.S. Congress, State provides Congress with an informal review period that does not have a fixed time period for action. One manufacturer stated that this informal review period, in one case, lasted over a year, after which the prospective customer decided not to continue with the purchase. Another manufacturer said that the informal congressional notification process is unpredictable because there is no set time limit for review, making it difficult for the manufacturer to meet delivery commitments to foreign customers. State officials acknowledged that the informal congressional notification period can delay the DCS and FMS process because there is no designated time limit for review. According to State officials, the department established a new tiered review process in early 2012 to address this issue by establishing a time-bounded informal review period that is based on the recipient country’s relationship with the U.S. government. The formal notification period remains unchanged. Foreign officials said that when TWVs that meet their governments’ requirements are available from manufacturers outside the United States, AECA restrictions on third party transfers and end-use administrative requirements associated with U.S.-manufactured vehicles could affect their governments’ purchasing decisions. 
Foreign officials explained that there are a number of TWV manufacturers outside the United States that can meet their requirements, and vehicles sold by those manufacturers do not necessarily come with the same end-use restrictions as U.S. vehicles. For example, the AECA restricts the transfer of arms, including U.S.-manufactured TWVs, to a third party without the consent of the U.S. government. Some foreign officials said their governments prefer to use private companies, when possible, to make repairs and maintain their TWV fleets because doing so can reduce costs compared to government repair work. These foreign officials said that U.S. third party transfer restrictions require that their governments obtain permission from the U.S. government before transferring a U.S. TWV to a private company for repairs, which creates an administrative burden. Additionally, foreign governments are required to maintain information on U.S. TWVs’ end-use and possession that must be available to U.S. officials when requested to ensure compliance with U.S. end-use regulations. Foreign officials from one country said the maintenance of this information is an administrative burden and will be more difficult to manage as their government tries to reduce its workforce in a limited budget environment. Foreign officials said that TWVs purchased from manufacturers outside of the United States are not generally encumbered with these same restrictions and administrative burdens, making maintenance of those vehicles easier and cheaper in some cases. State officials acknowledged these concerns from foreign governments but said these restrictions play an important role in protecting U.S. national security interests. Foreign officials reported, however, that the U.S. arms transfer control regimes would not adversely affect their decisions to purchase a U.S. vehicle that best meets their governments’ requirements in terms of capabilities and cost. Foreign officials said that U.S. manufacturers make vehicles that are reliable and highly capable. When their governments have requirements that align with those associated with U.S.-manufactured vehicles, foreign officials said that the U.S. arms transfer control regimes would not be a factor in their governments’ decisions to purchase the vehicles. Foreign officials that we interviewed also said their governments are experienced buyers of U.S. arms and are able to successfully navigate the FMS and DCS processes and U.S. end-use restrictions to obtain the military equipment they require. The volume of TWVs DOD purchased to meet operational requirements in Iraq and Afghanistan was unique due to specific threats. Many of these vehicles are no longer needed, and DOD’s need for new TWVs is expected to decline in coming years. Further, given the current budgetary environment, DOD cannot afford to support the size of its current fleet or buy as many vehicles as it once did. Though U.S. manufacturers increased their production to meet those past needs, they will be challenged in responding to the sharp decline in DOD’s TWV requirements in future years. As DOD continues its studies of the U.S. TWV industrial base, it may be better positioned to address these challenges and to determine how it can mitigate any risks to sustaining its TWV fleet. It is unlikely that sales to foreign governments will ever offset declines in sales to DOD, but foreign sales may be more important to the industrial base now than ever. U.S. 
manufacturers, however, are presented with a number of factors that affect their ability to sell TWVs to foreign governments. While no foreign officials indicated that their governments would not buy U.S. TWVs, there has been relatively limited demand for the vehicles U.S. manufacturers have produced for DOD. Further, there are many foreign manufacturers that can supply vehicles that meet foreign governments’ requirements. Each of the U.S. manufacturers we met with was either selling or developing alternative vehicles that better meet foreign governments’ requirements, but the extent to which those efforts will stimulate additional sales remains to be seen. Further, U.S. manufacturers raised concerns that their competitors could eventually include the U.S. military as it makes plans to divest itself of used TWVs that it could make available to foreign governments at reduced cost or for free. Additionally, while U.S. manufacturers perceived the U.S. arms transfer control regimes to be more burdensome than those of other countries, the regimes are not a determining factor when foreign governments seek to purchase TWVs. We provided a draft of this report to DOD, State, and Commerce, as well as the four manufacturers and five foreign governments with whom we met, for their review and comment. DOD and State provided technical comments and two of the manufacturers provided clarifications, which we incorporated into the report as appropriate. Commerce, two manufacturers, and the five foreign governments informed us that they had no comments. We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army and the Navy; the Secretary of State; the Secretary of Commerce; and the four manufacturers and five foreign governments with whom we met. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix I. In addition to the contact name above, the following staff members made key contributions to this report: Johana R. Ayers, Assistant Director; Patrick Dudley; Dayna Foster; Beth Reed Fritts; Justin Jaynes; Julia Kennon; Roxanna Sun; Robert Swierczek; Bradley Terry; Brian Tittle; and Alyssa Weir. | DOD's need for TWVs dramatically increased in response to operational demands and threats experienced in Afghanistan and Iraq. TWVs primarily transport cargo and personnel in the field and include the High Mobility Multi-purpose Wheeled and Mine Resistant Ambush Protected vehicles. The U.S. TWV industrial base, which includes manufacturers and suppliers of major subsystems, increased production to meet DOD's wartime requirements. That base now faces uncertainties as DOD's budget declines and operational requirements for these vehicles decrease. In addition to sales to DOD, U.S. manufacturers sell vehicles to foreign governments. The Senate Armed Services Committee Report on a bill for the National Defense Authorization Act for Fiscal Year 2012 directed GAO to (1) describe the composition of the U.S. TWV industrial base, (2) determine how many U.S. 
manufactured TWVs were purchased by foreign governments from fiscal years 2007 through 2011, and (3) identify factors perceived as affecting foreign governments' decisions to purchase these vehicles. GAO analyzed data from DOD on U.S. and foreign government TWV purchases, as well as sales data from the four primary U.S. TWV manufacturers. GAO also collected data from five foreign governments, including those that did and did not purchase U.S. TWVs. The U.S. tactical wheeled vehicle (TWV) industrial base includes seven manufacturers that utilize common suppliers of major subsystems, such as engines and armor. Four of these manufacturers reported that their reliance on sales to the Department of Defense (DOD) varies, in part, as they also produce commercial vehicles or parts. Collectively, the seven manufacturers supplied DOD with over 158,000 TWVs to meet wartime needs from fiscal years 2007 through 2011. DOD, however, plans to return to pre-war purchasing levels, buying about 8,000 TWVs over the next several years, in part, due to fewer requirements. Almost 28,000 U.S.-manufactured TWVs were purchased for use by foreign governments from fiscal years 2007 through 2011. Approximately 92 percent of these vehicles were paid for using U.S. security assistance funds provided to foreign governments. Iraq and Afghanistan were the largest recipients of such assistance, but officials stated that DOD does not plan to continue funding TWV purchases for these countries. While sales to foreign governments are unlikely to offset reductions in DOD purchases, manufacturers reported that foreign sales are becoming an increasingly important part of their revenue stream. Sales of U.S.-manufactured TWVs to foreign governments may be affected by multiple interrelated factors, including the availability of used DOD vehicles for sale, foreign competition, differing vehicle requirements, and concerns associated with U.S. arms transfer control regimes. U.S. manufacturers said sales of used Army TWVs to foreign governments could affect their ability to sell new vehicles. U.S. manufacturers and foreign governments also identified a number of non-U.S. manufacturers that produce TWVs that meet foreign governments' requirements, such as right-side drive vehicles. While U.S. manufacturers can produce vehicles that meet these requirements, the vehicles they produced for DOD generally have not met them. Finally, manufacturers and foreign officials had mixed views on how the U.S. arms transfer control regimes may affect foreign governments' decisions to purchase U.S. vehicles. U.S. manufacturers and foreign officials expressed concerns with processing times and U.S. end-use restrictions, but foreign officials also said that such concerns have not been a determining factor when purchasing TWVs that meet their requirements. GAO is not making recommendations in this report. DOD, the Department of State, and two manufacturers provided technical or clarifying comments on a report draft that were incorporated as appropriate. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
DOE has taken several steps to implement the ATVM program. First, it set three goals for the program: increase the fuel economy of U.S. passenger vehicles as a whole, advance U.S. automotive technology, and protect taxpayers’ financial interests. In that regard, EISA calls for the program to make loans to provide funding to automobile manufacturers and component suppliers for projects that re-equip, expand, or establish U.S. facilities that are to build more fuel-efficient passenger cars and light-duty trucks. According to DOE, the program’s goals also support the agency’s goals of building a competitive, low-carbon economy by, among other things, funding vehicles that reduce the use of petroleum-derived fuels, accelerating growth in advanced automotive technology manufacturing, and protecting U.S. taxpayers’ financial interests. DOE, in its interim final rule, also set technical, financial, and environmental requirements that vehicle and components manufacturers must meet to qualify to receive a loan under the program. For example, an established vehicle manufacturer—one that was manufacturing vehicles in 2005—must demonstrate that the adjusted average fuel economy of the fleet of vehicles it produced in its most recent model year was at least equal to that of the fleet of vehicles it produced in model year 2005. Similarly, a manufacturer that was not producing vehicles in 2005 must show that its proposed vehicles’ adjusted average fuel economy will at least equal that of established manufacturers for a similar class of vehicles for model year 2005. For applicants deemed eligible, DOE also uses statutorily based technical criteria to determine which projects are eligible. For example, proposed vehicles must achieve at least 125 percent of the average fuel economy achieved by all manufacturers’ vehicles with substantially similar attributes in 2005. In addition, DOE established criteria for ATVM staff, aided by experts from within and outside DOE, to judge and score the technical and financial merits of applicants and projects deemed eligible, along with policy factors to consider, such as a project’s potential for supporting jobs and whether a project is likely to advance automotive technology. Finally, the Credit Review Board, composed of senior DOE officials, uses the merit scores and other information, including the Office of Management and Budget’s approved subsidy cost estimates for projects, to recommend loan decisions to the Secretary of Energy. To date, the ATVM program has made about $8.4 billion in loans: $5.9 billion to the Ford Motor Company; $1.4 billion to Nissan North America; $529 million to Fisker Automotive, Inc.; $465 million to Tesla Motors, Inc.; and $50 million to The Vehicle Production Group LLC. About 62 percent of the funds loaned—$5.2 billion—are for projects that largely enhance the technologies of conventional vehicles powered by gasoline-fueled internal combustion engines. These projects include such fuel-saving improvements as adding assisted direct start technology to conventional vehicles, which reduces fuel consumption by shutting off the engine when the vehicle is idling (e.g., while at traffic lights) and automatically restarting it with direct fuel injection when the driver releases the brake. 
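The two eligibility tests described above reduce to simple threshold comparisons. The following minimal Python sketch illustrates them; the function names and mpg figures are hypothetical assumptions for illustration, not DOE's actual rule implementation or data.

def manufacturer_eligible(recent_fleet_mpg, fleet_2005_mpg):
    # An established manufacturer's most recent fleet must at least match
    # its model year 2005 adjusted average fuel economy.
    return recent_fleet_mpg >= fleet_2005_mpg

def project_eligible(proposed_vehicle_mpg, similar_2005_avg_mpg):
    # A proposed vehicle must achieve at least 125 percent of the 2005
    # average for vehicles with substantially similar attributes.
    return proposed_vehicle_mpg >= 1.25 * similar_2005_avg_mpg

# Hypothetical example: a 34 mpg sedan against a 25 mpg 2005 class average.
print(manufacturer_eligible(27.0, 26.0))  # True
print(project_eligible(34.0, 25.0))       # True, since 34 >= 31.25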
According to DOE’s analysis, the projects will result in vehicles with improved fuel economy that will contribute in the near term to improving the fuel economy of the passenger vehicles in use in the United States as a whole because the conventional vehicles are to be produced on a large scale relatively quickly and offered at a price that is competitive with other vehicles being offered for sale. DOE used data from the borrowers to estimate the fuel economy in miles per gallon (mpg) of the enhanced conventional vehicles that were considered for ATVM loans. According to our calculations using DOE’s estimates of fuel economy, these projects are expected to result in vehicles with improved fuel economy that exceed both the program’s eligibility requirements and the CAFE targets that will be in place at the time the vehicles are produced —by, on average, 14 and 21 percent, respectively. The remaining 38 percent of the funds loaned—about $3.1 billion— support projects for vehicles and components with newer technologies. Fisker’s loan is for two plug-in hybrid sedan projects—the Karma and the Nina. Tesla’s loan is for an all-electric sedan, the Model S, and Nissan’s loan is for the LEAF, an all-electric vehicle classified by DOE as a small wagon. The Vehicle Production Group’s loan is for a wheelchair-accessible vehicle that will run on compressed natural gas. Finally, a portion of the Ford loan supports projects for manufacturing hybrid and all-electric vehicles. In addition, there are two advanced technology components projects: Nissan’s, to build a manufacturing facility to produce batteries for the LEAF and potentially other vehicles; and Tesla’s, to build a manufacturing facility to produce electric battery packs, electric motors, and electric components for the Tesla Roadster and vehicles from other manufacturers. In contrast to the projects supporting enhancements to conventional vehicles, DOE’s and the borrowers’ analyses indicate that the projects with newer technologies will result in vehicles with far greater fuel economy gains per vehicle but that these vehicles will be sold in smaller volumes, thereby having a less immediate impact on the fuel economy of total U.S. passenger vehicles. According to our calculations using DOE’s fuel economy estimates, the projects for vehicles with newer technologies, like the projects for enhanced conventional vehicles, are expected to result in improved fuel economy that exceeds both the program’s eligibility requirements and CAFE targets—by about 125 percent and about 161 percent respectively. The loans made to date represent about a third of the $25 billion authorized by law, but the program has used 44 percent of the $7.5 billion allocated to pay credit subsidy costs, which is more than was initially anticipated. The $7.5 billion Congress appropriated was based on the Congressional Budget Office’s September 2008 estimated average credit subsidy rate of 30 percent per loan ($7.5 billion divided by $25 billion equals 30 percent). However, the average credit subsidy rate for the $8.4 billion in loans awarded to date is 39 percent—a total of roughly $3.3 billion in credit subsidy costs. At this rate, the $4.2 billion remaining to be used to pay credit subsidy costs will not be sufficient to enable DOE to loan the full $25 billion in loan authority. These higher credit subsidy costs were, in part, a reflection of the risky financial situation of the automotive industry at the time the loans were made. 
For DOE to make loans that use all of the remaining $16.6 billion in loan authority, the credit subsidy rate for the loans would have to average no more than 25 percent ($4.2 billion divided by $16.6 billion). As a result, the program may be unable to loan the full $25 billion allowed by statute. As of May 9, 2011, DOE reported that 16 projects seeking a total of $9.3 billion in loans—representing $3.5 billion in credit subsidy costs—were under consideration. The ATVM program has set procedures for overseeing the financial and technical performance of borrowers and has begun oversight, but at the time of our February report, the agency had not yet engaged engineering expertise for technical oversight as called for by the procedures. To oversee financial performance, staff are to review data submitted by borrowers on their financial health to identify challenges to repaying the loans. Staff also rely on outside auditors to confirm whether funds have been used for allowable expenses. As of February 2011, the auditors had reported instances in which three of the four borrowers did not spend funds as required. According to ATVM officials, these instances were minor—the amounts were small relative to the total value of the loans—and the inappropriate use of funds and the borrowers’ practices have been corrected. The ATVM program’s procedures also specify technical oversight duties, a primary purpose of which is to confirm that borrowers have made sufficient technical progress before the program disburses additional funds. To oversee technical performance, ATVM staff are to analyze information borrowers report on their technical progress and are to use outside engineering expertise to supplement their analysis once borrowers have begun constructing or retrofitting facilities or are performing engineering integration—that is, designing and building vehicle and component production lines. According to our review, several projects needing additional technical oversight are under way, but the program, as of February 2011, had not brought in additional technical oversight expertise to supplement program staff’s oversight. For example, ATVM officials identified one borrower with projects at a stage requiring heightened technical monitoring; however, ATVM program staff alone had monitored the technical progress of the project. ATVM officials told us that the manufacturer has experience with bringing vehicles from concept to production, so additional technical oversight expertise has not been needed, despite the procedures’ calling for it. Further, according to documents we reviewed, at the time of our report, four borrowers—rather than the single one identified by ATVM—had one or more projects that, according to the program’s procedures, had already reached the stage requiring heightened technical monitoring. Because ATVM staff, whose expertise is largely financial rather than technical, had so far provided technical oversight of the loans without the assistance of independent engineering expertise, we found that the program may be at risk of not identifying critical deficiencies as they occur and DOE cannot be adequately assured that the projects will be delivered as agreed. At the time of our report, according to ATVM staff, they were in the process of evaluating one consultant’s proposal to provide engineering expertise and were working with DOE’s Loan Guarantee Program to make that program’s manufacturing consultants available to assist the ATVM program. 
DOE has not developed sufficient performance measures that would enable it to fully assess whether the ATVM program is achieving its three goals. Principles of good governance indicate that agencies should establish quantifiable performance measures to demonstrate how they intend to achieve their program goals and measure the extent to which they have done so. These performance measures should allow agencies to compare their programs’ actual results with desired results and should be linked to program goals. Although the ATVM program has established performance measures for assessing the performance of ATVM-funded vehicles relative to the performance of similar vehicles in model year 2005, the measures stop short of enabling DOE to fully determine the extent to which it has accomplished its overall goal of improving the fuel economy of all passenger vehicles in use in the United States. The measures stop short because they do not isolate the impact of the program on improving U.S. fuel economy from fuel economy improvements that might have occurred in the absence of the program—by consumers investing in more fuel-efficient vehicles not covered by the program in response to high gasoline prices, for example. In addition, the ATVM program lacks performance measures that will enable DOE to assess the extent to which it has achieved the other two goals of the program—advancing automotive technology and protecting taxpayers’ financial interests. In our February 2011 report, to help ensure the effectiveness and accountability of the ATVM program, we recommended that the Secretary of Energy direct the ATVM program to (1) accelerate efforts to engage sufficient engineering expertise to verify that borrowers are delivering projects as agreed and (2) develop sufficient and quantifiable performance measures for its three goals. DOE’s Loan Programs Executive Director disagreed with the first recommendation, saying that the projects were in the very early stages of engineering integration and such expertise had not yet been needed for monitoring. However, at that time, three of the four loans had projects that had been in engineering integration for at least 10 months, and the fourth loan had at least one project that was under construction. We maintained that DOE needed technical expertise engaged in monitoring the loans so that it could become adequately informed about the technical progress of the projects. DOE’s Loan Programs Executive Director also disagreed with the second recommendation. He said that DOE would not create new performance measures for the program’s three goals because, in his view, such measures would expand the program and did not appear to be the intent of Congress. We maintained that by not setting appropriate performance measures for its program goals, DOE was not able to assess its progress in achieving what it set out to do through the program; furthermore, it could not provide Congress with information on whether the program was achieving its goals and warranted continued support. Chairman Bingaman, this concludes my prepared statement. I would be pleased to answer any questions that you, Ranking Member Murkowski, or other Members of the Committee may have at this time. For further information about this testimony, please contact Frank Rusco at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. 
Karla Springer, Assistant Director; Nancy Crothers; Carol Kolarik; Rebecca Makar; Mick Ray; Kiki Theodoropoulous; Barbara Timmerman; and Jeremy Williams made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In the Energy Independence and Security Act of 2007, Congress mandated higher vehicle fuel economy by model year 2020 and established the Advanced Technology Vehicles Manufacturing (ATVM) loan program in the Department of Energy (DOE). ATVM is to provide up to $25 billion in loans for more fuel-efficient vehicles and components. Congress also provided $7.5 billion to pay the required credit subsidy costs--the government's estimated net long-term cost, in present value terms, of the loans. This testimony is based on GAO's February 2011 report on the ATVM loan program (GAO-11-145). It discusses (1) steps DOE has taken to implement the program, (2) progress in awarding loans, (3) how the program is overseeing the loans, and (4) the extent to which DOE can assess progress toward its goals. DOE has taken several steps to implement the ATVM program. First, it set three program goals: increase the fuel economy of U.S. passenger vehicles as a whole, advance U.S. automotive technology, and protect taxpayers' financial interests. DOE also set technical, financial, and environmental eligibility requirements for applicants. In addition, DOE established criteria for judging the technical and financial merits of applicants and projects deemed eligible, and policy factors to consider, such as a project's potential for supporting jobs. DOE established procedures for ATVM staff, aided by experts from within and outside DOE, to score applicants and projects. Finally, the Credit Review Board, composed of senior DOE officials, uses the scores and other information to recommend loan decisions to the Secretary of Energy. The ATVM program, as of May 2011, had made $8.4 billion in loans that DOE expects to yield fuel economy improvements in the near term along with greater advances, through newer technologies, in years to come. Although the loans represent about a third of the $25 billion authorized by law, the program has used 44 percent of the $7.5 billion allocated to pay credit subsidy costs, which is more than was initially anticipated. These higher credit subsidy costs were, in part, a reflection of the risky financial situation of the automotive industry at the time the loans were made. As a result of the higher credit subsidy costs, the program may be unable to loan the full $25 billion allowed by statute. The ATVM program has set procedures for overseeing the financial and technical performance of borrowers and has begun oversight, but at the time of our February report it had not yet engaged engineering expertise needed for technical oversight as called for by its procedures. To oversee financial performance, staff review data submitted by borrowers on their financial health to identify challenges to repaying the loans. Staff also rely on outside auditors to confirm whether funds have been used for allowable expenses. 
To oversee technical performance, ATVM staff are to analyze information borrowers report on their technical progress and are to use outside engineering expertise to supplement their analysis, as needed. According to our review, projects needing additional technical oversight are under way, and the ATVM staff lack the engineering expertise called for by the program's procedures for adequately overseeing technical aspects of the projects. However, the program had not yet engaged such expertise. As a result, DOE cannot be adequately assured that the projects will be delivered as agreed. DOE has not developed sufficient performance measures that would enable it to fully assess progress toward achieving its three program goals. For example, DOE has a measure for assessing the fuel economy gains for the vehicles produced under the program, but the measure falls short because it does not account for, among other things, the fuel economy improvements that would have occurred if consumers purchased more fuel-efficient vehicles not covered by the program. Principles of good governance call for performance measures tied to goals as a means of assessing the extent to which goals have been achieved. GAO is making no new recommendations at this time. In the February report, GAO recommended that DOE (1) accelerate efforts to engage engineering expertise and (2) develop sufficient, quantifiable performance measures. DOE disagreed with the recommendations, stating that such expertise had not yet been needed and that performance measures would expand the scope of the program. GAO continues to believe that these recommendations are needed to help ensure that DOE is achieving its goals and is accountable to Congress. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Managed by DHS’s Customs and Border Protection (CBP), SBInet is to strengthen CBP’s ability to detect, identify, classify, track, and respond to illegal breaches at and between ports of entry. CBP’s SBI Program Office is responsible for managing key acquisition functions associated with SBInet, including tracking and overseeing the prime contractor. In September 2006, CBP awarded a 3-year contract to the Boeing Company for SBInet, with three additional 1-year options. As the prime contractor, Boeing is responsible for designing, producing, testing, deploying, and sustaining the system. In September 2009, CBP extended its contract with Boeing for the first option year. CBP is acquiring SBInet incrementally in a series of discrete units of capabilities, referred to as “blocks.” Each block is to deliver one or more system capabilities from a subset of the total system requirements. In August 2008, the DHS Acquisition Review Board decided to delay the initial deployment of Block 1 of SBInet so that fiscal year 2008 funding could be reallocated to complete physical infrastructure projects. In addition, the board directed the SBInet System Program Office (SPO) to deliver a range of program documentation, including an updated Test and Evaluation Master Plan (TEMP), detailed test plans, and a detailed schedule for deploying Block 1 to two initial sites in the Tucson Sector of the southwest border. This resulted in a revised timeline for deploying Block 1, first to the Tucson Border Patrol Station (TUS-1) in April 2009, and then to the Ajo Border Patrol Station (AJO-1) in June 2009. Together, these two deployments are to cover 53 miles of the 1,989-mile-long southern border. However, the SBI Executive Director told us in December 2009 that these and other SBInet scheduled milestones were being reevaluated. As of January 2010, the TUS-1 system is scheduled for government acceptance in September 2010, with AJO-1 acceptance in November 2010. However, this schedule has yet to be approved by CBP. Testing is essential to knowing whether the system meets defined requirements and performs as intended. Effective test management involves, among other things, developing well-defined test plans and procedures to guide test execution. It is intended to identify and resolve system quality and performance problems as early as possible in the system development life cycle. DHS has not effectively managed key aspects of SBInet testing, which has in turn increased the risk that the system will not perform as expected and will take longer and cost more than necessary. While the department’s testing approach appropriately consists of a series of progressively expansive test events, some of which have yet to be completed, test plans and test cases for recently executed test events were not defined in accordance with relevant guidance. For example, none of the plans for tests of system components addressed testing risks and mitigation strategies. Further, SBInet test procedures were generally not executed as written. Specifically, about 70 percent of the procedures for key test events were rewritten extemporaneously during execution because persons conducting the tests determined that the approved procedures were not sufficient or accurate. Moreover, changes to these procedures were not made according to a documented quality assurance process but were instead made based on an undocumented understanding that program officials said they established with the contractor. 
While some of these changes were relatively minor, others were significant, such as adding requirements or completely rewriting verification steps. The volume and nature of the changes made to the test procedures, in conjunction with the lack of a documented quality assurance process, increases the risk that system problems may not be discovered until later in the sequence of testing. This concern is underscored by a program office letter to the prime contractor stating that changes made to system qualification test procedures appeared to be designed to pass the test instead of being designed to qualify the system. These limitations are due, among other things, to a lack of detailed guidance in the TEMP, the program’s aggressive milestones and schedule, and ambiguities in requirements. Collectively, these limitations increase the likelihood that testing will not discover system issues or demonstrate the system’s ability to perform as intended. The number of new SBInet defects that have been discovered during testing has increased faster than the number that has been fixed. (See figure 1 for the trend in the number of open defects from March 2008 to July 2009.) As we previously reported, such an upward trend is indicative of an immature system. Some of the defects found during testing have been significant, prompting the DHS Acquisition Review Board in February 2009 to postpone deployment of Block 1 capabilities to TUS-1 and AJO-1. These defects included the radar circuit breaker frequently tripping when the radar dish rotated beyond its intended limits, COP workstations crashing, and blurry camera images, among others. While program officials have characterized the defects and problems found during development and testing as not being “show stoppers,” they have nevertheless caused delays, extended testing, and required time and effort to fix. Moreover, the SPO and its contractor have continued to find problems that further impact the program’s schedule. For example, the radar problems mentioned previously were addressed by installing a workaround that included a remote ability to reactivate the circuit breaker via software, which alleviated the need to send maintenance workers out to the tower to manually reset the circuit. However, this workaround did not fully resolve the problem, and program officials said that root cause analysis continues on related radar power spikes and unintended acceleration of the radar dish that occasionally render the system inoperable. One factor that has contributed to the time and resources needed to resolve this radar problem, and potentially other problems, is the prime contractor’s limited ability to determine the root causes of defects. According to program officials, including the SBI Executive Director, the contractor’s initial efforts to isolate the cause of the radar problems were flawed and inadequate. Program officials added, however, that they have seen improvements in the contractor’s efforts to resolve technical issues. Along with defects revealed by system testing, Border Patrol operators participating in an April 2009 user assessment identified a number of concerns. During the assessment, operators compared the performance of Block 1 capabilities to those of existing technologies. While Border Patrol agents noted that Block 1 offered functionality above existing technologies, they said it was not adequate for optimal effectiveness in detecting items of interest along the border. 
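The open-defect trend shown in figure 1 is, mechanically, cumulative defects found minus cumulative defects fixed. The short Python sketch below illustrates the computation; the monthly counts are invented for the example and are not SBInet data.

from itertools import accumulate

found = [40, 55, 60, 70, 80, 90]   # hypothetical defects discovered per month
fixed = [30, 35, 40, 45, 50, 55]   # hypothetical defects resolved per month

# Open defects at the end of each month: cumulative found minus cumulative fixed.
open_defects = [f - x for f, x in zip(accumulate(found), accumulate(fixed))]
print(open_defects)  # [10, 30, 50, 75, 105, 140] -- a rising backlog like figure 1's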
Users also raised concerns about the accuracy of Block 1’s radar, the range of its cameras, and the quality of its video. Officials attributed some of the identified problems to users’ insufficient familiarity with Block 1; however, Border Patrol officials reported that the participating agents had experience with the existing technologies and had received 2 days of training prior to the assessment. The Border Patrol thus maintained that the concerns generated should be considered operationally relevant. Effectively managing identified defects requires a defined process for, among other things, assigning priorities to each defect and ensuring that more severe ones are given priority attention. However, the SPO does not have such a documented approach but instead relies on the prime contractor to do so. Under this approach, defects were not consistently assigned priorities. Specifically, about 60 percent (or 801 of 1,333) of Block 1 defects identified from March 2008 to July 2009 were not assigned a priority. This is partly attributable to the SPO’s lack of a defined process for prioritizing and managing defects. Officials acknowledged this and stated that they intend to have the contractor prioritize all defects in advance of future test readiness reviews. Until defects are managed on a priority basis, the program office cannot fully understand Block 1’s maturity or its exposure to related risks, nor can it make informed decisions about allocating limited resources to address defects. The SPO does not have its own process for testing the relevance to SBInet of technologies that are maturing or otherwise available from industry or other government entities. Instead, it relies on DHS’s Science and Technology Directorate (S&T), whose mission is to provide technology solutions that assist DHS programs in achieving their missions. To leverage S&T, CBP signed a multiyear Interagency Agreement with the directorate in August 2007. According to this agreement, S&T is to research, develop, assess, test, and report on available and emerging technologies that could be incorporated into the SBInet system. To date, S&T has focused on potential technologies to fill known performance gaps or improve upon already-made technology choices, such as gaps in the radar system’s ability to distinguish true radar hits from false alarms. S&T officials told us that they interact with Department of Defense (DOD) components and research entities to identify DOD systems for SBInet to leverage. In this regard, SPO officials stated that the current SBInet system makes use of DOD technologies, such as common operating picture software and radar systems. Nevertheless, S&T officials added that defense-related technologies are not always a good fit with SBInet, due to operational differences. To improve the planning and execution of future test events and the resolution and disclosure of system problems, we are making the following four recommendations to DHS: ● Revise the SBInet Test and Evaluation Master Plan to include explicit criteria for assessing the quality of test documentation and for analyzing, prioritizing, and resolving defects. ● Ensure that test schedules, plans, cases, and procedures are adequately reviewed and approved consistent with the Test and Evaluation Master Plan. ● Ensure that sufficient time is provided for reviewing and approving test documentation prior to beginning a given test event. 
● Triage the full inventory of unresolved problems, including identified user concerns, and periodically report the status of the highest priority defects to Customs and Border Protection and Department of Homeland Security leadership. In written comments on a draft of our report, DHS stated that the report was factually sound, and it agreed with our last three recommendations and with all but one aspect of the first. DHS also described actions under way or planned to address the recommendations. In closing, I would like to stress how integral effective testing and problem resolution are to successfully acquiring and deploying a large-scale, complex system like SBInet Block 1. As such, it is important that each phase of Block 1 testing be managed with rigor and discipline. To do less increases the risk that a deployed version of the system will not perform as intended, and will ultimately require costly and time-consuming rework to fix problems found later rather than sooner. Compounding this risk is the unfavorable trend in the number of unresolved system problems, and the lack of visibility into the true magnitude of these problems’ severity. Given that major test events remain to be planned and conducted, which in turn are likely to identify additional system problems, it is important to correct these testing and problem resolution weaknesses. This concludes my prepared statement. I would be pleased to respond to any questions that you or other Members of the Subcommittees may have. For questions about this statement, please contact Randolph C. Hite at (202) 512-3439 or [email protected]. Individuals making key contributions to this testimony include Deborah Davis, Assistant Director; Carl Barden, James Crimmer, Neil Doherty, Lauren Giroux, Nancy Glover, Dan Gordon, Lee McCracken, Sushmita Srikanth, and Jennifer Stavros-Turner. SBInet’s Commitment, Progress, and Acquisition Management. Our objectives are to determine the extent to which DHS has (1) defined the scope of its proposed system solution, (2) developed a reliable schedule for delivering this solution, (3) demonstrated the cost-effectiveness of this solution, (4) acquired this solution in accordance with key life cycle management processes, and (5) addressed our recent recommendations. We plan to report our results in April 2010. SBInet’s Contractor Management and Oversight. Our objectives are to determine the extent to which DHS (1) has established and implemented effective controls for managing and overseeing the SBInet prime contractor and (2) is effectively monitoring the prime contractor's progress in meeting cost and schedule expectations. We plan to report our results during the summer of 2010. Secure Border Initiative Financial Management Controls Over Contractor Oversight. Our objectives are to determine the extent to which DHS has (1) developed internal control procedures over SBInet contractor invoice processing and contractor compliance with selected key contract terms and conditions and (2) implemented internal control procedures to ensure payments to SBInet’s prime contractor are proper and in compliance with selected key contract terms and conditions. We plan to report our results during the summer of 2010. (310665) This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. 
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | This testimony is based on our report "Secure Border Initiative: DHS Needs to Address Testing and Performance Limitations That Place Key Technology Program at Risk." In September 2008, we reported to Congress that important aspects of SBInet were ambiguous and in a continuous state of flux, making it unclear what technology capabilities were to be delivered and when. In addition, the program did not have an approved integrated master schedule to guide the program's execution, and key milestones continued to slip. This schedule-related risk was exacerbated by the continuous change in, and the absence of a clear definition of, the approach used to define, develop, acquire, test, and deploy SBInet. Furthermore, different levels of SBInet requirements were not properly aligned, and all requirements had not been properly defined and validated. Also, the program office had not tested the individual system components to be deployed to initial locations, even though the contractor had initiated integration testing of these components with other system components and subsystems, and its test management strategy did not contain, among other things, a clear definition of testing roles and responsibilities or sufficient detail to effectively guide planning for specific test events, such as milestones and metrics. Accordingly, we made recommendations to address these weaknesses, which DHS largely agreed to implement. In light of SBInet's important mission, high cost, and risks, you asked us to conduct a series of four SBInet reviews. This statement and report being released today provide the results for the first of these reviews. Specifically, they address (1) the extent to which SBInet testing has been effectively managed, including identifying the types of tests performed and whether they were well planned and executed; (2) what the results of testing show; and (3) what processes are being used to test and incorporate maturing technologies into SBInet. SBInet testing has not been adequately managed, as illustrated by poorly defined test plans and numerous and extensive last-minute changes to test procedures. Further, testing that has been performed identified a growing number of system performance and quality problems--a trend that is not indicative of a maturing system that is ready for deployment anytime soon. Further, while some of these problems have been significant, the collective magnitude of the problems is not clear because they have not been prioritized, user reactions to the system continue to raise concerns, and key test events remain to be conducted. Collectively, these limitations increase the risk that the system will ultimately not perform as expected and will take longer and cost more than necessary to implement. For DHS to increase its chances of delivering a version of SBInet for operational use, we are recommending that DHS improve the planning and execution of future test events and the resolution and disclosure of system problems. DHS agreed with our recommendations. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
In May 2003, the Army and Boeing entered into an "other transaction agreement" for the system development and demonstration phase of the FCS program. Other transaction agreements are not subject to the Federal Acquisition Regulation (FAR), which gave the Army considerable flexibility to negotiate the terms and conditions with Boeing as the LSI. The Army's rationale for using such an agreement was to encourage innovation and to use its wide latitude in tailoring business, organizational, and technical relationships to achieve the program goals. Congress raised concerns over the use of the agreement for the development of a program as large and risky as FCS, and the Secretary of the Army directed that the other transaction agreement be converted to a FAR-based contract. In March 2006, the Army definitized a FAR-based contract with Boeing for the remainder of FCS development. Science Applications International Corporation (SAIC) has a contract with Boeing to provide assistance in performing the LSI functions. All of the work performed from May 2003 through September 2005 is accounted for under the prior other transaction agreement, and all work after September 2005 is included under the new contract. Appendix II of this report provides a brief discussion of the conversion of the FCS contract from an other transaction agreement to a FAR-based contract.

The LSI as an entity is intended as a single contractor responsible for developing and integrating the FCS system of systems within a given budget and schedule. Furthermore, the LSI was intended to act throughout the system development and demonstration phase to optimize the FCS capability, maximize competition, ensure interoperability, and maintain commonality to reduce life-cycle cost. The Army established a number of key tenets that it wanted to achieve on the FCS program, in partnership with the LSI. They include:

● create opportunity for the best of industry to participate;
● leverage the government technology base to the maximum extent;
● associate ongoing enabling efforts with LSI-led activity;
● maintain a collaborative environment from design through production;
● as a minimum, achieve commonality at the subsystem/component level;
● design/plan for technology integration and insertion;
● maintain and shape the industrial base for the future;
● retain competition throughout future force acquisition;
● have appropriate government involvement in procurement decisions;
● achieve consistent and continuous definition of requirements;
● maintain and shape the government acquisition community;
● achieve program affordability—balance performance and cost; and
● have a "one team" operating with partnership and teamwork.

An LSI creates an additional tier in the management structure of the FCS program that would not appear in an acquisition program for a major individual system. The Army itself does not have a direct contractual relationship with the prime-item developers, as it would when buying a single system, but rather works through the LSI. This additional tier serves as a layer of separation between the customer (the Army) and the platform developers (the tier immediately below the LSI). The Army believes that this additional layer is required to bring all of the developers together to a single point of communication and interaction for the Army. The Army, the LSI, and the platform developers are all members of the "one-team" structure of the FCS program. The current contract between the Army and Boeing continues into fiscal year 2015, and the Army intends to begin low-rate initial production in 2013.
The Army intends to start full-rate production in 2016. The Army plans to achieve initial operational capability in fiscal year 2015 and full operational capability in fiscal year 2017. The Army intends to continue FCS procurement through fiscal year 2030, eventually equipping 15 brigade combat teams—about one-third of the current active force. Figure 1 shows the schedule of key events for the FCS program.

The Army has recently decided to make a number of key changes to the FCS program as it was considering funding plans for the fiscal year 2008-2013 period. These changes include eliminating or deferring certain FCS systems, increasing quantities of some systems, reducing quantities of other systems, reducing annual procurement rates, and delaying key program milestone dates. For example, the Army deleted or deferred four systems from the FCS system-of-systems architecture and delayed the start of initial FCS production by five months.

The FCS program's complexity and aggressive schedule are unprecedented for the Army. As we have reported, the program was not near ready to start the system development and demonstration phase when it did, primarily because the majority of the needed technologies were immature. The Army not only went forward with FCS, it did so on a planned schedule shorter than that of a single new system. The Army determined that, with its existing acquisition workforce and organizations, it did not have the agility, capability, or capacity to manage the program without an LSI to assist with certain aspects of program management. In using an LSI, the Army also wanted to structure a development contract that would create incentives for the contractor to succeed and profit in development and to increase competition at lower levels in the supplier chain. The sheer scope and complexity of the program was driven by the Army's desire to concurrently develop and field all systems as an integrated unit. The backbone of this unit is a ubiquitous network through which all FCS systems will operate and communicate—a first-of-a-kind network that will also have to be developed.

FCS represents a huge technological leap in system development and acquisition. Some of the major technical challenges faced in the program include:

● The 14 major weapon systems or platforms have to be designed and integrated simultaneously and within strict size and weight limitations.
● At least 46 technologies that are considered critical to achieving key performance capabilities will need to be matured and integrated into the system of systems.
● The development, demonstration, and production of as many as 170 complementary systems and associated programs must be synchronized with FCS content and schedule. This will also involve developing about 100 network interfaces so that FCS can be interoperable with other Army and joint forces.
● An estimated 63 million lines of software code must be developed—more than 3 times the amount being developed for the Joint Strike Fighter program.

In addition to the complexity of the integration task, the Army also outlined an unprecedented timeline for FCS's development—about 5½ years—a shorter timeline than is typical for a single weapon system development. We have pointed out a third source of risk, in addition to complexity and schedule: the program started before it was ready for system development and demonstration because the majority of its critical technologies were immature. Mature technologies are central to having a sound business case at the start of development.
Cognizant of these risks, the Army believed it could achieve its goals through use of an LSI because of the technical expertise and workforce flexibility that a private company could bring to the program. In fact, the Army wanted the LSI to help it define the FCS solution. The Army determined it could not meet the challenges of the FCS scope and accelerated schedule with its workforce alone and with traditional management approaches. Army leadership saw its workforce as stovepiped into organizations whose areas of expertise were not a full match for what FCS needed and that were not large enough, with the right skills, to staff several separate program offices. Army leadership did not see its workforce as being well suited to making the tradeoffs and integration that the FCS program demanded.

Because FCS is an integrated system of systems, defining requirements and designing solutions for it would necessitate crossing two sets of organizational lines. The first involves the Army's traditional warfare communities, such as infantry, armor, artillery, and aviation. In the past, these communities developed their own requirements and their own weapon systems. FCS program officials noted that the Army had little success integrating such separately developed platforms because the individual communities did not coordinate their development efforts; specifically, they did not interface across communities during development to ensure systems were being designed to work with each other. A second organizational line involves the users, who develop requirements, and the developers, who manage the weapon systems. Much of the Army's previous experience with major integration efforts was problematic because of a lack of coordination between the developers and the users. Each system was developed separately and integrated into the force after the fact. The users' needs to have multiple systems integrated and working together in the field were thus not sufficiently considered in the development process. The Army believed that an LSI could achieve this more effectively than an Army program office because a contractor had greater flexibility to work across these lines.

Capability of the Army's workforce was also a factor in the decision to use an LSI. The Army's traditional areas of technical expertise, such as in armored vehicles like tanks, were not sufficient to address all the expertise needed for FCS. FCS performance is controlled to a great extent—estimated at 95 percent—by software. Current estimates put the amount of software needed at 63 million lines—the most ever for a weapon system—much of which will be needed for the information network that is the heart of FCS. The Army did not have sufficient skills in software and networks to manage this effort. Even with the LSI, the relative thinness of the Army's expertise in these areas is evident in the integrated product teams through which the Army and the LSI jointly manage the program. For example, according to an LSI program official, there are 28 LSI representatives for every Army representative on the team responsible for developing the information network. By contrast, according to data provided by an Army official, on the manned ground vehicles team, where the Army has more expertise, there are only about 15 LSI representatives for every Army representative. Service contractors separate from the LSI are counted among the Army program office representatives; thus, the actual number of individuals who work directly for the Army is lower than these figures suggest.
A third factor was workforce capacity. If the FCS platforms were to be developed as separate programs, each platform would need its own workforce, meaning several separate Army program offices, each with a full complement of acquisition and technical staff. The demand for this many people would have been a challenge in light of the decline in the defense acquisition workforce throughout the 1990s. DOD estimates put the decline of its civilian workforce at 38 percent—much of it in acquisitions—from 1989 through 2002. Hence, the Army would not have had the capacity to manage a multi-system effort like FCS with separate program offices and likely would have had to turn to contractors to fully staff the program offices.

In addition to the complexity and workforce implications of FCS, the Army saw an opportunity for the LSI to give its best effort in development and to create more competition at lower supplier levels. Army leadership involved with setting up the FCS program believed that, traditionally, contractors made much of their profit in production, not in research and development. Thus, the Army reasoned, contractors were not as motivated by research and development as they were by production. Army leadership believed that by using an LSI that would not necessarily have to be retained for production, the Army could get the best effort from the contractor during the system development and demonstration phase, while at the same time making the effort profitable for the contractor.

Army leadership also set up the FCS program and contract in such a way that it would create more competition and give the Army more influence over the selection of suppliers below the LSI. Army leadership noted that, traditionally, once the Army hired a prime contractor, that contractor would bring its own supplier chains, and the Army was not very involved in the choice of the suppliers. In FCS, the prime contractor—or LSI—is mainly an integrator, and the Army called for the LSI to hold a competition for the next tier of contractors. The Army had veto power over these selections. In addition, the Army directed that the LSI contract with integrators at lower levels in the program, and the Army has been involved with these selections. These integrators also hold competitions to select suppliers for those systems. This strategy kept the first tier of contractors (the one-team) from bringing their own supplier chains and pushed competition and Army visibility lower in the supplier chain. It was also a means for the Army to ensure commonality of key subsystems across FCS platforms. Thus, for example, each of the manned ground vehicles would use the same sensors and engines, rather than following the past practice of each vehicle having its own unique set of subsystems.

The relationship between the Army and the LSI is complex. On the one hand, the LSI plays the traditional role of developing a product for its customer, the Army; on the other hand, the LSI also performs certain program management and integration responsibilities for the entire program and has a partner-like relationship with the Army. In forging a close, partner-like relationship with the LSI, the Army sought to gain advantages such as maintaining flexibility to deal with shifting priorities. At the same time, this relationship, coupled with the vast scope of FCS and the synonymy of the program with the future Army, poses risks for the Army's ability to provide independent oversight over the long term.
OSD is in a position to provide this oversight but thus far has largely accepted the program and its changes as defined by the Army, even though the program is at wide variance with the best practices embodied in OSD's own acquisition policies.

For the FCS program, Boeing serves as a traditional supplier, developing two software-intensive subsystems for the Army. Specifically, the Boeing unit that is serving as LSI is developing the System of Systems Common Operating Environment (SOSCOE), and a separate Boeing unit is developing the Warfighter Machine Interface (WMI). Both are critical to the success of FCS and, as noted by one program manager, will affect the FCS systems being developed by other contractors. As part of the original 2003 other transaction agreement to begin system development and demonstration, the LSI was permitted to internally develop SOSCOE rather than contracting that work out to a separate supplier. This make (rather than buy) decision was approved by the Army. Referred to in the statement of work as the "information management backbone" for FCS, SOSCOE has been likened to a computer operating system like Microsoft Windows®. All FCS systems will have to interface with this software to function as a single, integrated brigade combat team. Ultimately, the success of the FCS program hinges on the successful development of SOSCOE.

Boeing is also developing WMI. This work was awarded as a separate competitive subcontract by Boeing as the LSI to a separate unit within the Boeing Company under the original 2003 other transaction agreement. The software will provide a common interface through which the soldiers in the brigade combat unit receive information. The goal of this software is to provide an integrated presentation of all types of battlefield information. Subsequent to the award of WMI, the current FCS contract for system development and demonstration was definitized with language that provides a process to mitigate potential conflicts of interest on the part of Boeing or SAIC as LSI. Under the predecessor other transaction agreement, an organizational conflict of interest clause required that certain safeguards be put in place when either a Boeing or SAIC unit wanted to compete for a subcontract. The current contract includes an organizational conflict of interest clause that completely prohibits either company from competing for any work at any tier for any proposed subcontract under the FCS contract. So, although an award was made to Boeing under the predecessor other transaction agreement, any further awards are prohibited for the duration of the contract.

The government's relationship with a contractor—regardless of whether it is an LSI or a more traditional prime contractor—for a major project like a weapon system can range from a distant, arms-length relationship to a close, partner-like relationship. An arms-length relationship is characterized by separation between the government as the customer and the contractor as the supplier or developer. In this arrangement, communications between the government and the contractor are more likely to be periodic and formal. For weapon system programs, these kinds of relationships can often be found in situations in which the government can establish detailed technical specifications for the weapon system, enabling the contractor to design and develop a weapon system to meet the specifications without much government involvement.
An arms-length relationship optimizes the independence of the government by minimizing the interaction between its staff and that of the contractor. The downside of this type of relationship is that information can flow slowly between the two parties, and decision-making can be sequential and untimely. For example, if the government were to wait 6 or more months between program reviews, work done in the interim could go astray from the government's wishes and have to be redone.

In a partner-like relationship, the government and the contractor work together on a continual basis to decide what work is to be done. Over the past 10 years, DOD has attempted to employ more partner-like arrangements on its programs. For example, in the 1990s, DOD program offices began employing integrated product teams, which are multidisciplinary teams that have the cross-functional talent from both the government and the contractor to make more informed decisions about a product's design, production, and support. In addition, DOD has attempted to increase its use of performance-based contracting, in which agencies contract for results rather than processes and leave the determination of how best to achieve the results to the contractor. DOD guidance on performance-based contracting states that a positive relationship between the government and the contractor is essential to that kind of arrangement. For example, the guidance notes that the government and industry should work together as a team to communicate expectations, agree on common goals, and identify and address problems early on to achieve desirable outcomes. Such a partner-like relationship is intended to enable more real-time, better-informed decisions, reduce rework, and provide increased flexibility to adjust to new demands.

A partner-like relationship can also pose risks for the government. Depending on the closeness of the working relationship, the government can become increasingly vested in the results of shared decisions and runs the risk of being less able to provide oversight than in an arms-length relationship, especially when the government is disadvantaged in terms of workforce and skills. In the case of FCS, the partner-like relationship between the Army and the LSI breaks new ground, and as such these risks are present. More specifically, in FCS the Army is more involved in the selection of subcontractors than we have seen on other programs, which can, over time, make the Army somewhat responsible for the LSI's subcontracting network. On the other hand, the LSI is more involved with influencing the requirements, defining the solution, and testing that solution than we have seen on other programs. This is not to say that the level of involvement or collaboration between the Army and the LSI is inherently improper, but it may have unintended consequences for oversight over the long term. The degree of the Army's collaboration with the LSI in the FCS program, and the possible risks this poses, can be illustrated in the following areas:

Requirements. The Army initially established the operational requirements for FCS. Based on those requirements, the Army and the LSI are collaboratively refining the FCS system-of-systems requirements and system-level requirements (or system specifications). This refinement process has also resulted in changes and clarifications to the FCS operational requirements.
The collaboration allows both parties to agree on and refine requirements that they believe are feasible, based on system-of-systems requirements analysis conducted by the LSI and its subcontractors. Subsequently, the Army and the LSI can reach agreement on what requirements are appropriate to achieve the FCS capability within cost and schedule goals. For example, the Army and the LSI recently collaborated on the feasibility of the manned ground vehicle weight requirement. As a result of this collaboration, the Army decided to trade off the original air transport requirement that FCS manned ground vehicles weigh no more than 24 tons, because at that weight they did not have enough armor to meet the survivability requirement. The Army and the LSI again collaborated, with the Army ultimately deciding that the vehicle weight requirement be allowed to grow to as much as 29 tons to provide the needed armor. This change was significant because the FCS vehicles will now have to be transported by a larger aircraft, the C-17, rather than by the C-130 transporter. Part of the reason for the change was that, according to program officials, an advanced armor being developed by the Army did not prove as effective as expected within desired weight parameters. Several other key technologies are still immature, and to the extent they do not perform as expected, requirements could continue to be changed to match what is technically possible. This could help ensure that FCS development can continue but may produce less value in terms of capability for the investment.

Subcontract Selections. The Army and the LSI collaborate on subcontract selection decisions in contracting tiers below the prime contractor level. Subcontract selections at these levels have normally been made by the contractors without much government involvement. Army officials participated in the selection process for the one-team subcontracts awarded by the LSI to build and integrate major platforms. The Army also plays a role in the selection of lower-tier subcontractors. For example, the Army participated in the selection of a subcontractor to build the Active Protection System to protect vehicles from rocket-propelled grenades. This is a fourth-tier subcontractor. Although the Army is involved with the selections, the subcontracts are awarded by the LSI or other lower-tier contractors, so traditional government bid protest remedies are not available to the losing contractors, as with any procurement between private entities. To the extent that a subcontractor selected with the Army's involvement underperforms, the Army may bear some responsibility for the long-term consequences of that performance.

Test and Evaluation. The LSI has a lead role in developmental testing and verification of technical requirements throughout FCS development. For the FCS program, testing and evaluation of system prototypes will be managed through a combined test organization co-led by the LSI and the Army and made up of representatives of the LSI, the Army Test and Evaluation Command, and the Army's FCS program management office. In its role co-leading the test organization, the LSI will coordinate and perform a number of activities to ensure FCS performance is effectively and efficiently achieved. Building and testing prototypes is funded through the LSI contract, and the LSI will recommend how many and what types of prototypes will be fabricated.
Typically, the Army test command conducts and/or monitors system development tests and conducts operational tests of systems to provide an objective, performance-based evaluation of system capabilities against expectations in weapons programs. Its independent role is an important source of information on how well a program is progressing. In the FCS situation, the Army test command is in the position of relying on the LSI to plan for and conduct sufficient developmental testing—as well as proper corrective actions for identified issues—which is an important precursor to a successful operational test program. This has led to concerns among members of the Army test community about their ability to conduct sufficient independent testing while having to work so closely with the LSI. It also raises the question of whether the LSI is too involved with testing its own solution.

Involvement in Production. According to the FCS program manager, the Army plans to contract with Boeing during fiscal year 2008 for the initial production of FCS capabilities to be spun out to the current forces and for the early production of the FCS non-line-of-sight cannon. The current LSI development contract for the core FCS systems extends almost 2 years beyond the FCS initial production decision. The Army does not expect that the initial brigades outfitted by FCS will meet the upper range of its requirements and has made the LSI responsible for planning future FCS enhancements in the production phase. The LSI is also responsible for defining and maintaining an FCS growth strategy for integrating new technologies into brigade combat teams. This role keeps the LSI involved in the FCS program in the production phase and could make the LSI indispensable to the Army.

OSD is in a position to provide the arms-length oversight that can counterbalance some of the potential risks associated with the Army's level of involvement with both the FCS program and the LSI. Thus far, OSD has not played an active oversight role but rather has allowed FCS to proceed according to the Army's plans. It has passed on opportunities to assert its own positions on knowledge-based acquisition and cost estimates. In response to a statutory requirement, OSD has committed itself to a formal decision review of the program following its preliminary design review in 2009.

In August 2004, the Institute for Defense Analyses expressed concerns that the collaborative arrangement between the Army and the LSI created an inherent tension between the roles of Army participants as both teammates and customer representatives. The Institute expressed the need for a corporate perspective on the FCS program on behalf of the Army, so that an independent eye could be put toward cost, schedule, and performance issues. This may be a difficult principle for the Army to put into practice. The FCS program is nearly synonymous with the Army's future forces and necessarily requires the commitment and involvement of Army leadership. FCS represents the bulk of the Army's investment portfolio. Additionally, because FCS is made up of several programs, each large enough to have been an individual acquisition in and of itself, the granularity of oversight that might otherwise have been exercised over those programs is reduced. Major defense acquisition programs have certain reporting requirements under law that provide information to decision-makers about those programs.
The programs within FCS are not designated separately from FCS, so the reporting requirements for them are not the same as if they were separately designated. Since FCS generally meets those reporting requirements at the system-of-systems level, the granularity of reporting on individual systems within FCS is less defined.

OSD can help provide the corporate perspective on FCS through its oversight role. To date, OSD has kept informed of the program and reviews the program annually. However, it held only one corporate-level decision meeting on FCS, at which it approved the program to begin despite the program's being at odds with DOD's own standards for program initiation. Although OSD has remained involved in the program, it has thus far largely accepted the program as defined by the Army. Specifically, in May 2003, the Under Secretary of Defense (Acquisition, Technology, and Logistics) approved the FCS program to begin the system development and demonstration phase, referred to as the milestone B decision. It is DOD policy for programs to have mature technologies at that point and for programs to be evolutionary in nature—that is, an incremental improvement over existing capabilities. FCS was neither, as all of the program's 49 critical technologies were immature and the program was a revolutionary departure from existing Army capabilities. Instead, the Army is following its own, lower standard for technology maturity—achievement by the critical design review in 2011—over 7 years later than called for by DOD policy. Upon making that decision, the Under Secretary recognized the FCS program's immaturity and stated that there would be a milestone B update review 18 months later. This was to be a decision-making review for which the Under Secretary had listed several action items that the FCS program had to complete in order to continue. However, this review never occurred, and the FCS program continued as originally planned. OSD has not since revisited its decision to approve the program. Since that time, program costs and schedule have roughly doubled. Accordingly, last year we recommended that OSD hold a decision-level meeting. However, while OSD stated that it would have a Defense Acquisition Board review, it would not commit to making it a milestone decision review. It had not planned another decision meeting until the FCS production decision, referred to as milestone C. This would have been too late to have any material effect on the course of the program, short of cancellation, which is extremely rare at that point in a program. Subsequently, Congress intervened and required that OSD hold the formal decision meeting, currently scheduled for 2009. DOD has since proposed a serious approach to making that decision, which is encouraging from an oversight perspective.

Recognition and reporting of cost growth is another area in which OSD has deferred to the Army. The Army has recently restructured the FCS program to reduce the number of systems and reduce planned production rates to stay within expected funding levels. This will mark the second restructuring of the program in 4 years, during which program investment costs have increased from $77.2 billion in constant 2003 dollars to $119.2 billion in 2005, according to Army estimates, and again to at least $150.5 billion in 2006, according to an independent cost estimate. The Army estimates the cost of the recently restructured program to be slightly different from its 2005 estimate.
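As a rough check, and treating these estimates as directly comparable (strictly, they are in different dollar bases and come from different estimators, so the ratios are only indicative), the growth implied by the figures above is:

\[
\frac{119.2 - 77.2}{77.2} \approx 0.54 \quad \text{(about 54 percent, 2003 Army estimate to 2005 Army estimate)}
\]

\[
\frac{150.5 - 77.2}{77.2} \approx 0.95 \quad \text{(about 95 percent, 2003 Army estimate to 2006 independent estimate)}
\]

These ratios are computed only from the figures quoted above; they will not match the 76 percent increase cited below, which the report bases solely on Army estimates.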
The cost increases that have occurred since 2003 have largely been determined by the Army and OSD to be changes in scope, a distinction that is important for cost reporting purposes. As we have previously reported, DOD has allowed unit cost increases associated with quantity reductions or increases in capabilities to be excluded from a determination of a Nunn-McCurdy breach. DOD refers to these as programmatic adjustments and has concluded that nearly all of FCS's 76 percent cost increase—based solely on Army estimates—falls in this category. As a result, the Secretary of Defense has not had to carry out an assessment of the program or make a certification to Congress. Such an assessment and certification of FCS would have had value from an oversight perspective. A recent decision not to use an independent cost estimate may have had a similar effect on cost reporting. In May 2006, the OSD Cost Analysis Improvement Group submitted an independent cost estimate that showed its estimate of FCS investment costs to be 24-43 percent higher than the Army estimate prepared by the FCS program office. OSD did not adopt this estimate. While OSD is not obligated to adopt its independent estimates, previous experience has shown these estimates to be more accurate than the typically optimistic service estimates, and this one could have become an additional factor to consider in a Nunn-McCurdy determination.

The Army has structured the FCS contract consistent with its desire to create incentives for development efforts and to make such efforts financially rewarding for the LSI. In general, contracts are limited in that they cannot guarantee a successful outcome. This is true for the FCS contract, and specific aspects of the contract could make it even more difficult to tie the LSI's performance to the actual outcomes of the development effort. Key demonstrations of the actual capabilities of FCS systems will take place after the LSI has been able to recoup over 80 percent of its costs and has had the opportunity to earn most of its fees. The Army shares responsibility with the LSI for making some key decisions, and to some extent the Army's performance may affect the performance of the LSI. As with many cost-reimbursable research and development contracts, the LSI is responsible for putting forth its best effort on the development of the FCS capability. If, given that effort, the FCS capability falls short of needs, the LSI is not responsible and is still entitled to have its costs reimbursed and may earn its full fee.

The current contract for completing FCS's system development and demonstration phase provides a relatively high level of compensation in terms of total dollars, fee, and price of labor. The definitized contract between the Army and the LSI is a cost-reimbursable contract valued at $17.5 billion, composed of $15.2 billion in cost and up to a 15-percent fee of $2.3 billion. The remaining costs and fees from the earlier other transaction agreement were separated from the current FAR-based contract that was definitized in March 2006. The current contract period, which includes both the remaining work from the other transaction agreement and the definitized action, effectively runs from September 2005 through the first quarter of fiscal year 2015. Under the FCS contract, the LSI is required to put forth its best efforts to ensure a successful system. The Army will reimburse the LSI's allowable costs and reward the contractor with profit in the form of a fixed and an incentive fee for its efforts.
The fixed fee is paid annually, and the incentive fee is earned incrementally based on the LSI's demonstrated achievement of established performance, cost, and schedule criteria that are associated with program events. The total fee of 15 percent (which includes the potential incentive fees) is based on the total value of the contract as estimated at contract inception. However, the benefit to the LSI is very favorable when considering the cost of the work the LSI actually performs versus the amount that it subcontracts out to other firms. On FCS, the LSI will actually perform about $8.7 billion worth of the work, combining the costs under the previous other transaction agreement and the subsequent FAR-based contract. Using that as a base, the potential fee of $2.7 billion roughly amounts to a 30 percent profit on the work the LSI does itself. According to an analysis conducted within the Office of the Secretary of Defense, this is a relatively high ratio of profit to value of work performed when compared with other large development programs.

As with most cost-reimbursable contracts, the reimbursable costs of the prime contractor include its own costs and the costs and fees of lower-tier subcontractors. The prime's fee is separate from its reimbursable costs. For example, if a company is awarded a prime development contract for $300 million, that figure includes both the costs and the fees of the contractor's subcontracts. The prime contractor's fees are calculated on the $300 million cost figure included in the contract but are not allowed to go up if the contract costs increase. Accordingly, if the prime contractor then awards a subcontract for $100 million of costs and pays the subcontractor a fee of $15 million, the full $115 million paid to the subcontractor is part of the $300 million of the prime contractor's estimated reimbursable costs. The prime contractor is entitled to be reimbursed for the full $300 million in costs from the government plus be paid any fee it has earned. The result of the FCS LSI arrangement is an additional layer of subcontractors and associated costs. Thus, the costs and fees of all the prime item developers and their subcontractors are included in the $15.2 billion in costs reimbursable to the LSI under its contract with the Army. The LSI's potential $2.3 billion fee is calculated based on these costs, as with a typical prime contract; this fee arithmetic is restated compactly below.

Based on data provided by FCS program officials, the cost of LSI personnel is high relative to their government counterparts. The Army is paying the average LSI full-time equivalent about 25 percent more than the average cost of a federal employee in the senior executive service. These costs include salary, benefits, and other costs of maintaining an employee on the program. We have recently reported that contractor personnel also cost the Missile Defense Agency about 25 percent more than their government counterparts. However, the comparison data for missile defense personnel are based on all program personnel, not just the members of the more highly compensated senior executive service.

Under the terms of the FCS contract, the LSI can earn over 80 percent of its $2.3 billion fee by the time the program's critical design review is completed in 2011, and roughly 80 percent of contract costs will have been paid out by the Army by that point. Yet the actual demonstration of individual FCS prototypes and of the system-of-systems will take place after the design review.
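Restating the fee arithmetic above compactly (a sketch using the report's rounded figures, so the ratios are approximate; the $185 million figure for the prime's own reimbursable costs in the pass-through example is implied by subtraction rather than stated):

\[
\frac{\$2.3\ \text{billion fee}}{\$15.2\ \text{billion reimbursable cost}} \approx 0.15 \quad \text{(the 15-percent fee on total contract value)}
\]

\[
\frac{\$2.7\ \text{billion combined potential fee}}{\$8.7\ \text{billion of self-performed work}} \approx 0.31 \quad \text{(roughly 30 percent on the LSI's own work)}
\]

\[
\underbrace{\$100\text{M} + \$15\text{M}}_{\text{subcontract cost plus fee}} + \underbrace{\$185\text{M}}_{\text{prime's other reimbursable costs}} = \$300\text{M} \quad \text{(the base on which the prime's fee is set)}
\]

The third line illustrates why layering an LSI over the prime-item developers enlarges the base on which the LSI's fee is computed: the subcontractors' own costs and fees become part of the LSI's reimbursable costs.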
Our work on past weapon system programs shows that most cost growth—symptomatic of problems—occurs after the critical design review. The fee the LSI can earn under the FCS contract is divided between a fixed fee of $1.13 billion that will be paid in annual installments and an incentive fee of $1.14 billion that, according to a program official, can be earned on an incremental basis as the LSI accomplishes certain performance, cost, and schedule criteria associated with each of nine key program events. Thus, it can earn portions of its incentive fees prior to the occurrence of an event. Typically, incentive fees for weapon acquisition programs are based largely on how well the contractor achieves cost targets, but the LSI is eligible to receive a minimum of 50 percent of its available incentive fee based on performance criteria, not cost. Additionally, the contract provides for rolling over any unearned incentive fees to subsequent events. This means that if work under a fee event is delayed, the Army can decide to delay the associated fee as well and pay it when the work does get done. To the extent that the contractor is responsible for the delay, rollover can allow the contractor a second chance to recoup performance fee that it did not perform well enough to earn against the criteria at the original event, although it will not recoup the portion of the fee associated with schedule performance. A high-level program official did tell us that the Army plans to allow rollover at only one program event, if the LSI does not earn its full fee at that event. Previous GAO work on fees highlighted the use of rollover as an indication that a program's fee structure lacks the appropriate incentives, transparency, and accountability for an effective pay-for-performance system. The nine fee events used to evaluate the performance of the LSI, along with the fixed and incentive fees that can be earned, are listed in the table below.

To date, the LSI has completed one incentive event under the FAR contract and received 100 percent of the available incentive fee for its efforts. By the time the Army completes the critical design review in 2011, the LSI could earn over 80 percent of its incentive fee and over 80 percent of its total fee. The critical design review is important because our work has shown that by this point in time, a weapon system's design should be stable enough to release 90 percent of engineering drawings for manufacturing. This level of knowledge demonstrates that the design is stable and capable of meeting performance requirements. It is the point at which managers of a program can determine whether or not to build production-representative prototypes to demonstrate the actual performance of the design. We have found that most cost growth on weapon system development programs occurs after the critical design review. As shown in figure 2, 26 major programs that have completed development experienced about 28 percent cost growth, with almost 20 percent occurring after the critical design review. This pattern of cost growth occurs because most programs hold the critical design review before the design is stable. Subsequent building and testing of prototypes has led to the discovery of problems that are costly to fix in the late stages of development. We have already reported that the critical design review for the FCS program will occur before the program has attained a sufficient level of knowledge to ensure that technologies are mature.
Moreover, the Army does not plan to build production-representative prototypes for testing, relying instead on less mature prototypes and simulations. This sequence of events sets the stage for much discovery about FCS's actual performance and potential problems after the design review and after most of the fee can be paid to the LSI.

For several reasons, it will be difficult to connect the LSI's performance on the contract with the success of the program. The contract itself, like those for other weapon system developments, does not insure the Army against an unsuccessful outcome. While the Army can gauge the progress under the contract, the LSI is responsible for providing best efforts, not successful outcomes. The criteria for fee events are not directly related to achievement of total program outcomes, and the partner-like involvement with the LSI creates a situation in which the Army's performance can affect the LSI's performance.

The FCS contract is a cost-reimbursement research and development contract. In this respect, it is no different from most contracts to develop weapon systems. Essentially, under a research and development contract, the contractor, or the LSI in the case of FCS, is required to provide its best efforts at developing a capability or weapon system for the Army but is not responsible for actually producing the capability. Best efforts are measured by the inputs the contractor puts toward development of the system. Specifically, it must put the resources and processes in place to demonstrate its best efforts at developing the Army's desired capability. If the weapon systems, individually or collectively, fail to provide that capability, the LSI is not responsible as long as it has put forth best effort.

The contract fee events reflect the best-effort nature of the LSI's performance and do not require the successful demonstration of specific program knowledge or outcomes. For example, the criteria for the most recent incentive fee event (which was valued at a total of about $100 million) included such items as an updated force effectiveness analysis, the update and approval of program technical performance measures, and the completion of certain requirements and planning products. However, the incentive fee event criteria do not specify what is expected in terms of the effectiveness analysis results, the current status of the technical performance measures, or when and how the requirements process should be completed. Army program officials point out that this fee structure is meant to create incentives for the LSI to focus on putting processes in place to ensure successful development of the system. They also note that in some past programs, contractors had devoted inadequate resources to such activities. As noted in previous GAO work and in NASA contracting guidance for major system acquisitions, input factors such as those used as criteria for the fee events in FCS are valuable, but they do not provide indications of success relative to the desired end result of the program.

Because of its close involvement with the LSI, the Army has to make judgments about which contract outcomes and changes it is responsible for versus the LSI. The Army has already made judgments like these. When the FCS program was restructured in 2004, the cost estimate increased and the program schedule lengthened significantly as the Army changed the scope of the program by increasing requirements and adding deferred systems to the contract.
The Army attributed the changes in cost and schedule to the changes in scope and took responsibility for them, absolving the LSI of responsibility. Evaluated against the revised cost and schedule estimates, the LSI was awarded the full incentive fee at the next program evaluation event. Such adjustments in the LSI's contractual responsibilities are possible in the future as well, because the criteria for each fee event are not set until the year the event occurs, and payment of the fee associated with each event is made incrementally based on accomplishment of specific criteria for that event. This could allow the Army to adjust the fee criteria based on the status of the program at the time. Thus, if the LSI and the Army determine that a certain segment of work due to be completed by the time of an event cannot be completed, the criteria for assessing that segment can be shifted out of that event. This occurred in the most recent program event, where the scope of work associated with approximately $105,000 in fee was shifted to the subsequent fee event. The Army and the LSI decided this was necessary because accomplishment of the criteria associated with that fee was better suited to the next fiscal year. The Army's own performance may be a factor in these decisions. For example, the Army is responsible for maturing some of the key technologies the LSI will need to integrate into the FCS systems. If these technologies do not succeed, then the expectations of the LSI may have to be adjusted accordingly. The decision to increase the weight requirement for the manned ground vehicles is illustrative. Part of the reason for the decision was that an advanced, lightweight armor the Army was developing outside the FCS contract was not performing as expected. While the decision affected the vehicle design, the LSI was not responsible for development of the armor technology.

Evaluating the use of the LSI on FCS involves consideration of several intertwined factors. Some, like the best-efforts provisions of a cost-reimbursable research and development contract, are not unique to the LSI or to FCS. Other factors differ not so much in nature as in degree from other programs. For example, FCS is not the first system-of-systems program DOD has proposed, but it is arguably the most complex. FCS is not the first program to proceed with immature technologies, but it has more immature technologies than any other program. FCS is not the first program to use an LSI, but the extent of the partner-like relationship between the Army and the LSI breaks new ground. Collectively, these factors make the LSI arrangement in the FCS context unique.

We have reported the great costs and risks DOD has accepted by committing to FCS investments. We have expressed concern that the FCS program moved forward with insufficient knowledge and, therefore, an insufficient business case. That aside, however, if one accepts the FCS program for what it is and where it is in the development cycle, the Army has set up a contractual relationship that is both consistent with its vision for FCS and candid with respect to its workforce limitations. The Army has been thoughtful about what it is trying to accomplish collaboratively with the LSI and has been working hard to make progress, including facing up to difficult tradeoffs. On the other hand, the limits of the contractual arrangements must also be recognized.
Given the unprecedented challenge FCS represents, it is unrealistic to expect that any contracting approach alone could assure a successful outcome. Ultimately, the risks to achieving successful outcomes will be borne by the government. The contractual arrangements are not a substitute for having the high level of knowledge that a sound business case requires. The Army has shown a high tolerance for accepting risk and responsibility on this program. In addition to accepting high technical risk, the Army has accepted responsibility for lowering the performance of some individual systems, deleting some systems and adding others, reducing quantities, and increasing costs and schedules. The Army has determined the bulk of cost and schedule changes since 2003 to be programmatic or scope-related. This determination has had two effects. First, the changes became the responsibility of the Army, entitling the LSI to earn full fee thus far. Second, the changes are excluded from a determination of a Nunn-McCurdy breach and its reporting and certification requirements. Over time, the Army runs the risk of becoming increasingly vested as it makes these and other decisions and less able to change course. Yet the government must safeguard its ability to change course in the future as demonstrated knowledge replaces projections.

The foregoing underscores the important role of OSD in providing oversight of the FCS program and holding the program accountable to its own policies. While the Army works to manage the program, it is important that OSD hold the program accountable to the best-practice standards embedded in its policies. The go/no-go decision it will hold in 2009 provides an opportunity for OSD to do so. The use of an LSI on FCS also needs to be seen as more significant than a contracting arrangement for a single program. At the very least, a proposal to use an LSI approach on any new program should be seen as a risk at the outset, not because it is conceptually flawed, but because it indicates the government may be pursuing a solution that it does not have the capacity to manage. Such solutions ought not to be accepted as inevitable or unavoidable. Instead, they require additional scrutiny before they are approved and increased oversight if they are approved.

We recommend that the Secretary of Defense:

● reassess OSD's approach to overseeing the FCS program, including asserting its own policy-based markers for progress, particularly in the areas of cost, technology maturity, design maturity, and production maturity;
● ensure that there is the best link possible between the fee events in the FCS contract and actual FCS demonstrations;
● review major FCS program changes to ensure that determinations for the government to accept changes as being programmatic or scope-related in nature are carefully scrutinized; and
● assess whether the experience of the LSI on FCS has broader implications for acquisition management, such as the ability of the DOD workforce to manage a system-of-systems acquisition.

DOD concurred with our recommendations. DOD stated that it was updating its acquisition policy to address markers for progress in a number of areas, including cost, technology maturity, design maturity, and production maturity. DOD agreed to use a variety of technical assessments to inform the Defense Acquisition Board on the FCS program's progress against its policy-based markers.
It is important that the Department be as specific as possible, and consistent with its own acquisition policy, in setting expectations that the FCS program must meet. The Department also agreed to review the FCS award fee plan, to continue scrutinizing FCS program changes, and to report accurately against the program baseline. DOD noted that the FCS program scope has been expanded to add capability and to meet affordability constraints. In our view, some of the changes in scope were also made to correct shortcomings in the original acquisition strategy. It is important for DOD to be able to make such distinctions for reporting purposes. In concurring that the Secretary of Defense assess whether the experience of the LSI on FCS has broader implications for acquisition management, DOD stated that its acquisition policy is being updated to better manage and control system and system-of-systems acquisitions. In addition to exploring how to improve the management of systems-of-systems and LSIs, it is important for DOD to look at more strategic questions, such as whether and under what circumstances these approaches should be taken. For example, are systems-of-systems too large a scope to manage and report as a single acquisition program? Is using an LSI preferable to getting a better match between the acquisition programs being conceived and the acquisition workforce DOD has to manage them? Should DOD be looking at reducing the scope of programs, increasing the capability of its own workforce, or both to achieve this match?

DOD also stated that it considers the business relationship for the FCS development contract to be typical of a prime contract for a major system because the FCS contractor performs a substantial portion of the development work for the program. As there is no universally accepted definition of an LSI, this distinction may be more a matter of opinion than fact. In our opinion, the role played by the FCS LSI is not typical of a DOD contractor. Two characteristics, in our view, distinguish an LSI from a traditional contractor. First, the integrator is managing across what would traditionally have been program lines, versus subsystems within a program. Second, in so managing, the integrator is acting on behalf of, and in the interests of, the government. The Army was specific about needing a different, partner-like contracting arrangement like this when it began the FCS program. We also note that while the FCS LSI is performing substantive work on software systems, its portion of total work is low relative to that of major prime contractors elsewhere in DOD, and it is not directly involved in the development of any hardware for the FCS system-of-systems.

Finally, the Department noted that the role of the FCS prime contractor in requirements determination is not correctly framed in our draft report and that we confuse operational requirements with design specifications. We have characterized the LSI's role in this report as requirements refinement, rather than requirements definition. The requirements work being led by the FCS LSI is intended to complete the definition of the system-of-systems requirements and the system-level requirements. Two aspects of this role are, in our view, distinctive. First, because FCS is a system-of-systems, the functions performed are one level higher than they would have been for a typical single-system program.
Thus, while the Army determines the operational requirements for the FCS brigade combat team, the LSI is heavily involved with its subcontractors and the Army in setting the requirements for individual systems. On single-system programs, the Army would have set the requirements for the individual system. Second, the FCS solution is being formed concurrent with the development of individual technologies and the design of systems. Thus, as the limitations of technology and design are discovered, the LSI works with the Army to change or refine the requirements to conform to these limitations. While this process is not atypical of weapon system acquisitions, the vast scope and large technical leaps sought in the FCS program require greater involvement by the LSI in the refinement process. DOD's comments are reprinted in Appendix III. DOD also provided technical comments, which were addressed throughout the report as appropriate.

We are sending copies of this report to the Secretary of Defense; the Secretary of the Army; and the Director, Office of Management and Budget. Copies will also be made available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me on (202) 512-4841 if you or your staff have any questions concerning this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contributors to this report were Assistant Director William R. Graveline, Noah B. Bleicher, Lily J. Chin, Brendan S. Culley, Michael D. O'Neill, Kenneth E. Patton, and Thomas P. Twambly.

To identify the factors that led to the Army's decision to use an LSI for the FCS program and to determine the work performed by the LSI, we performed the following: We obtained and analyzed program documents, including the FCS system development and demonstration contract, the statement of work, the Army FCS acquisition strategy report, and the FCS operational requirements document, to gain an understanding of the terms and conditions of LSI responsibilities, the structure and processes of the program, and the goals of the Army. We reviewed FCS subcontracts to understand the nature of the FCS one-team and to ascertain how Federal Acquisition Regulation clauses were flowed down after the FCS contract conversion. We reviewed Army audits and Defense Contract Management Agency and Defense Contract Audit Agency reports, as well as GAO reports and testimonies. We reviewed Army and DOD guidance as well as the Federal Acquisition Regulation to understand government contracting standards and procedures used for the acquisition of major weapon systems. We interviewed FCS program officials from the Tank-Automotive and Armaments Command; DOD's Office of Acquisition, Technology, and Logistics; the Defense Contract Audit Agency; and the Defense Contract Management Agency to gain insight into why the Army chose an LSI business arrangement, how it is performing, and potential concerns for the future. We interviewed one-team partner officials from 12 of the major platform development offices to receive feedback from those implementing the program decisions made by the LSI. These discussions focused on the differences, benefits, and drawbacks of the LSI business approach when compared with more traditional prime contractor arrangements. As many of these firms have extensive experience in defense contracting, we also spoke about alternatives that the Army could have used for the FCS procurement.
Finally, these discussions allowed us to gain insight into the implementation and impact of the conversion from the other transaction agreement to a FAR contract. To evaluate the implications of the Army's relationship with the LSI, we performed the following: We reviewed and collected information from the acquisition strategy report, the statement of work, the FAR-based contract, documents related to source selection decisions, the integrated master schedule and documents related to the in-process preliminary design review, and the operational requirements document to identify the roles and responsibilities of the Army and the LSI. We interviewed the key Army and LSI program managers responsible for the overall FCS program and the Army and LSI leaders of the major integrated product development teams, and met with selected officials from the first tier of major subcontractors to assess communication and decision making within the program. To evaluate the Army's criteria for assessing the LSI's performance, we conducted the following: We reviewed the financial terms of the contract and the criteria for assessing the LSI's performance at program incentive events contained in the contract and integrated master plan, and conducted quantitative analyses of the contract's fixed and incentive fees. We reviewed the LSI's presentations for the Army's assessment and also interviewed the Army officials who were responsible for reviewing the LSI's performance. To evaluate the program's financial reporting systems, we interviewed officials from the Defense Contract Management Agency and the Defense Contract Audit Agency. To accomplish our work, we visited and interviewed officials from the Army Tank-Automotive and Armaments Command, Warren, Mich.; Army integrated product team leaders in Huntsville, Ala., Hazelwood, Mo., and Picatinny Arsenal, N.J.; and LSI officials in Hazelwood, Mo., and Huntington Beach, Calif. In addition, we interviewed 12 one team partners across the United States. We also interviewed officials from the Defense Contract Management Agency, the Defense Contract Audit Agency, and the Office of the Secretary of Defense's Cost Analysis Improvement Group. We conducted our review between May 2006 and June 2007 in accordance with generally accepted government auditing standards.

The Federal Acquisition Regulation (FAR) provides uniform policies and procedures for acquisitions by federal government executive agencies. Depending on the type of contract entered into, different FAR clauses and provisions are used to protect the government's interests and define the terms of the agreement. Likewise, contracting officers structure a contract appropriately depending on the products or services being procured. A multitude of FAR provisions and agency FAR supplement provisions give contracting officers a wide range of options to tailor government contracts to meet specific agency needs. While many FAR clauses are required to be incorporated in all contracts of a particular type, other provisions are only required to be included as applicable. The Army's original FCS Other Transaction Agreement was converted into a FAR-based cost-reimbursable research and development contract in 2006. According to the Army, the new FCS contract includes the FAR and Defense Federal Acquisition Regulation Supplement (DFARS) requirements appropriate for this type of procurement. While GAO confirmed the Army's analysis of the FAR-based contractual provisions, it did not conduct an independent detailed examination of every applicable clause in the contract.
However, GAO did confirm the inclusion of several FAR requirements that address areas of key concern.

Cost Accounting Standards (CAS) – Two FAR part 52 provisions pertaining to the use and administration of CAS have been included in the FCS contract.

Procurement Integrity Act (PIA) – The FCS contract includes the two FAR clauses required to address PIA concerns.

Truth in Negotiations Act (TINA) – TINA standards for cost and pricing data are addressed in three FAR part 52 provisions. Additional information regarding exceptions and requirements for cost and pricing data is included separately in the FCS contract.

Organizational Conflict of Interest (OCI) – Although the predecessor other transaction agreement contained an OCI clause that required certain safeguards be put into place if and when Boeing and SAIC competed for subcontracts, it did not preclude them from such competitions. The FCS FAR contract includes an OCI provision that precludes the Boeing/SAIC LSI team from competing for any FCS subcontract awards. Though FCS subcontractors may compete for additional FCS subcontracts, the OCI provision in the FCS contract requires that steps be taken to ensure an absence of any organizational conflicts of interest during subcontractor selection activities. Additionally, this clause provides instruction on how proprietary information should be protected. | The Army's Future Combat Systems (FCS) program features multiple new systems linked by a first-of-a-kind information network. The Army contracted with a lead systems integrator (LSI) for FCS that could serve in a more expansive role than a typical prime contractor would. In response to a congressional mandate, this report addresses (1) why the Army decided to employ an LSI for the FCS program; (2) the nature of the LSI's working relationship with the Army; and (3) how FCS contract fees, provisions, and incentives work. In conducting its work, GAO reviewed extensive program documentation and held discussions with key officials at DOD and throughout the FCS program. In 2003, the Army contracted with an LSI for FCS because of the program's ambitious goals and the Army's belief that it did not have the capacity to manage the program. The original time frame for FCS's development was shorter than for an individual weapon system program, let alone a complex system-of-systems program with a high number of immature technologies at program start. The Army realized that its compartmentalized workforce did not lend itself to the kind of crosscutting work that the FCS program would demand. The Army workforce also did not have the expertise needed to develop the FCS information network or enough people to support the program had it been organized into separate program offices. In contracting with the Boeing Company as LSI, the Army believed it found a management partner who could define and develop FCS and reach across the Army's organizations. Boeing subcontracted with another company, Science Applications International Corporation, to assist with its responsibilities as LSI. The working relationship between the LSI and the Army is complex. The LSI is a traditional contractor in terms of developing a product for its customer, the Army, but also serves like a partner to the Army in management of the FCS program. In its management role, the LSI makes decisions collaboratively with the Army. An advantage of this arrangement is that the LSI and Army can maintain flexibility when dealing with shifting priorities.
However, that relationship may pose significant risks to the Army's ability to provide oversight over the long term. The Office of the Secretary of Defense is in a position to provide this oversight but thus far has allowed the Army to depart significantly from best practices and the Office's own policy for weapon system acquisitions. For example, the Office of the Secretary of Defense has allowed the Army to use its own cost estimates rather than independent--and significantly higher--cost estimates when submitting budget requests. The Army's experience with the LSI on the FCS program may provide the Office of the Secretary of Defense insights on broader acquisition management issues. The Army has structured the FCS contract consistent with its desire to incentivize development efforts. The definitized cost-reimbursable research and development contract, valued at $17.5 billion, contains up to a 15 percent total fixed/incentive fee, or about $2.3 billion. As with many research and development contracts, the FCS contract obligates the contractor to put forth its best efforts, but does not assure successful outcomes. Assuming that critical design review is completed in 2011, the Army will have paid the LSI over 80 percent of the contract costs, plus possibly 80 percent of its fee, or profit. GAO has previously reported that most cost growth in DOD weapon system programs occurs after critical design review. Therefore, it is possible for the LSI to have garnered most of its payouts in costs and fees early next decade, even if, despite its best efforts, the FCS capability ends up falling far short of the Army's goals. The Army notes that its fee structure is intended to encourage good performance early in the program. |
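The fee figures above can be reconciled with a few lines of arithmetic. The sketch below reflects one reading of the report's numbers: it assumes the $17.5 billion contract value includes both cost and fee and that the 15 percent fee is computed on the cost base; the payout shares at critical design review are taken from the passage above.

```python
# Reconciling "up to a 15 percent total fixed/incentive fee" with "about
# $2.3 billion." Assumption: the fee rate applies to the cost base, and the
# $17.5 billion definitized value is cost plus fee.
TOTAL_VALUE_B = 17.5          # contract value, $ billions
FEE_RATE = 0.15               # maximum fixed/incentive fee rate

cost_base = TOTAL_VALUE_B / (1 + FEE_RATE)   # ~15.2
max_fee = FEE_RATE * cost_base               # ~2.28, i.e., "about $2.3 billion"

# The report states that by critical design review (assumed 2011) the Army
# will have paid over 80 percent of contract costs plus possibly 80 percent
# of the fee.
paid_by_cdr = 0.80 * (cost_base + max_fee)

print(f"cost base   ~ ${cost_base:.1f} billion")
print(f"maximum fee ~ ${max_fee:.1f} billion")
print(f"paid by CDR ~ ${paid_by_cdr:.1f} of ${TOTAL_VALUE_B} billion")
```

On this reading, most of the contract's dollars could be paid out before the phase in which, as GAO notes, most cost growth historically occurs.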
The Navy, with reported assets totaling $321 billion in fiscal year 2004, would be ranked among the largest corporations in the world if it were a private sector entity. According to the Navy, based upon the reported value of its assets, it would be ranked among the 15 largest corporations on the Fortune 500 list. Additionally, in fiscal year 2004 the Navy reported that its inventory was valued at almost $73 billion and that it held property, plant, and equipment with a reported value of almost $156 billion. Furthermore, the Navy reported for fiscal year 2004 that its operations involved total liabilities of $38 billion, that its operations had a net cost of $130 billion, and that it employed approximately 870,000 military and civilian personnel—including reserve components. The primary mission of the Navy is to control and maintain freedom of the seas; to support that mission, the Navy performs an assortment of interrelated and interdependent business functions, with service members and civilian personnel in geographically dispersed locations throughout the world. To support its military mission and perform its business functions, the Navy requested for fiscal year 2005 almost $3.5 billion for the operation, maintenance, and modernization of its business systems and related infrastructure—the most of all the DOD components—or about 27 percent of the total $13 billion DOD fiscal year 2005 business systems budget request. Of the 4,150 reported DOD business systems, the Navy holds the largest inventory of business systems—2,353 reported systems, or 57 percent of DOD's reported inventory. The Secretary of Defense recognized that the department's business operations and systems have not effectively worked together to provide reliable information to make the most effective business decisions. He challenged each military service to transform its business operations to support DOD's warfighting capabilities and initiated the Business Management Modernization Program (BMMP) in July 2001. Further, the Assistant Secretary of the Navy for Financial Management and Comptroller (Navy Comptroller) testified that transforming the Navy's business processes, while concurrently supporting the Global War on Terrorism, is a formidable but essential task. He stated that the goal of the transformation is to "establish a culture and sound business processes that produce high-quality financial information for decision making." One of the primary elements of the Navy's business transformation strategy is the Navy ERP. The need for business processes and systems transformation to provide management with timely information to make important business decisions is clear. However, none of the military services, including the Navy, have passed the scrutiny of an independent financial audit. Obtaining a clean (unqualified) financial audit opinion is a basic prescription for any well-managed organization, as recognized by the President's Management Agenda. For fiscal year 2004, the DOD Inspector General issued a disclaimer on the Navy's financial statements—the Navy's General Fund and Working Capital Fund—citing eight and six material weaknesses, respectively, in internal control, as well as noncompliance with the Federal Financial Management Improvement Act of 1996 (FFMIA). The inability to obtain a clean financial audit opinion is the result of weaknesses in the Navy's financial management and related business processes and systems.
Most importantly, the Navy's pervasive weaknesses have (1) resulted in a lack of reliable information to make sound decisions and report on the status of activities, including accountability of assets, through financial and other reports to Navy and DOD management and the Congress; (2) hindered its operational efficiency; (3) adversely affected mission performance; and (4) left the Navy and DOD vulnerable to fraud, waste, and abuse, as the following examples illustrate.

The Navy's lack of detailed cost information hinders its ability to monitor programs and analyze the cost of its activities. We reported that the Navy lacked the detailed cost and inventory data needed to assess its needs, evaluate spending patterns, and leverage its telecommunications buying power. As a result, at the sites we reviewed, the Navy paid for telecommunications services it no longer required, paid too much for services it used, and paid for potentially fraudulent or abusive long-distance charges. In one instance, we found that DOD paid over $5,000 in charges for one card that was used to place 189 calls in one 24-hour period from 12 different cities to 12 different countries.

Ineffective controls over Navy foreign military sales using blanket purchase orders placed classified and controlled spare parts at risk of being shipped to foreign countries that may not be eligible to receive them. For example, we identified instances in which Navy country managers (1) overrode the system to release classified parts under blanket purchase orders without filing required documentation justifying the release; and (2) substituted classified parts for parts ordered under blanket purchase orders, bypassing the control-edit function of the system designed to check a country's eligibility to receive the parts.

The Naval Inventory Control Point and its repair contractors have not followed DOD and Navy procedures intended to provide accountability for and visibility of inventory shipped to Navy repair contractors. Specifically, Navy repair contractors are not routinely acknowledging receipt of government-furnished material received from the Navy. A DOD procedure requires repair contractors to acknowledge receipt of government-furnished material that has been shipped to them from the Navy's supply system; however, Naval Inventory Control Point officials are not enforcing this requirement. By not requiring repair contractors to acknowledge receipt of government-furnished material, the Naval Inventory Control Point has also departed from the procedure to follow up with the contractor within 45 days when the contractor fails to confirm receipt of an item. Without material receipt notification, the Naval Inventory Control Point cannot be assured that its repair contractors have received the shipped material. This failure can impair the Navy's ability to account for shipments, leading to possible fraud, waste, or abuse.

A limited Naval Audit Service audit revealed that 53 of 118 erroneous payment transactions, valued at more than $990,000, occurred because Navy certifying officials did not ensure accurate information was submitted to the Defense Finance and Accounting Service (DFAS) prior to authorizing payment. In addition, certifying officials submitted invoices to DFAS authorizing payment more than once for the same transaction.
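The last example—invoices authorized for payment more than once—is the kind of control failure that lends itself to a simple automated screen. The sketch below is illustrative only; the record layout (vendor, invoice number, amount) is a hypothetical simplification, not DFAS's actual schema.

```python
from collections import defaultdict

def find_duplicate_invoices(payments):
    """Flag payment authorizations that share vendor, invoice number, and
    amount -- candidates for the duplicate payments described above."""
    groups = defaultdict(list)
    for p in payments:
        groups[(p["vendor"], p["invoice_no"], p["amount"])].append(p)
    return {key: recs for key, recs in groups.items() if len(recs) > 1}

payments = [
    {"vendor": "V001", "invoice_no": "A-17", "amount": 18500.00},
    {"vendor": "V001", "invoice_no": "A-17", "amount": 18500.00},  # resubmitted
    {"vendor": "V002", "invoice_no": "B-03", "amount": 7240.00},
]

for (vendor, invoice, amount), recs in find_duplicate_invoices(payments).items():
    print(f"invoice {invoice} from {vendor} authorized {len(recs)} times (${amount:,.2f})")
```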
Brief Overview of Navy ERP

To address the need for business operations reform, in fiscal year 1998 the Navy established an executive committee responsible for creating a "Revolution in Business Affairs" to begin looking at transforming business affairs and identifying areas of opportunity for change. This committee, in turn, set up a number of working groups, including one called the Commercial Business Practices (CBP) Working Group, which consisted of representatives from financial management organizations across the Navy. This working group recommended that the Navy use ERP as a foundation for change and identified various ERP initiatives that were already being developed or under consideration within the Navy. Ultimately, the Navy approved the continuation of four of these initiatives, using existing funds from each of the sponsors (i.e., commands) to test the feasibility of ERP solutions within the Navy. From 1998 to 2003, four different Navy commands began planning, developing, and implementing four separate ERP pilot programs to address specific business areas. A CBP Executive Steering Group was created in December 1998 to monitor the pilot activities. As the pilots progressed in their development and implementation, the Navy identified issues that had to be addressed at a higher level than the individual pilots, such as integration between the pilots as well as with other DOD systems, and decided that one program would provide a more enterprisewide solution for the Navy. In August 2002, the Assistant Secretary of the Navy for Research, Development, and Acquisition established a Navy-wide ERP program to "converge" the four ongoing pilots into a single program. This Navy-wide program is expected to replace all four pilots by fiscal year 2008 and to be "fully operational" by fiscal year 2011. The Navy estimates that the ERP will manage about 80 percent of the Navy's estimated appropriated funds—after excluding appropriated funds for the Marine Corps and military personnel and pay. Based on the Navy's fiscal years 2006 to 2011 defense planning budget, the Navy ERP will manage approximately $74 billion annually. According to a Navy ERP official, while the Navy ERP would account for the total appropriated amount, once transactions occur at the depots, such as when a work order is prepared for the repair of an airplane part, the respective systems at the depots will execute and maintain the detailed transactions. This accounts for about 2 percent, or approximately $1.6 billion, being executed and maintained in detail by the respective systems at the aviation and shipyard depots—not by the Navy ERP. The remaining 20 percent that the ERP will not manage comprises funds for the Navy Installations Command, field support activity, and others. Each of the Navy's four ERP pilot projects was managed and funded by a different major command within the Navy. The pilots, costing over $1 billion in total, were limited in scope and were not intended to provide corporate solutions to any of the Navy's long-standing financial and business management problems. The lack of centralized management oversight and control over all four pilots allowed the pilots to be developed independently. This resulted in four more DOD stovepiped systems that could not operate with each other, even though each carried out many of the same functions and was based on the same ERP commercial-off-the-shelf (COTS) software.
Moreover, due to the lack of high-level departmentwide oversight from the start, the pilots were not required to go through the same review process as other acquisition projects of similar magnitude. Four separate Navy organizations began their ERP pilot programs independently of each other, at different times, and with separate funding. All of the pilots implemented the same ERP COTS software, and each pilot was small in scale—relative to the entire Navy. For example, one of the pilots, SMART, was responsible for managing the inventory items and repair work associated with one type of engine, although the organization that implemented SMART—the Naval Supply Systems Command—managed the inventory for several types of engines. As of September 2004, the Navy estimated that the total investment in these four pilots was approximately $1 billion. Table 1 summarizes each of the pilots, the cognizant Navy organization, the business areas they address, and their reported costs through September 2004. Even after the pilots came under the purview of the CBP Executive Steering Group in December 1998, they continued to be funded and controlled by their respective organizations. We have previously reported that allowing systems to be funded and controlled by component organizations has led to the proliferation of DOD's business systems. These four pilots are prime examples. While an attempt was made to coordinate the pilots, ultimately each organization designed its ERP pilot to accommodate its specific business needs. The Navy recognized the need for a working group that would focus on integration issues among the pilots, especially because of the desire to eventually extend the pilot programs beyond the pilot organizations to the entire Navy. In this regard, the Navy established the Horizontal Integration Team in June 1999, consisting of representatives from all of the pilots, to address this matter. However, one Navy official described this team as more of a "loose confederation" that had limited authority. As a result, significant resources have been invested that have not and will not result in corporate solutions to any of the Navy's long-standing business and financial management problems. This is evident in the DOD Inspector General's audit reports on the Navy's financial statements discussed above. In addition to the lack of centralized funding and control, each of the pilots configured the software differently, which, according to Navy ERP program officials, caused integration and interoperability problems. While each pilot used the same COTS software package, the software offers a high degree of flexibility in how similar business functions can be processed by providing numerous configuration points. According to the Navy, over 2.4 million configuration points exist within the software. The pilots configured the software differently from each other to accommodate differences in how each wanted to manage its functional area of focus. These differences were allowed even though the pilots performed many of the same types of business functions, such as financial management. The configuration differences include the levels of complexity in workflow activities and the establishment of the organizational structure. For example, the primary work order managed by the NEMAIS pilot is an intricate ship repair job, with numerous tasks and workers at many levels. Other pilots had much simpler work order definitions, such as preparing a budget document or procuring a single part for an engine.
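The configuration problem described above can be made concrete. The toy sketch below uses invented settings and field names—the real software exposes over 2.4 million configuration points rather than one—to show how the same COTS work-order object, configured differently by two pilots, yields records one pilot cannot process.

```python
# Invented configuration points for two pilots. NEMAIS modeled intricate,
# multi-task ship repair jobs; SMART's work orders were much simpler.
NEMAIS_CONFIG = {"requires_task_breakdown": True}
SMART_CONFIG = {"requires_task_breakdown": False}

def make_work_order(config, description, tasks=None):
    order = {"description": description}
    if config["requires_task_breakdown"]:
        order["tasks"] = tasks or []   # multi-level job structure
    return order

def accepts(config, order):
    """A pilot configured without task breakdowns rejects orders that carry
    them, and vice versa."""
    return config["requires_task_breakdown"] == ("tasks" in order)

ship_repair = make_work_order(NEMAIS_CONFIG, "hull repair", tasks=["weld", "paint"])
print(accepts(NEMAIS_CONFIG, ship_repair))  # True
print(accepts(SMART_CONFIG, ship_repair))   # False: the record cannot cross pilots
```

Multiply one divergent setting by millions of configuration points and the integration and interoperability problems the program officials described follow directly.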
Because of the various inconsistencies in design and implementation, the pilots were stovepiped and could not operate with each other, even though they performed many of the same business functions. Table 2 illustrates the similar business functions that are performed by more than one pilot. By definition, an ERP solution should integrate the financial and business operations of an organization. However, the lack of a coordinated effort among the pilots led to a duplication of efforts and problems in implementing many business functions and resulted in ERP solutions that carry out redundant functions in different ways from one another. The end result of all of the differences was a "system" that could not successfully process transactions associated with the normal Navy practices of moving ships and aircraft between fleets. Another configuration problem occurred because the pilots generally developed custom roles for system users. Problems arose after the systems began operating. Some roles did not have the correct transactions assigned, so users with those roles could not do their entire jobs correctly. Further, some roles assigned to individual users were inappropriate and violated the segregation-of-duties principle. The pilots experienced other difficulties with respect to controlling scope and performance schedules due to the lack of disciplined processes, such as requirements management. For example, the pilots did not identify in a disciplined manner the amount of work necessary to achieve the originally specified capabilities—even as the end of testing approached. There were repeated contract cost-growth adjustments, delays in delivery of many planned capabilities, and initial periods of system instability after the systems began operating. All of these problems are typical of the adverse effects associated with projects that have not effectively implemented disciplined processes. The Navy circumvented departmentwide policy by not designating the pilots as major automated information systems acquisition programs. DOD policy in effect at the time stipulated that a system acquisition should be designated as a major program if the estimated cost of the system exceeds $32 million in a single year, $126 million in total program costs, or $378 million in total life-cycle costs, or if deemed of special interest by the DOD Chief Information Officer (CIO). According to the Naval Audit Service, all four of the pilots should have been designated as major programs based on their costs—which were estimated to be about $2.5 billion at the time—and their significance to the Navy's operations. More specifically, at the time of its review, SMART's total estimated cost for development, implementation, and sustainment was over $1.3 billion—far exceeding the $378 million life-cycle cost threshold. However, because Navy management considered each of its ERP programs to be "pilots," it did not designate the efforts as major automated information systems acquisitions, thereby limiting departmental oversight. Consistent with the Clinger-Cohen Act of 1996, DOD acquisition guidance requires that certain documentation be prepared at each milestone within the system life cycle. This documentation is intended to provide relevant information for management oversight and for making decisions as to whether the investment of resources is cost beneficial.
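The designation thresholds cited above are mechanical, which is what makes the "pilot" label notable. A minimal check, using the dollar thresholds from the DOD policy as reported here and SMART's reported life-cycle estimate:

```python
# Thresholds in $ millions, from the DOD policy described above.
def is_major_program(single_year_cost, total_program_cost, life_cycle_cost,
                     cio_special_interest=False):
    return (single_year_cost > 32
            or total_program_cost > 126
            or life_cycle_cost > 378
            or cio_special_interest)

# SMART alone: estimated life-cycle cost over $1.3 billion (1,300 million),
# far above the $378 million threshold.
print(is_major_program(single_year_cost=0, total_program_cost=0,
                       life_cycle_cost=1300))   # True
```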
The Naval Audit Service reported that a key missing document that should have been prepared for each of the pilots was a mission needs statement. A mission needs statement was required early in the acquisition process to describe the projected mission needs of the user in the context of the business need to be met. The mission needs statement should also address interoperability needs. As noted by the Naval Audit Service, the result of not designating the four ERP pilots as major programs was that program managers did not prepare and obtain approval of this required document before proceeding into the next acquisition phase. In addition, the pilots did not undergo mandatory integrated reviews that assess where to spend limited resources departmentwide. The DOD CIO is responsible for overseeing major automated information systems, and a program executive office is required to be dedicated to executive management and not have other command responsibilities. However, because the pilots were not designated major programs, oversight remained at the organizational level that funded the pilots (i.e., command level). Navy ERP officials stated that at the beginning of the pilots, investment authority was dispersed throughout the Navy and there was no established overall requirement within the Navy to address systems from a centralized Navy enterprise level. The Navy ERP is now designated a major program under the oversight of the DOD CIO. The problems identified in the failed implementation of the four pilots are indicative of a system program that did not adhere to disciplined processes. The successful development and implementation of systems is dependent on an organization's ability to effectively implement best practices, commonly referred to as disciplined processes, which are essential to reduce the risks associated with these projects to acceptable levels. However, the inability to effectively implement the disciplined processes necessary to reduce risks to acceptable levels does not mean that an entity cannot put in place a viable system that is capable of meeting its needs. Nevertheless, history shows that the failure to effectively implement disciplined processes and the metrics necessary to understand the effectiveness of the processes implemented increases the risk that a given system will not meet its cost, schedule, and performance objectives. In past reports we have highlighted the impact of not effectively implementing disciplined processes. These results are consistent with those experienced by the private sector. More specifically:

In April 2003, we reported that NASA had not implemented an effective requirements management process and that these requirements management problems adversely affected its testing activities. We also noted that because of the testing inadequacies, significant defects later surfaced in the production system.

In May 2004, we reported that NASA's new financial management system, which was fully deployed in June 2003 as called for in the project schedule, still did not address many of the agency's most challenging external reporting issues, such as problems related to property accounting and budgetary accounting. The system continues to be unable to produce reliable financial statements.

In May 2004, we reported that the Army's initial deployments for its Logistics Modernization Program (LMP) did not operate as intended and experienced significant operational difficulties.
In large part, these operational problems were due to the Army not effectively implementing the disciplined processes—in the areas of requirements management and testing—that are necessary to manage the development and implementation of such systems. Army program officials have acknowledged that the problems experienced in the initial deployment of LMP could be attributed to requirements and testing. Subsequently, in June 2005, we reported that the Army still had not put in place effective management controls and processes to help ensure that the problems identified since LMP became operational in July 2003 are resolved in an efficient and effective manner. The Army's inability to effectively implement disciplined processes provides it with little assurance that (1) the system problems experienced during the initial deployment that caused the delay of future deployments have been corrected and (2) LMP is capable of providing the promised system functionality. The failure to resolve these problems will continue to impede operations at Tobyhanna Army Depot, and future deployment locations can expect to experience similarly significant disruptions in their operations, as well as a system that is unable to produce reliable and accurate financial and logistics data.

We reported in February 2005 that DOD had not effectively managed important aspects of the requirements for the Defense Integrated Military Human Resources System, which is to be an integrated personnel and pay system standardized across all military components. For example, DOD had not obtained user acceptance of the detailed requirements, nor had it ensured that the detailed requirements were complete and understandable. Based on GAO's review of a random sample of the requirements documentation, about 77 percent of the detailed requirements were difficult to understand.

The problems experienced by DOD and other agencies are illustrative of the types of problems that can result when disciplined processes are not properly implemented. The four Navy pilots provide yet another example. As discussed previously, because the pilots were four stovepiped efforts lacking centralized management and oversight, the Navy had to start over when it decided to proceed with the current ERP effort—after investing about $1 billion. Figure 1 shows how organizations that do not effectively implement disciplined processes lose the productive benefits of their efforts as a project continues through its development and implementation cycle. Although undisciplined projects show a great deal of productive work at the beginning, the rework associated with defects begins to consume more and more resources. In response, processes are adopted in the hope of managing what later turns out to be, in reality, unproductive work. However, these processes are generally "too little, too late," and rework begins to consume more and more resources because adequate foundations for building the systems were not laid or were laid poorly. In essence, experience shows that projects that fail to implement disciplined processes at the beginning are forced to implement them later, when it takes more time and they are less effective. As can be seen in figure 1, a major consumer of project resources in undisciplined efforts is rework (also known as thrashing). Rework occurs when the original work has defects or is no longer needed because of changes in project direction.
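A toy model illustrates why rework dominates undisciplined projects. The escalation multipliers below are assumptions, chosen to be consistent with the industry range discussed in the next paragraph (a defect fixed during testing costs roughly 10 to 100 times more than one fixed during requirements):

```python
# Assumed relative cost of fixing one defect, by the phase in which it is found.
FIX_COST_BY_PHASE = {"requirements": 1, "design": 3, "test": 30}

def rework_cost(defects_found):
    """defects_found maps phase -> number of defects fixed in that phase."""
    return sum(FIX_COST_BY_PHASE[phase] * n for phase, n in defects_found.items())

# Same 100 defects; only the phase of discovery differs.
disciplined = {"requirements": 80, "design": 15, "test": 5}
undisciplined = {"requirements": 5, "design": 15, "test": 80}

print(rework_cost(disciplined))    # 275 cost units
print(rework_cost(undisciplined))  # 2450 cost units -- roughly 9 times the rework
```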
Disciplined organizations focus their efforts on reducing the amount of rework because it is expensive. Studies have shown that fixing a defect during testing is anywhere from 10 to 100 times more expensive than fixing it during the design or requirements phase. To date, Navy ERP management has followed a comprehensive and disciplined requirements management process, and has leveraged lessons learned from the implementation of the four ERP pilot programs to avoid repeating past mistakes. Assuming that the project continues to effectively implement the processes it has adopted, the planned functionality of the Navy ERP has the potential to address at least some of the weaknesses identified in the Navy's financial improvement plan. However, the project faces numerous challenges and risks. Since the program is still in a relatively early phase—it will not be fully operational until fiscal year 2011, at a currently estimated cost of $800 million—the project team must be continually vigilant and held accountable for ensuring that disciplined processes are followed in all phases to help achieve overall success. For example, the project management office will need to ensure that it effectively oversees the challenges and risks associated with developing interfaces with 44 Navy and DOD systems and with data conversion—areas that were troublesome in other DOD efforts we have audited. Considering the project's relatively early phase and DOD's history of not implementing systems on time and within budget, the projected schedule and cost estimates are subject to change—and very likely will change. Furthermore, a far broader challenge, which lies outside the immediate control of the Navy ERP program office, is that the ERP is proceeding without DOD having clearly defined its BEA. As we have recently reported, DOD's BEA still lacks many of the key elements of a well-defined architecture. The real value of a BEA is that it provides the necessary content for guiding and constraining system investments in a way that promotes interoperability and minimizes overlap and duplication. Without it, rework will likely be needed to achieve those outcomes. Although the four pilot projects were under the control of different entities and had different functional focuses, a pattern of issues emerged that the Navy recognized as being critical for effective development of future projects. The Navy determined that the pilots would not meet its overall requirements and concluded that the best alternative was to develop a new ERP system—under the leadership of a central program office—and use efforts from the pilots as starting points by reviewing their functionality and lessons learned, eliminating redundancies, and developing new functionality that the pilots had not addressed. The lessons learned from the pilots cover technical, organizational, and managerial issues and reinforce the Navy's belief that it must effectively implement the processes necessary to oversee and manage the ERP effort. Navy ERP project management recognizes that failure to do so would, in all likelihood, result in this ERP effort experiencing the same problems as those that led to the failure of the four earlier pilots. One of the most important lessons learned by Navy ERP project management from the earlier experiences is the need to follow disciplined processes to identify and manage requirements.
As discussed later in this report, the ERP program is following best practices in managing the system's requirements. A key part of requirements identification is to have system users involved in the process to ensure that the system will meet their needs. Additionally, the inclusion of system users in the detailed requirements development process creates a sense of ownership in the system and prepares users for upcoming changes to the way they conduct their business. Moreover, the experience from the pilots demonstrated that the working-level reviews must be cross-functional. For example, the end-to-end process walkthroughs, discussed later, reinforce the overall business effect of a transaction throughout the enterprise and help to avoid a stovepiped view of an entity's operations. Another lesson learned is the need to adapt business processes to conform to the types of business practices on which the standard COTS packages are based, along with the associated transaction formats. Just the opposite approach was pursued for the pilots, during which the Navy customized many portions of the COTS software to match the existing business process environment. In contrast, the current Navy ERP management is restricting customization of the core COTS software, allowing modifications only where legal or regulatory demands require them. Obviously, minimizing the amount of customization reduces the complexity and costs of development. Perhaps more importantly, holding customization to a minimum helps an entity take advantage of two valuable benefits of COTS software. First, COTS software provides a mature, industry-proven "best practices" approach to doing business. The core elements of work-flow management, logistics, financial management, and other components have been optimized for efficiency and standardization in private industry over many years. According to program officials, the Navy ERP will adhere to the fundamental concepts of using a COTS package and thus take advantage of this efficiency benefit by modifying the Navy's business practices to match the COTS software, rather than vice versa as was done in the four pilots. Having the software dictate processes is a difficult transition for users to accept, and Navy ERP officials recognize the challenge in obtaining buy-in from system users. To meet this challenge, they are getting users involved early in requirements definition, planning for extensive training, and ensuring that senior-level leadership emphasizes the importance of process change, so the entire chain of command understands and accepts its role in the new environment. In effect, the Navy is taking the adopted COTS process and then presenting it to the users. As a result, the Navy is attempting to limit the amount of customization of the software package. Second, if the standard COTS components are adopted, the maintenance burden of upgrades remains with the COTS vendor. Finally, the Navy learned from the pilots that it needed to manage its system integrators better. The ERP officials also found that they could significantly reduce their risk by using the implementation methodology of the COTS vendor rather than the specific approach of a system integrator. Each of the pilots had a separate system integrator with its own particular methodology for implementing the COTS software.
According to Navy ERP officials, using the implementation methodology and tool set of the COTS vendor maintains a closer link to the underlying software and provides more robust requirements management by easily linking requirements from the highest level down to the COTS transaction level. Navy ERP is focused on staying as close as possible to the delivered COTS package, both in its avoidance of customization and in its use of tools provided by the COTS vendor. In contrast, with the pilots, the Navy allowed the system integrators more latitude in the development process, relying on their expertise and experience with other ERP efforts to guide the projects. Navy ERP management realized it needed to maintain much better control over the integrators' work. As a result, the Navy established the Strategy, Architecture, and Standards Group to structure and guide the effort across the Navy. Our review found that the ERP development team has so far followed an effective process for managing its requirements development. Documentation was readily available for us to trace selected requirements from the highest level to the lowest, detailed transaction level. This traceability allows the user to follow the life of a requirement both forward and backward through the documentation, and from origin through implementation. Traceability is also critical to understanding the parentage, interconnections, and dependencies among individual requirements. This information, in turn, is critical to understanding the impact when a requirement is changed or deleted. Requirements represent the blueprint that system developers and program managers use to design, develop, test, and implement a system. Improperly defined or incomplete requirements have commonly been identified as a cause of system failures and of systems that do not meet their cost, schedule, or performance goals. Without adequately defined requirements that have been properly reviewed and tested, significant risk exists that the system will need extensive and costly changes before it achieves its intended capability. Because requirements provide the foundation for system testing, specificity and traceability defects in system requirements preclude an entity from implementing a disciplined testing process. That is, requirements must be complete, clear, and well documented to design and implement an effective testing program. Absent this, an organization is taking a significant risk that its testing efforts will not detect significant defects until after the system is placed into production. Industry experience indicates that the sooner a defect is recognized and corrected, the cheaper it is to fix. As shown in figure 2, there is a direct relationship between requirements and testing. Although the actual testing activities occur late in the development cycle, test planning can help disciplined organizations reduce requirements-related defects. For example, developing conceptual test cases based on the requirements derived from the concept of operations and functional requirements stages can identify errors, omissions, and ambiguities long before any code is written or a system is configured. Disciplined organizations also recognize that planning testing activities in coordination with the requirements development process has major benefits. As we have previously reported, failure to effectively manage requirements and testing activities has posed operational problems for other system development efforts.
The Navy ERP requirements identification process began with formal agreement among the major stakeholders on the scope of the system, followed by detailed, working-level business needs gathered from user groups and legacy systems. The high-level business or functional requirements identified initially are documented in the Operational Requirements Document (ORD). The ORD incorporates requirements from major DOD framework documents and defines the capabilities that the system must support, including business operation needs such as acquisition, finance, and logistics. In addition, the ORD identifies the policy directives to which the Navy ERP must conform, such as the numerous DOD infrastructure systems, initiatives, and policies. The ORD was distributed to over 150 Navy and DOD reviewers. It went through seven major revisions to incorporate the comments and suggestions provided by the reviewers before being finalized in April 2004. According to Navy ERP program officials, any requested capability that was not included in the ORD will not be supported. This is a critical decision that reduces the project's risks, since "requirements creep" is another cause of projects not meeting their cost, schedule, and performance objectives. We selected seven requirements from the ORD that related to specific Navy problem areas, such as financial reporting and asset management, and found that the requirements had the expected attributes, including the necessary detail one would normally expect to find for the requirement being reviewed. For example, one requirement stated that the ERP will provide reports of funds expended versus funds allocated. We found this requirement was described in a low-level requirements document called a Customer Input Template, which included a series of questions that must be addressed. The documentation further detailed the standard reports that were available based on the selection of configuration options. Further, the documentation of the detailed requirements identified the specific COTS screen number to be used and described the settings to apply when a screen was "activated." While the ORD specifies the overall capabilities of the system at a high level, more specific, working-level requirements also had to be developed to achieve a usable blueprint for configuration and testing of the system. To develop these lower-level requirements, the Navy ERP project held detailed working sessions where requirements and design specifications were discussed, refined, formalized, and documented. Each high-level requirement was broken down into its corresponding business processes, which in turn drove the selection of transactions (COTS functions) to be used for configuration of the software. For each selected transaction, comprehensive documentation was created to capture the source information that defines how and why a transaction must be configured. This documentation is critical for ensuring accurate configuration of the software, as well as for testing the functionality of the software after configuration. Table 3 describes the kinds of documentation used to maintain these lower-level detailed requirements. Additionally, the Navy ERP program is using a requirements management tool containing a database that links each requirement from the highest to the lowest level and maintains the relationships between requirements.
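The kind of linkage such a database maintains can be sketched simply. The requirement identifiers and structure below are hypothetical—this is not the Navy's actual tool—but they show the forward and backward traceability the report describes:

```python
# Hypothetical parent-child links from a high-level ORD requirement down to
# the COTS transactions configured to implement it.
PARENT_OF = {
    "ORD-4.2": None,         # e.g., report funds expended versus funds allocated
    "BP-4.2.1": "ORD-4.2",   # business process derived from the ORD requirement
    "TXN-0071": "BP-4.2.1",  # COTS transactions configured for that process
    "TXN-0072": "BP-4.2.1",
}

def trace_up(req_id):
    """Backward traceability: follow a requirement to its origin."""
    chain = [req_id]
    while PARENT_OF.get(chain[-1]):
        chain.append(PARENT_OF[chain[-1]])
    return chain

def impacted_by(req_id):
    """Forward traceability: everything that depends on a requirement."""
    children = [r for r, parent in PARENT_OF.items() if parent == req_id]
    return children + [d for c in children for d in impacted_by(c)]

print(trace_up("TXN-0071"))   # ['TXN-0071', 'BP-4.2.1', 'ORD-4.2']
print(impacted_by("ORD-4.2")) # every process and transaction a change would touch
```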
The requirements management tool automates the linkage between requirements and helps provide the project staff reasonable assurance that the program's stated processes have been effectively implemented. This linkage is critical to understanding the scope of any potential change. For example, users can use the tool to (1) determine the number of transactions affected by a proposed change and (2) identify the detailed documentation necessary for understanding how the change will affect each business process. To further ensure that the individual transactions ultimately support the adopted business process, Navy ERP officials conducted master business scenarios, or end-to-end process walkthroughs. This end-to-end view of the business process ensures that the business functionality works across the various subsystems of the COTS package. For instance, the requirements for a purchase order could be viewed simply from the vantage point of a logistics person or the acquisition community. However, a purchase order also has financial ramifications and therefore must be posted to financial records, such as the general ledger. The master business scenarios provide a holistic review of the business process surrounding each transaction. The Navy expects the new ERP project to address a number of the weaknesses cited in the Department of the Navy Financial Improvement Plan—a course of action directed toward achieving better financial management and an unqualified audit opinion on the Department of the Navy annual financial statements. According to ERP officials, the COTS software used for the ERP program will improve the Navy's current financial controls in the areas of asset visibility, financial reporting, and full cost accounting. However, the currently planned ERP is not intended to provide an all-inclusive, end-to-end corporate solution for the Navy. The COTS software offers the potential for real-time asset visibility for the Navy, limited by two factors beyond the project's scope. First, items in transit fall under the authority of the U.S. Transportation Command (TRANSCOM). Once the Navy hands off an item to TRANSCOM, it does not regain visibility of that asset until it arrives at another Navy location. The second factor is the limited ability to communicate with ships at sea. Once the currently planned ERP is fully implemented, it will cover all inventories, including inventory on ships. However, the data for shipboard inventory will be current only as of when the ship leaves port. Those data will typically not be updated until the ship docks in another port and can transmit updated information to the ERP system. This lag time for some ships could be as much as 3 to 4 months. While the ERP has the capability to maintain real-time shipboard inventory, the Navy has yet to decide whether to expand the scope of the ERP and build an interface with the ships—which could be extensive and costly—or install the ERP on the ships. Both options present additional challenges that necessitate thorough analysis of all alternatives before a decision is made. According to the program office, a time frame for making this critical decision has not been established. The COTS software is also intended to provide standardized government and proprietary financial reporting at any level within the defined organization. According to Navy ERP officials, full cost accounting will be facilitated by a software component integrated with the ERP.
For example, the Navy expects that this component will provide up-to-date cost information—including labor, materials, and overhead—for its numerous, and often complicated, maintenance jobs. Full cost information is necessary for effective management of production, maintenance, and other activities. According to Navy ERP program officials, when fully operational in fiscal year 2011, the Navy ERP will be used by organizations comprising approximately 80 percent of the Navy's estimated appropriated funds—after excluding the Marine Corps and military pay and personnel. Based on the fiscal year 2006 through 2011 defense planning budget, the Navy ERP will manage approximately $74 billion annually. The organizations that will use the Navy ERP include the Naval Air Systems, Naval Sea Systems, Naval Supply Systems, Space and Naval Warfare Systems, and Navy Facilities Engineering Commands, as well as the Office of Naval Research, the Atlantic and Pacific Fleets, and the Strategic Systems Programs. However, the Navy ERP will not manage all of the 80 percent in detail. About 2 percent, or approximately $1.6 billion, will be executed and maintained in detail by the respective financial management systems at the aviation and shipyard depots. For example, when a work order for the repair of an airplane part is prepared, the respective financial management system at the depot will execute and maintain the detailed transactions. The remaining 20 percent that the Navy ERP will not manage comprises the Navy Installations Command, field support activities, and others. Navy ERP officials have indicated that it is the Navy's intent to further expand the system in the future to include the aviation and shipyard depots, but definite plans have not yet been made. According to Navy ERP officials, the software has the capability to be used at the aviation and shipyard depots, but additional work would be necessary. For example, the desired functionality and related requirements—which, as discussed above, are critical to the success of any project—would have to be defined for the aviation and shipyard depots. While the Navy's requirements management process follows disciplined practices and comprises one critical aspect of overall project development and implementation, it is not by itself sufficient to provide reasonable assurance of the ERP's success. Going forward, the Navy faces very difficult challenges and risks in developing and implementing 44 system interfaces with other Navy and DOD systems and in accurately converting data from the existing legacy systems to the ERP. As previously noted, financial management is a high-risk area in the department and has been designated as such since 1995. One of the contributing factors has been DOD's inability to develop integrated systems. As a result, the Navy is dependent upon the numerous interfaces to help improve the accuracy of its financial management data. Navy ERP program managers have recognized the issues of system interfaces and data conversion in their current list of key risks. They have identified some actions that need to be taken to mitigate the risks; however, they have not yet developed the memorandums of agreement with the owners of the systems with which the Navy ERP will interface. According to the Navy ERP program office, it plans to complete these memorandums of agreement by October 2005. One of the long-standing problems within DOD has been the lack of integrated systems.
This is evident in the many duplicative, stovepiped business systems among the 4,150 that DOD reported as belonging to its systems environment. Lacking integrated systems, DOD has a difficult time obtaining accurate and reliable information on the results of its business operations and continues to rely on manual reentry of data into multiple systems, convoluted system interfaces, or both. These system interfaces provide data that are critical to day-to-day operations, such as obligations, disbursements, purchase orders, requisitions, and other procurement activities. Testing the system interfaces in an end-to-end manner is necessary for the Navy to have reasonable assurance that the ERP will be capable of providing the intended functionality. The testing process begins with the initial requirements development process. Furthermore, test planning can help disciplined organizations reduce requirements-related defects. For example, developing conceptual test cases based on the requirements can identify errors, omissions, and ambiguities long before any code is written or a system is configured. The challenge now before Navy ERP is to be sure its testing scenarios accurately reflect the activities of real users and the dependencies of external systems. We previously reported that Sears and Wal-Mart, recognized as leading-edge inventory management companies, have automated systems that electronically receive and exchange standard data throughout the entire inventory management process, thereby reducing the need for manual data entry. As a result, information moves through the data systems with automated ordering of inventory from suppliers; receiving and shipping at distribution centers; and receiving, selling, and reordering at retail stores. Unlike DOD, which has a proliferation of nonintegrated systems using nonstandard data, Sears and Wal-Mart require all components and subsidiaries to operate within a standard systems framework that results in an integrated system and does not allow individual systems development. For the first deployment, the Navy has to develop interfaces that permit the ERP to communicate with 44 systems—27 that are Navy specific and 17 that belong to other DOD entities. Figure 3 illustrates the numerous required system interfaces. Long-standing problems regarding the lack of integrated systems and the use of nonstandard data within DOD pose significant risks to the Navy ERP's ability to successfully interface with these systems. Even if integration is successful, if the information within the 44 systems is not accurate and reliable, the overall information on the Navy's operations provided by the ERP to Navy management and the Congress will not be useful in the decision-making process. While the Navy ERP project office is working to develop agreements with system owners for the interfaces and has been developing the functional specifications for each system, officials acknowledged that, as of May 2005, they were behind schedule in completing the interface agreements due to other tasks. The Navy ERP is dependent on the system owners to achieve their time frames for implementation. For example, the Defense Travel System (DTS) is one of the DOD systems with which the Navy ERP is to interface and exchange data. DTS is currently being implemented, and any problems that result in a DTS schedule slippage will, in turn, affect Navy ERP's interface testing.
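End-to-end interface testing of the kind the report calls for can be pictured as follows. The sketch uses stand-in functions for the two sides of an interface—the systems, fields, and checks are hypothetical, not the Navy's actual designs:

```python
# A minimal end-to-end interface check: push a transaction across a (stubbed)
# interface and verify that both systems agree on every field.

def erp_send_obligation(obligation):
    """Stand-in for the ERP side: serialize and transmit the record."""
    return dict(obligation)  # pretend this crossed the wire

def legacy_receive(message):
    """Stand-in for a legacy system: post what it received."""
    posted = dict(message)
    posted["amount"] = round(posted["amount"], 2)  # legacy rounding rule
    return posted

def end_to_end_test(obligation):
    received = legacy_receive(erp_send_obligation(obligation))
    return {field: (value, received.get(field))
            for field, value in obligation.items()
            if received.get(field) != value}

# An empty result means the two sides agree; any entry flags a field the
# interface corrupted or dropped.
print(end_to_end_test({"doc_no": "N00001", "amount": 1234.56, "appn": "1804"}))
```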
We have previously reported that the lack of system interface testing has seriously impaired the operation of other system implementation efforts. For example, in May 2004, we reported that because the system interfaces for the Defense Logistics Agency's Business Systems Modernization (BSM) program and the Army's LMP were not properly tested prior to deployment, severe operational problems were experienced. Such problems have led BSM, LMP, and organizations with which they interface—such as DFAS—to perform costly manual reentry of transactions, which can cause additional data integrity problems. For example:

BSM's functional capabilities were adversely affected because a significant number of interfaces were still in development or were being executed manually once the system became operational. Since the design of system interfaces had not been fully developed and tested, BSM experienced problems with receipts being rejected, customer orders being canceled, and vendors not being paid in a timely manner. At one point, DFAS suspended all vendor payments for about 2 months, thereby increasing the risk of late payments to contractors and violations of the Prompt Payment Act.

In January 2004, the Army reported that due to an interface failure, LMP had been unable to communicate with the Work Ordering and Reporting Communications System (WORCS) since September 2003. WORCS is the means by which LMP communicates with customers on the status of items that have been sent to the depot for repair and initiates procurement actions for inventory items. The Army has acknowledged that the failure of WORCS has resulted in duplicative shipments and billings and inventory items being delivered to the wrong locations. Additionally, the LMP program office has stated that it has not yet identified the specific cause of the interface failure. The Army is currently entering the information manually, which, as noted above, can cause additional data integrity errors.

Besides the challenge of developing the 44 interfaces, the Navy ERP must also develop the means to be compliant with DOD's efforts to standardize the way that various systems exchange data with each other. As discussed in our July 2004 report, DOD is undertaking a huge and complex task (commonly referred to as the Global Information Grid or GIG) that is intended to integrate virtually all of DOD's information systems, services, applications, and data into one seamless, reliable, and secure network. The GIG initiative is focused on promoting interoperability throughout DOD by building an Internet-like network for DOD-related operations based on common standards and protocols, rather than on trying to establish interoperability after individual systems become operational. DOD envisions that this type of network would help ensure that systems can easily and quickly exchange data and would change how military operations are planned and executed, since much more information would be dynamically available to users. DOD's plans for realizing the GIG involve building a new core network and information capability and successfully integrating the majority of its weapon systems; command, control, and communications systems; and business systems with the new network. The effort to build the GIG will require DOD to make a substantial investment in a new set of core enterprise programs and initiatives.
To integrate systems such as the Navy ERP into the GIG, DOD has developed (1) an initial blueprint, or architecture, for the GIG and (2) new policies, guidance, and standards to guide implementation. According to project officials, the Navy ERP system will be designed to support the GIG. However, the project faces challenges that can result in significant cost and schedule risks, depending on the decisions reached. One challenge is the extent to which other DOD applications with which the Navy ERP must exchange data are compliant with the GIG. While traditional interfaces with systems that are not GIG compliant can be developed, these interfaces may suboptimize the benefits expected from the Navy ERP.

The following is one example of the difficulties faced by the Navy ERP project. As mentioned previously, one system that will need to exchange data with the Navy ERP system is DTS. However, the DTS program office and the Navy ERP project office hold different views of how data should be exchanged between the two systems. The travel authorization process exemplifies these differences. DTS requires that funding information and the associated funds be provided to DTS in advance of a travel authorization being processed. In effect, DTS requires that the financial management systems set aside the funds necessary for DTS operations. Once a travel authorization is approved, DTS notifies the appropriate financial management system that an obligation has been incurred. The Navy ERP system, on the other hand, envisions providing only basic funding information to DTS in advance and would delay providing the actual funds to DTS until they are needed, in order to (1) maintain adequate funds control, (2) ensure that the funds under its control are not tied up by other systems, and (3) ensure that the proper accounting data are provided when an entry is made into its system.

According to the Software Engineering Institute (SEI), a widely recognized model for evaluating the interoperability of a system of systems is the Levels of Information System Interoperability model, which focuses on increasing levels of sophistication of system interoperability. According to Navy ERP officials, the GIG and the ERP effort are expected to accomplish the highest level of this model—enterprise-based interoperability. In essence, systems that achieve this level of interoperability can provide multiple users access to complex data simultaneously; data and applications are fully shared and distributed; and data have a common interpretation regardless of format. This is in contrast to traditional interface strategies, such as the one used by DTS. The traditional approach is more aligned with the lowest level of the SEI model, at which data exchanges rely on electronic links that result in a simple electronic exchange of data.

A broader challenge and risk that is outside the Navy ERP project's control, but could significantly affect it, is DOD's development of a BEA. As we recently reported, DOD's BEA still lacks many of the key elements of a well-defined architecture, and no basis exists for evaluating whether the Navy ERP will be aligned with the BEA and whether it would be a corporate solution for DOD in its "To Be," or target, environment. An enterprise architecture consists of snapshots of the enterprise's current environment and its target environment, as well as a capital investment road map for transitioning from the current to the target environment.
The real value of an enterprise architecture is that it provides the necessary content for guiding and constraining system investments in a way that promotes interoperability and minimizes overlap and duplication. At this time, it is unknown what the target environment will be. Therefore, it is unknown what business processes, data standards, and technological standards the Navy ERP must align to, as well as what legacy systems will be transitioned into the target environment.

The Navy ERP project team is cognizant of the BEA development and has attempted to align to prior versions of it. The project team analyzed the BEA requirements and architectural elements to assess the Navy ERP's compliance, mapping the BEA requirements to the Navy ERP functional areas and the BEA operational activities to the Navy ERP's business processes. The project team recognizes that architectures evolve over time, and analysis and assessments will continue as requirements are further developed and refined. The scope of the BEA and the development approach are being revised, and as a result of the new focus, DOD is determining which products from prior releases of the BEA could be salvaged and used. Because the Navy ERP is being developed absent the benefit of an enterprise architecture, there is limited, if any, assurance that the Navy ERP will be compliant with the architecture once it becomes more robust in the future. Given this scenario, it is conceivable that the Navy ERP will be faced with rework in order to be compliant with the architecture once it is defined, and, as noted earlier, rework is expensive. At the extreme, the project could fail as the four pilots did. If rework is needed, the overall cost of the Navy ERP could exceed the Navy's current estimate of $800 million.

The ability of the Navy to effectively address its data conversion challenges will also be critical to the ultimate success of the ERP effort. A Joint Financial Management Improvement Program (JFMIP) white paper on financial system data conversion noted that data conversion (that is, converting data in a legacy system to a new system) is one of the critical tasks necessary to successfully implement a new financial system. The paper further pointed out that data conversion is one of the most frequently underestimated tasks. If data conversion is done right, the new system has a much greater opportunity for success. On the other hand, converting data incorrectly or entering unreliable data from a legacy system can have lengthy and long-term repercussions. The adage "garbage in, garbage out" best describes the adverse impact.

Accurately converting data, such as account balances, from the pilots, as well as from the other systems that the Navy ERP is to replace, will be critical to the success of the Navy ERP. While data conversion is identified in the Navy ERP's list of key risks, it is too early in the ERP system life cycle for the development of specific testing plans. However, our previous audits have shown that if data conversion is not done properly, it can negatively affect system efficiency. For example, the Army's LMP data conversion effort has proven to be troublesome and continues to affect business operations. As noted in our recent report, when the Tobyhanna Army Depot converted ending balances from its legacy finance and accounting system—the Standard Depot System (SDS)—to LMP in July 2003, the June 30, 2003, ending account balances in SDS did not reconcile to the beginning account balances in LMP.
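A minimal sketch of the kind of reconciliation check this implies (comparing legacy ending balances with the new system's beginning balances, account by account) appears below. The account numbers and balances are hypothetical, invented for illustration only.

from decimal import Decimal

# Hypothetical legacy ending balances (e.g., from a system like SDS) and
# new-system beginning balances (e.g., from LMP), keyed by account number.
legacy_ending = {"1010": Decimal("125000.00"), "4020": Decimal("87310.55")}
new_beginning = {"1010": Decimal("125000.00"), "4020": Decimal("86900.00")}

def reconcile(legacy: dict, new: dict) -> list[str]:
    """Flag accounts whose converted balances do not match."""
    exceptions = []
    for account in sorted(set(legacy) | set(new)):
        old, converted = legacy.get(account), new.get(account)
        if old != converted:
            exceptions.append(f"account {account}: legacy {old}, new {converted}")
    return exceptions

for line in reconcile(legacy_ending, new_beginning):
    print(line)  # each line is a conversion exception to research

Every account should appear in both sets with identical balances; any exception printed is a conversion error to be researched and corrected before cutover.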
Accurate account balances are important for producing reliable financial reports. Another example is LMP's inability to transfer accurate unit-of-issue data—the quantity by which an item is issued, such as each, dozen, or gallon—from its legacy system to LMP, which resulted in excess amounts of material being ordered. Similar problems could occur with the Navy ERP if data conversion issues are not adequately addressed. The agreements between the Navy ERP and the other system owners, discussed previously, will be critical to effectively supporting the Navy ERP's data conversion efforts.

Navy officials could take additional actions to improve management oversight of the Navy ERP effort. For example, we found that the Navy does not have a mechanism in place to capture the data that can be used to effectively assess the project management processes. Best business practices indicate that a key facet of project management and oversight is the ability to effectively monitor and evaluate a project's actual performance, cost, and schedule against what was planned. Performing this critical task requires the accumulation of quantitative data, or metrics, that can be used to evaluate a project's performance. This information is necessary to understand the risk being assumed and whether the project will provide the desired functionality. Lacking such data, the ERP program management team can only focus on the project schedule and whether activities have occurred as planned, not whether the activities achieved their objectives.

Additionally, although the Navy ERP program has a verification and validation function, it relies on in-house subject matter experts and others who are not independent to provide an assessment of the Navy ERP to DOD and Navy management. The use of an IV&V function is recognized as a best business practice and can help provide reasonable assurance that the system satisfies its intended use and user needs. Further, an independent assessment of the Navy ERP would provide information to DOD and Navy management on the overall status of the project, including the effectiveness of the management processes being utilized and identification of any potential risks that could affect the project with respect to cost, schedule, and performance. Given DOD's long-standing inability to implement business systems that provide users with the promised capabilities, an independent assessment of the ERP's performance is warranted.

The Navy's ability to understand the impact of the weaknesses in its processes will be limited because it has not determined the quantitative data or metrics that can be used to assess the effectiveness of its project management processes. The Navy has yet to establish the metrics that would allow it to fully understand (1) its capability to manage the entire ERP effort; (2) how its process problems will affect the ERP cost, schedule, and performance objectives; and (3) the corrective actions needed to reduce the risks associated with the problems identified. Experience has shown that proceeding without such metrics leads to rework and thrashing instead of real progress on the project. SEI has found that metrics identifying important events and trends are invaluable in guiding software organizations to informed decisions. Key SEI findings relating to metrics include the following.
The success of any software organization depends on its ability to make predictions and commitments relative to the products it produces. Effective measurement processes help software groups succeed by enabling them to understand their capabilities so that they can develop achievable plans for producing and delivering products and services. Measurements also enable people to detect trends and anticipate problems, thus providing better control of costs, reducing risks, improving quality, and ensuring that business objectives are achieved.

The lack of quantitative data to assess a project has been a key concern in other projects we have reviewed. Without such data, management can only focus on the project schedule and whether activities have occurred as planned, not whether the activities achieved their objectives. Further, such quantitative data can be used to hold the project team accountable for providing the promised capability. Defect-tracking systems are one means of capturing quantitative data that can be used to evaluate project efforts. Although the Department of Health and Human Services (HHS) had a system that captured reported defects, we found that the system was not updated in a timely manner with this critical information. More specifically, one of the users identified a process weakness related to grant accounting as a problem that would affect the deployment of HHS's system—commonly referred to as a "showstopper." However, this weakness did not appear in the defect-tracking system until about 1 month later. As a result, during this interval the HHS defect-tracking system did not accurately reflect the potential problems identified by users, and HHS management was unable to determine (1) how well the system was working and (2) the amount of work necessary to correct known defects. Such information is critical when assessing a project's status.

We have also reported that while NASA had a system that captured the defects identified during testing, no analysis was performed to determine the root causes of reported defects. A critical element in helping to ensure that a project meets its cost, schedule, and performance goals is to ensure that defects are minimized and corrected as early in the process as possible. Understanding the root cause of a defect is critical to evaluating the effectiveness of a process. For example, if a significant number of defects are caused by inadequate requirements definition, then the organization knows that the requirements management process it has adopted is not effectively reducing risks to acceptable levels. Analysis of the root causes of identified defects allows an organization to determine whether its requirements management approach sufficiently reduces the risks of the system not meeting cost, schedule, and functionality goals. Root-cause analysis would also help to quantify the risks inherent in the testing process that has been selected.

Further, the Navy has not yet implemented an earned value management system, which is another metric that can be employed to better manage and oversee a system project. Both OMB and DOD require the use of an earned value management system. An earned value management system compares the value of work accomplished during a given period with the work scheduled for that period. By using the value of completed work as a basis for estimating the cost and time needed to complete the program, management can be alerted to potential problems early in the program.
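As a minimal illustration of the comparison just described, the sketch below computes basic earned value figures. It uses the hypothetical 100-hour task discussed in the example that follows, with hours standing in for cost; the function and variable names are illustrative, not part of any required EVM tool.

def earned_value(budgeted_hours: float, pct_complete: float,
                 actual_hours: float, scheduled_pct: float):
    """Compute basic earned value management (EVM) indicators.

    Hours stand in for cost; indexes near 1.0 mean on budget/on schedule.
    """
    ev = budgeted_hours * pct_complete    # earned value (work actually done)
    pv = budgeted_hours * scheduled_pct   # planned value (work scheduled)
    cpi = ev / actual_hours               # cost performance index
    spi = ev / pv                         # schedule performance index
    return ev, cpi, spi

# A 100-hour task that is 50 percent complete, on which 50 hours have
# actually been spent, and which was scheduled to be 50 percent done.
ev, cpi, spi = earned_value(100, 0.50, 50, 0.50)
print(f"earned value = {ev} hours, CPI = {cpi:.2f}, SPI = {spi:.2f}")
# CPI and SPI of 1.00 indicate that resources consumed are consistent
# with the estimate, matching the report's example; values below 1.0
# would flag cost overruns or schedule slippage early.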
For example, if a task is expected to take 100 hours to complete and it is 50 percent complete, the earned value management system would compare the number of hours actually spent to the number of hours expected for the amount of work performed. In this example, if the actual hours spent equaled 50 percent of the hours expected, then the earned value would show that the project's resources were consistent with the estimate. Without an effective earned value management system, Navy and DOD management have little assurance that they know the status of the various project deliverables in terms of progress and the cost incurred in completing each deliverable. In other words, an effective earned value management system would provide quantitative data on the status of a given project deliverable, such as a data conversion program. Based on this information, Navy management would be able to determine whether the progress of the data conversion effort was within the expected parameters for completion and could then decide what actions to take to mitigate risk and manage cost and schedule performance. According to Navy ERP officials, they intend to implement an earned value management system as part of the contract for the next phase of the project.

The Navy also has not established an IV&V function to provide an assessment of the Navy ERP to DOD and Navy management. Best business practices indicate that use of an IV&V function is a viable means of providing management reasonable assurance that the planned system satisfies its intended use and users. An effective IV&V review process would provide independent information to DOD and Navy management on the overall status of the project, including a discussion of any impacts or potential impacts to the project with respect to cost, schedule, and performance. These assessments involve reviewing project documentation, participating in meetings at all levels within the project, and providing periodic reports and recommendations, if deemed warranted, to senior management. The IV&V function should report on every facet of a system project, such as the following.

Testing program adequacy. Testing activities would be evaluated to ensure that they are properly defined and developed in accordance with industry standards and best practices.

Critical-path analysis. A critical path defines the series of tasks that must be finished in time for the entire project to finish on schedule; each task on the critical path is a critical task. A critical-path analysis helps to identify the impact of various project events, such as delays in project deliverables, and ensures that the impact of such delays is clearly understood by all parties involved with the project (a sketch of such a computation appears after this list).

System strategy documents. Numerous system strategy documents that provide the foundation for the system's development and operations are critical aspects of an effective system project. These documents guide the development of the plans and procedures used to implement a system; examples include the Life-cycle Test Strategy, Interface Strategy, and Conversion Strategy.

The IV&V reports should identify for senior management the project management weaknesses that increase the risks associated with the project so that they can be promptly addressed.
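The critical-path facet noted above lends itself to a short illustration. The sketch below computes the critical path for a small, hypothetical set of project tasks; the task names and durations are invented for illustration and assume an acyclic task graph.

# Hypothetical tasks: name -> (duration in days, list of prerequisite tasks).
tasks = {
    "design interface spec": (10, []),
    "build interface":       (15, ["design interface spec"]),
    "convert legacy data":   (20, ["design interface spec"]),
    "end-to-end test":       (5,  ["build interface", "convert legacy data"]),
}

def critical_path(tasks: dict) -> tuple[list[str], int]:
    """Return the chain of tasks that determines total project duration."""
    finish, predecessor = {}, {}
    # Resolve each task once all of its prerequisites have been resolved
    # (a simple topological pass; assumes the task graph has no cycles).
    while len(finish) < len(tasks):
        for name, (duration, deps) in tasks.items():
            if name in finish or any(d not in finish for d in deps):
                continue
            start = max((finish[d] for d in deps), default=0)
            finish[name] = start + duration
            predecessor[name] = max(deps, key=finish.get) if deps else None
    # Walk back from the task that finishes last to recover the path.
    node = max(finish, key=finish.get)
    path = []
    while node is not None:
        path.append(node)
        node = predecessor[node]
    return list(reversed(path)), max(finish.values())

path, days = critical_path(tasks)
print(" -> ".join(path), f"({days} days)")
# A slip in any task on this chain delays the whole project, which is
# why an IV&V reviewer tracks critical-path tasks especially closely.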
The Navy ERP program’s approach to the verification and validation of its project management activities relies on in- house subject matter experts and others who work for the project team’s Quality Assurance leader. The results of these efforts are reported to the project manager. While various approaches can be used to perform this function, such as using the Navy’s approach or hiring a contractor to perform these activities, independence is a key component to successful verification and validation activities. The system developer and project management office may have vested interests and may not be objective in their self-assessments. Accordingly, performing verification and validation activities independently of the development and management functions helps to ensure that verification and validation activities are unbiased and based on objective evidence. The Navy’s adoption of verification and validation processes is a key component of its efforts to implement the disciplined processes necessary to manage this project. However, Navy and DOD management cannot obtain reasonable assurance that the processes have been effectively implemented since the present verification and validation efforts are not conducted by an independent party. In response to the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, DOD has established a hierarchy of investment review boards from across the department to improve the control and accountability over business system investments. The boards are responsible for reviewing and approving investments to develop, operate, maintain, and modernize business systems for their respective business areas. The various boards are to report to the Defense Business Systems Management Committee (DBSMC), which is ultimately responsible for the review and approval of the department’s investments in its business systems. To help facilitate this oversight responsibility, the reports prepared by the IV&V function should be provided to the appropriate investment review board and the DBSMC to assist them in the decision- making process regarding the continued investment in the Navy ERP. The information in the reports should provide reasonable assurance that an appropriate rate of return is received on the hundreds of millions of dollars that will be invested over the next several years and the Navy ERP provides the promised capabilities. To help ensure that the Navy ERP achieves its cost, schedule, and performance goals, the investment review should employ an early warning system that enables it to take corrective action at the first sign of slippages. Effective project oversight requires having regular reviews of the project’s performance against stated expectations and ensuring that corrective actions for each underperforming project are documented, agreed to, implemented, and tracked until the desired outcome is achieved. The lack of management control and oversight and a poorly conceived concept resulted in the Navy largely wasting about $1 billion on four ERP system projects that had only a limited positive impact on the Navy’s ability to produce reliable, useful, and timely information to aid in its day-to-day operations. The Navy recognizes that it must have the appropriate management controls and processes in place to have reasonable assurance that the current effort will be successful. 
While the current requirements management effort is adhering to the disciplined processes, the overall effort is still in the early stages, and numerous challenges and significant risks remain, such as validating data conversion efforts and developing numerous system interfaces. Given that the current effort is not scheduled to be complete until 2011 and is currently estimated by the Navy to cost about $800 million, it is incumbent upon Navy and DOD management to provide the vigilant oversight that was lacking in the four pilots. Absent this oversight, the Navy and DOD run a higher risk than necessary of finding, as has been the case with many other DOD business systems efforts, that the system costs more than anticipated, takes longer to develop and implement, and does not provide the promised capabilities. In addition, attempting large-scale systems modernization programs without a well-defined architecture to guide and constrain business systems investments—the current DOD state—presents the risk of costly rework or even system failure once the enterprise architecture is fully defined. Considering (1) the large investment of time and money essentially wasted on the pilots and (2) the size, complexity, and estimated costs of the current ERP effort, the Navy can ill afford another business system failure.

To improve the Navy's and DOD's oversight of the Navy ERP effort, we recommend that the Secretary of Defense direct the Secretary of the Navy to require that the Navy ERP Program Management Office (1) develop and implement the quantitative metrics needed to evaluate project performance and risks and use those metrics to assess progress and compliance with disciplined processes and (2) establish an IV&V function and direct that all IV&V reports be provided to Navy management and to the appropriate DOD investment review board, as well as to project management. Furthermore, given the uncertainty of the DOD business enterprise architecture, we recommend that the Secretary of Defense direct the DBSMC to institute semiannual reviews of the Navy ERP to ensure that the project continues to follow the disciplined processes and meets its intended cost, schedule, and performance goals. Particular attention should be directed toward system testing, data conversion, and development of the numerous system interfaces with other Navy and DOD systems.

We received written comments on a draft of this report from the Deputy Under Secretary of Defense (Financial Management) and the Deputy Under Secretary of Defense (Business Transformation), which are reprinted in appendix II. While DOD generally concurred with our recommendations, it took exception to our characterization of the pilots as failures and a waste of $1 billion. Regarding the recommendations, DOD agreed that it should develop and implement quantitative metrics that can be used to evaluate the Navy ERP and noted that it intends to have such metrics developed by December 2005. The department also agreed that the Navy ERP program management office should establish an IV&V function and noted that the IV&V team will report directly to the program manager. We continue to reiterate the need for the IV&V function to be completely independent of the project. As noted in the report, performing IV&V activities independently of the development and management functions helps to ensure that the results are unbiased and based on objective evidence.
Further, rather than having the IV&V reports provided directly to the appropriate DOD investment review boards as we recommended, DOD stated that Navy management and/or the project management office shall inform the Office of the Under Secretary of Defense for Business Transformation of any significant IV&V results. We reiterate our support for the recommendation that the IV&V reports be provided to the appropriate investment review board so that it can determine whether any of the IV&V results are significant. By providing the reports directly to the appropriate investment review board, we believe there would be added assurance that the results were objective and that the managers responsible for authorizing future investments in the Navy ERP would have the information needed to make the most informed decisions.

With regard to the reviews by the DBSMC, DOD partially agreed. Rather than the semiannual reviews by the DBSMC that we recommended, the department noted that the components (e.g., the Navy) would provide briefings on their overall efforts, initiatives, and systems during meetings with the DBSMC. Given the significance of the Navy ERP—in terms of dollars and its importance to the overall transformation of the department's business operations—and the failure of the four ERP pilots, we continue to support more proactive semiannual reviews by the DBSMC. As noted in the report, the Navy's initial estimate is that the ERP will cost at least $800 million, and given the department's past difficulties in effectively developing and implementing business systems, substantive reviews focused specifically on the Navy ERP, conducted by individuals outside the program office at the highest levels of management within the department, are warranted. Further, we are concerned that the briefings contemplated for the DBSMC may not necessarily discuss the Navy ERP or provide the detailed discussions needed to offer the requisite level of confidence and assurance that the project continues to follow disciplined processes, with particular attention to numerous challenges such as system interfaces and system testing.

In commenting on the report, the department depicted the pilots in a much more positive light than we believe is merited. DOD pointed out that it viewed the pilots as successful, exceeding initial expectations, and forming the foundation upon which to build a Navy enterprise solution, and it took exception to our characterization of the pilots as failures and largely a waste of $1 billion. As discussed in the report, the four pilots were narrow in scope and were never intended to be a corporate solution for resolving any of the Navy's long-standing financial and business management problems. We characterized the pilots as failures because the department spent $1 billion on systems that did not result in marked improvement in the Navy's day-to-day operations. While there may have been marginal improvements, it is difficult to ascertain the sustained, long-term benefits that the American taxpayers will derive from the $1 billion. Additionally, the pilots present an excellent case study as to why centralization of business systems funding would be an appropriate course of action for the department, as we have previously recommended. Each Navy command was allowed to develop an independent solution that focused on its own parochial interests.
There was no consideration as to how the separate efforts fit within an overall departmental framework or, for that matter, even a Navy framework. As noted in table 2, the pilots performed many of the same functions and used the same software, yet were not interoperable because of various inconsistencies in their design and implementation. Because the department followed the status quo, the pilots, at best, provided the department with four more stovepiped systems that perform duplicate functions. Such investments are one reason why the department reported in February 2005 that it had 4,150 business systems.

Further, in its comments the department noted that one of the benefits of the pilots was that they "proved that the Navy could exploit commercial ERP tools without significant customization." Based upon our review and discussions with the program office, just the opposite occurred in the pilots: many portions of the pilots' COTS software were customized to accommodate the existing business processes, which negated the advantages of procuring a COTS package. Additionally, the department noted that one of the pilots—SMART, on which, as noted in our report, the Navy spent approximately $346 million through September 30, 2004—has already been retired. We continue to question the overall benefit that the Navy and the department derived from these four pilots and the $1 billion spent on them.

As agreed with your offices, unless you announce the contents of this report earlier, we will not distribute it until 30 days after its issuance date. At that time, we will send copies to the Chairmen and Ranking Minority Members, Senate Committee on Armed Services; Senate Committee on Homeland Security and Governmental Affairs; Subcommittee on Defense, Senate Committee on Appropriations; House Committee on Armed Services; House Committee on Government Reform; and Subcommittee on Defense, House Committee on Appropriations. We are also sending copies to the Under Secretary of Defense (Comptroller); the Under Secretary of Defense (Acquisition, Technology and Logistics); the Under Secretary of Defense (Personnel and Readiness); the Assistant Secretary of Defense (Networks and Information Integration); and the Director, Office of Management and Budget. Copies of this report will be made available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact Gregory D. Kutz at (202) 512-9505 or [email protected] or Keith A. Rhodes at (202) 512-6412 or [email protected]. Key contributors to this report are listed in appendix IV. Contact points for the Offices of Congressional Relations and Public Affairs are shown on the last page of the report.

To obtain a historical perspective on the planning and costs of the Navy's four Enterprise Resource Planning (ERP) pilot projects, and the decision to merge them into one program, we reviewed the Department of Defense's (DOD) budget justification materials and other background information on the four pilot projects. We also reviewed Naval Audit Service reports on the pilots. In addition, we interviewed Navy ERP program management and DOD Chief Information Officer (CIO) officials and obtained informational briefings on the pilots.
To determine if the Navy has identified lessons learned from the pilots, how they are being used, and the challenges that remain, we reviewed program documentation and interviewed Navy ERP program officials. The program documentation we reviewed included concept of operations documentation, requirements documents, the testing strategy, and the test plan.

In order to determine whether the stated requirements management processes were effectively implemented, we performed an in-depth review and analysis of seven requirements that relate to the Navy's problem areas, such as financial reporting and asset management, and traced them through the various requirements documents. These requirements were selected from different functional areas and in a manner that ensured they were included in the Navy's Financial Improvement Plan. From the finance area, we selected the requirement to provide reports of funds expended versus funds allocated. From the intermediate-level maintenance management area, we selected the requirement related to direct cost per job and forecasting accuracy. From the procurement area, we selected the requirement to enable monitoring and management of cost versus plan. In the plant supply functions area, we reviewed the requirement related to total material visibility and access to material held by the activity and the enterprise. From the wholesale supply functions area, we selected the requirements for in-transit losses/in-transit write-offs and total material visibility and access to material held by the activity and the enterprise. Additionally, we reviewed the requirement that the ERP be compliant with federal mandates and requirements and the U.S. Standard General Ledger.

In order to provide reasonable assurance that our test results for the selected requirements reflected the same processes used to document all requirements, we did not notify the project office of the specific requirements we had chosen until the tests were conducted. Accordingly, the project office had to be able to respond to a large number of potential requests rather than prepare for the selected requirements in advance. Additionally, we obtained the list of systems with which the Navy ERP will interface and interviewed selected officials responsible for these systems to determine what activities the Navy ERP program office is working on with them and what challenges remain.

To determine if there are additional business practices that could be used to improve management oversight of the Navy ERP, we reviewed industry standards and best practices from the Institute of Electrical and Electronics Engineers, the Software Engineering Institute, the Joint Financial Management Improvement Program, GAO executive guides, and prior GAO reports. Given that the Navy ERP effort is still in the early stages of development, we did not evaluate all best practices; rather, we concentrated on those that could have an immediate impact in improving management's oversight. We interviewed Navy ERP program officials and requested program documentation to determine if the Navy ERP had addressed or had plans for addressing these industry standards and best practices. We did not verify the accuracy and completeness of the cost information provided by DOD for the four pilots or the Navy ERP effort. We conducted our work from August 2004 through June 2005 in accordance with U.S.
generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Defense or his designee. We received written comments on a draft of the report from the Deputy Under Secretary of Defense (Financial Management) and the Deputy Under Secretary of Defense (Business Transformation), which are reprinted in appendix II.

Configuration Data Manager's Database – Open Architecture
Common Rates Computation System/Common Allowance Development System
Department of the Navy Industrial Budget Information System
Integrated Technical Item Management & Procurement
Maintenance and Ship Work Planning
Naval Aviation Logistic Command Management Information System (2 different versions)

In addition to the contacts above, Darby Smith, Assistant Director; J. Christopher Martin, Senior Level Technologist; Francine DelVecchio; Kristi Karls; Jason Kelly; Mai Nguyen; and Philip Reiff made key contributions to this report.

The Department of Defense's (DOD) difficulty in implementing business systems that are efficient and effective continues despite the billions of dollars that it invests each year. For a decade now—since 1995—we have designated DOD's business systems modernization as "high-risk." GAO was asked to (1) provide a historical perspective on the planning and costs of the Navy's four Enterprise Resource Planning (ERP) pilot projects, and the decision to merge them into one program; (2) determine if the Navy has identified lessons from the pilots, how the lessons are being used, and challenges that remain; and (3) determine if there are additional best business practices that could be used to improve management oversight of the Navy ERP.

The Navy invested approximately $1 billion in four ERP pilots without marked improvement in its day-to-day operations. The planning for the pilots started in 1998, with implementation beginning in fiscal year 2000. The four pilots were limited in scope and were not intended to be corporate solutions for any of the Navy's long-standing financial and business management problems. Furthermore, because of various inconsistencies in their design and implementation, the pilots were not interoperable, even though they performed many of the same business functions. In short, the efforts were failures and $1 billion was largely wasted. Because the pilots would not meet its overall requirements, the Navy decided to start over and develop a new ERP system under the leadership of a central program office.

Using the lessons learned from the pilots, the current Navy ERP program office has so far been committed to the disciplined processes necessary to manage this effort. GAO found that, unlike other systems projects it has reviewed at DOD and other agencies, Navy ERP management is following an effective process for identifying and documenting requirements. The strong emphasis on requirements management, which was lacking in the previous efforts, is critical, since requirements represent the essential blueprint that system developers and program managers use to design, develop, test, and implement a system and are key factors in projects that are considered successful. While the Navy ERP has the potential to address some of the Navy's financial management weaknesses, as currently planned it will not provide an all-inclusive, end-to-end corporate solution for the Navy. For example, the current scope of the ERP does not include the activities of the aviation and shipyard depots.
Further, there are still significant challenges and risks ahead as the project moves forward, such as developing and implementing 44 system interfaces with other Navy and DOD systems and converting data from legacy systems into the ERP system. The project is in its early phases, with a current estimated completion date of 2011 at an estimated cost of $800 million. These estimates are subject to, and very likely will, change. Broader challenges, such as alignment with DOD's business enterprise architecture, which is not fully defined, also present significant risk. Given DOD's past inability to implement business systems that provide the promised capability, continued close management oversight—by the Navy and DOD—will be critical. In this regard, the Navy does not have in place the structure to capture quantitative data that can be used to assess the effectiveness of the overall effort. Also, the Navy has not established an independent verification and validation (IV&V) function; rather, it is using in-house subject matter experts and others within the project. Industry best practices indicate that the IV&V activity should be independent of the project and report directly to agency management in order to provide added assurance that reported results on the project's status are unbiased.
Performance information can cover a range of related topics, including the results the federal government should seek to achieve, how those results will be achieved, how progress will be measured, and how results will be reported. To ensure that their performance information will be both useful and used by decision makers, agencies need to consider the differing information needs of various users—including those in Congress. As we have previously reported, agency performance information must meet Congress's needs for completeness, accuracy, validity, timeliness, and ease of use to be useful for congressional decision making. As noted in our past work, several requirements put into place by GPRAMA could help address those needs.

Completeness: Agencies often lack information on the effectiveness of programs; such information could help decision makers prioritize resources among programs. Our work on overlap and duplication has found crosscutting areas where performance information is limited or does not exist. The crosscutting planning and reporting requirements could lead to the development of performance information in areas that are currently incomplete.

Accuracy and validity: Agencies are required to disclose more information about the accuracy and validity of their performance information in their performance plans and reports, including the sources of their data and actions to address limitations of the data.

Timeliness and ease of use: Quarterly reporting for cross-agency and agency priority goals, along with posting much of the governmentwide and agency performance information on a central, governmentwide website, will provide more timely, accessible, and easy-to-use information.

Section I describes how Members of Congress and their staffs can influence the development of performance information that meets congressional needs through consultations with executive branch agencies. This section identifies the requirements for these consultations, as well as the related congressional intent. In appendix I, the guide presents key questions that Members and congressional staff can ask as part of the consultation to ensure that agency performance information reflects congressional priorities. Finally, this section provides general approaches for ensuring consultations are successful.

Section II illustrates how Congress can use performance information in its various legislative and oversight decision making activities to identify issues to address, measure the federal government's progress towards addressing those issues, and, when necessary, identify better strategies to address those issues. In this section, three case studies demonstrate how Congress has used performance information to inform its decision making in these different ways.

This guide builds upon a large body of work we have conducted during the past two decades related to performance management in the federal government. This includes a number of products focused on enhancing the usefulness and use of performance information in congressional decision making, including our recent briefings to congressional staff on opportunities for Congress to address government performance issues. To identify how Congress can use the consultations required under GPRAMA, we identified requirements specified in the act, as well as the intent of these requirements as reported by the Senate Committee on Homeland Security and Governmental Affairs. Additionally, we identified
illustrative questions Congress can ask during consultations and general approaches for successful consultations by reviewing our prior reports. We determined whether the approaches identified in a past report have remained relevant through several means. This included observing—at the invitation of congressional committee staff—a recent consultation with agency officials, and interviewing performance improvement and legislative affairs officials from several selected agencies about their past consultation experiences. We also gathered the views of current and former congressional and agency staff who participated in a forum held on July 5, 2011, by the National Academy of Public Administration on structuring collaboration between Congress and the executive branch on reporting, receiving, and using performance information. Our samples are nongeneralizable, given the methods used to select the congressional staff and agency officials involved in the consultation, interviews, and forum.

To illustrate how Congress can use performance information produced by agencies to carry out its responsibilities, we selected three case studies from our prior work in which Congress played an active role in contributing to and overseeing agency efforts to improve performance. The case studies cover federal efforts to transform the processing of immigration benefits; coordinate U.S. efforts to address the global HIV/AIDS pandemic; and identify and address improper payments made by federal programs. In compiling these examples, we reviewed legislation, related congressional documents, and our related past work as well as that conducted by agency inspectors general. The case studies are based on publicly available information and are not intended to represent a complete list of all legislative and oversight activities conducted by Congress, but rather illustrate the types of activities that Congress has engaged in when using performance information. Although they focus on congressional activities, the progress and results achieved in these examples are due in part to the sustained attention and oversight of both the executive branch and Congress.

We conducted our work from December 2010 to June 2012 in accordance with all sections of GAO's Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product.

GPRAMA requires OMB and agencies to consult with relevant committees, obtaining majority and minority views, about proposed goals at least once every 2 years. Specifically, OMB is required to consult with relevant committees with broad jurisdiction on crosscutting priority goals. Agencies are to consult with their relevant appropriations, authorization, and oversight committees when developing or making adjustments to their strategic plans and agency priority goals. The act also requires OMB, on a governmentwide website, and agencies, in their strategic plans, to describe how input provided during consultations was incorporated into the crosscutting priority goals and agency goals, respectively. These consultations give Congress an opportunity to influence agency performance and confirm that various committees are getting the types of performance information they need.
In appendix I, we provide an illustrative list of questions that Members of Congress and their staffs can use during consultations to help ensure they provide input on key aspects of an agency's performance information.

Consultations provide an important opportunity for Congress and the executive branch to work together to ensure that agency missions are focused, goals are specific and results-oriented, and strategies and funding expectations are appropriate and reasonable. Willingness on the part of Congress and the administration to work together is a likely precondition to successful consultations. Discussions between the executive and legislative branches about performance are likely to underscore the competing and conflicting goals of many federal programs, as well as sometimes differing expectations between the branches. In addition, the historical relationships between an agency and Congress, the strategic issues facing the agency, and the degree of policy agreement or disagreement within Congress and between Congress and the administration on those issues will influence the way consultations are carried out. Although constructive communication across the branches of government can prove difficult, it is essential for sustaining federal performance improvement efforts.

Agency officials, however, shared a view that consultations were to focus on strategic plans, not issues related to specific programs. As a result, these agency officials said they wanted discussions kept at a higher level—for example, on the agency's mission and strategic goals. While neither of these views is necessarily right or wrong, these expressed differences highlight the need to create shared expectations about what will be covered during consultation sessions.

Committee staff also told us that they encouraged agencies to provide them with relevant documents, including drafts of strategic plans, before the meetings. This enabled them to prepare questions and suggestions in advance. It also helped them focus on presentations and discussions taking place during the meetings by eliminating the need to read and respond to the documents at the same time. Another committee staff member stressed the importance of limiting the materials provided to those most critical, because congressional staff workloads constrain the time available to read such documents. Agency officials we spoke with echoed these views and stated that they provided congressional staff with draft materials in advance. For example, an official from one agency told us that he provided the agency's strategic plan framework—its mission and goals—in lieu of the entire draft plan, which helped focus the consultation on overarching policy issues and the agency's long-term goals.

Successful consultations can create a basic understanding among stakeholders of the competing demands that confront most agencies and congressional staff, the limited resources available to them, and how those demands and resources require careful and continuous balancing. The requirement under GPRAMA for agencies to consult with Congress on the identification of priority goals presents an opportunity to develop such an understanding, especially given Congress's constitutional role in setting national priorities and allocating the resources to achieve them. Several agency officials told us that feedback provided by Members and congressional staff on their agencies' overarching goals and strategies helped them understand congressional priorities.
Agency officials also told us that consultations were most useful if they began early, during the drafting of the strategic plan. Congress also emphasized this point in the report accompanying GPRAMA. One agency official stated that getting congressional input at the beginning of the process gave the agency time to reconcile any differences in opinion on the agency's direction. Agency officials also cautioned against waiting too long to consult with Congress. Officials from two agencies shared similar past experiences in which they provided a full draft strategic plan for congressional review, which was the extent of their consultation process. In both cases, the agencies received little or no feedback. As a result, both now consult earlier in the process. However, officials told us it was still important to share the draft plan for comment later in the process.

Congressional staff and agency officials agreed that consultations should begin at the staff level—that is, without Members of Congress and agency top leadership—and involve agency officials with varying responsibilities. Both congressional committee staff and agency officials stressed the importance of having agency officials attend who can answer specific program-related questions, as well as those with authority to revise the agency's plans. Examples include the performance improvement officer, staff from policy and program areas, and representatives from the legislative affairs office. According to committee staff members, the involvement of program officials makes it more likely that consultations will be informative for both Congress and the agency.

As the consultations proceed, the involvement of Members of Congress and agency leadership is important because they are ultimately responsible for making decisions about the agency's strategic direction and funding. Officials from one agency told us that they thought the involvement of their top leaders in consultations with Members of Congress and their staff has helped their agency receive attention from Congress. For example, they shared that it has helped raise awareness and build a better understanding in Congress of the challenges the agency faces.

In addition to participating in consultations, congressional staff suggested several ways in which Members could be involved in agency performance management efforts. For example, Members could send letters to agencies posing questions on strategic plans and formally documenting their views on key issues. Another staff member said that hearings are important because they not only result in Member involvement but also require the participation of senior agency leaders. Holding hearings following consultation sessions can create a public record of agreements reached during those sessions and provide oversight of agency performance planning efforts.

Congressional staff and agency officials generally agreed that consultations ideally should be bipartisan and bicameral to help ensure involvement from all relevant parties. Although it may not always be possible, agency officials told us that they attempted to arrange such sessions, as appropriate. When these agencies were successful in doing so—as was the case with two agencies, according to officials with whom we spoke—it was with majority and minority staff from corresponding committees across the chambers (e.g., appropriations subcommittees). In addition, to the extent feasible, consultations should be held jointly with relevant authorizing, appropriations, budget, and oversight committees.
Committee staff recognized that, due to sometimes overlapping jurisdictions, obtaining the involvement of all interested congressional committees in a coordinated approach can be challenging. However, the often overlapping or fragmented nature of federal programs—a problem that has been extensively documented in our work—underscores the importance of a coordinated consultation process. For example, in an attempt to address this issue during initial implementation of GPRA in 1997, the House leadership formed teams of congressional staff from different committees to have a direct role in the consultation process. Performance information can be used to inform congressional decisions about authorizing or reauthorizing federal programs, provisions in the tax code, and other activities; appropriating funds; and developing budget resolutions. In this section, three case studies demonstrate how Congress has used performance information to inform its decision making: (1) to identify issues that the federal government should address; (2) to measure the federal government’s progress toward addressing those issues; and (3) when necessary, to identify better strategies to address the issues. The case studies cover efforts to transform the processing of immigration benefits; coordinate U.S. efforts to address the global HIV/AIDS pandemic; and identify and address improper payments made by federal programs. These case studies—as well as those included in our recent briefings—also demonstrate how Congress can assist agencies in developing and achieving performance goals. For example, in many of these examples, Congress set clear expectations for agency performance, required routine reporting on progress, and provided consistent oversight over a sustained period of time. When an agency fell short of meeting established goals, Congress examined whether additional authority would help the agency meet the goal and, when needed, provided such authority. In one case study, Congress required an agency to develop and submit a strategic plan prior to receiving a portion of its appropriations. Members of Congress, congressional committees and staff can use performance information about the outcomes of federal programs to identify pressing issues for the federal government to address. The transformation of the United States Citizenship and Immigration Services’ (USCIS) benefits processing illustrates how information on an agency’s performance helped Congress identify issues to address and act upon. USCIS, a component of the Department of Homeland Security (DHS), adjudicates benefits requests and petitions for individuals seeking to become citizens of the United States or to study, live, or work in this country. Our past work, and that of the DHS Office of Inspector General (OIG), has identified performance challenges USCIS faces in processing benefits. For example, a 2005 DHS OIG report found that USCIS’s ability to annually process more than 7 million benefit applications has been hindered by inefficient, paper-based processes, resulting in a backlog that peaked in 2004 at more than 3.8 million cases. Recognizing that dependence on paper files makes it difficult to process immigration benefits efficiently, USCIS began a transformation initiative in 2005 to transition to electronic processing to enhance customer service, improve efficiency, and prevent future backlogs of immigration benefit applications.
Recognizing the importance of this transformation initiative, Congress provided USCIS with $181,990,000 in appropriations in fiscal year 2007, which included, according to the Conference Committee report, $47 million to upgrade its information technology and business systems. However, before USCIS could obligate this funding, Congress directed the agency to submit a strategic transformation plan and expenditure plan with details on expected performance and deliverables. Congress also directed us to review and report to the appropriations committees on the plans. According to a House Committee on Appropriations report that accompanied the act (Department of Homeland Security Appropriations Act, 2007, Pub. L. No. 109-295, 120 Stat. 1355, 1374 (2006)), the committee wanted to ensure that USCIS’s transformation efforts were consistent with best practices. In May 2007, USCIS submitted its Transformation Program Strategic Plan and Expenditure Plan to the appropriations committees. We briefed the committees in June and July 2007 on our review, which found that USCIS’s plans had mixed success in addressing key practices for organizational transformations. As illustrated in table 1, more than half of the key practices (five out of nine) were either partially or not addressed. Our report noted that more attention was needed in a number of management-related activities, including performance measurement. Specifically, we found the following:
- USCIS took several actions to ensure top leadership drove the transformation, such as establishing a Transformation Program Office that directly reports to the USCIS Deputy Director.
- USCIS established a mission, vision, and strategic goals in its Strategic Plan that could have been used to guide the transformation.
- USCIS identified priorities and a succinct set of core values with which to guide the transformation and help build a new agencywide culture.
- USCIS established high-level implementation goals and a timeline for the transformation, but had not shared them with all employees and stakeholders, a step that would have helped build momentum and illustrate progress.
- USCIS dedicated an implementation team to manage the transformation and involved stakeholders on an as-needed basis; however, its Federal Stakeholder Advisory Board had not yet convened.
- USCIS was not using its performance management system to define expectations and hold employees accountable for the transformation.
- USCIS completed an initial communication strategy and began exchanging information with employees and stakeholders. However, the strategy for 2008 and beyond was not clearly defined, and lacked an effective approach for communicating with stakeholders.
- USCIS took several steps to involve employees in the transformation, and was planning for additional involvement as the transformation progressed.
- USCIS was conducting benchmarking research to identify leading business processes, but its plans did not adequately consider information technology management controls, strategic human capital management, and performance measurement to build a world-class organization.
Since then, Congress has continued to provide oversight on, and raise concerns about the performance of, USCIS’s transformation initiative, which is ongoing.
For example, several committees held at least six hearings related to USCIS’s transformation plan from 2007 to 2011, including appropriations hearings in 2008 and 2010 during which committee members expressed concerns about USCIS not meeting its goals for timely processing of applications and implementing its transformation plan. In February 2011, the Ranking Member of the Senate Committee on the Judiciary—which has jurisdiction over immigration issues—wrote a letter to the Director of USCIS expressing concern over reported delays and cost increases for completing the transformation and requested a briefing on the effort. In addition, in response to congressional requests, we have reviewed aspects of USCIS’s implementation of its transformation plan. For example, in September 2011 we reported that while USCIS had improved the quality and efficiency of the immigration benefit administration process and strengthened its immigration fraud detection and deterrence efforts, the agency’s efforts to modernize its benefit processing infrastructure and business practices missed planned milestones by more than 2 years. In November 2011, we reported that a lack of defined requirements, an acquisition strategy, and associated cost parameters contributed to the delays and noted that consistent adherence to DHS’s acquisition policy could help improve USCIS’s transformation program outcomes. In particular, we reported that USCIS was managing the program without specific acquisition management controls, such as reliable schedules, which detail work to be performed by both the government and its contractor over the expected life of the program. As a result, we found that USCIS did not have reasonable assurance that it could meet its future milestones. We made three recommendations aimed at ensuring that USCIS takes a comprehensive and cost-effective approach to the development and deployment of transformation efforts to meet the agency’s goals of improved adjudications and customer services processes. In its comments on our report, DHS reported that USCIS is taking action to address each recommendation. After identifying issues, Congress has established expectations for the level of performance to be achieved by federal agencies and programs, and regular reporting on results. As highlighted in our case study on efforts to address the global HIV/AIDS pandemic, setting clear goals—with target levels of performance and timeframes for achieving them—and expectations for periodic progress reports helped Congress sustain attention on improving results over the course of several years. In 2003, Congress found that HIV/AIDS had reached pandemic proportions during the previous 20 years, and that by the end of 2002, an estimated 42 million individuals were infected with HIV or living with AIDS. In addition, Congress found that the U.S. government had the capacity to lead and enhance the effectiveness of the international community’s response, but it required strong coordination among various agencies to ensure the effective and efficient use of financial and technical resources to provide international HIV/AIDS assistance. However, at that time, the U.S. government funded separate HIV/AIDS foreign assistance programs in several agencies as well as directly to the Global Fund to Fight AIDS, Tuberculosis and Malaria.
To address these issues, Congress authorized a 5-year initiative—also known as the President’s Emergency Plan for AIDS Relief, or PEPFAR—to establish a comprehensive, integrated 5-year strategy to fight global HIV/AIDS. Congress authorized up to $15 billion in funding and created a streamlined U.S. approach to global HIV/AIDS treatment by coordinating and deploying federal agencies and resources through a single entity: the Office of the U.S. Global AIDS Coordinator (OGAC) within the Department of State. At a 2007 oversight hearing, the Chairman of the House Committee on Foreign Affairs described the program’s early results: “So far we can say that this critically important legislation is working. It has supplied lifesaving antiretroviral therapy to more than 800,000 adults and children, provided invaluable testing and counseling for 19 million, supported essential services to prevent mother-to-child transmission to more than 6 million women and served 4.5 million people with desperately needed care and support. These numbers represent solid progress toward the program’s stated 5-year goal of 5 million treated with antiretrovirals, 7 million infections averted and care provided to 10 million patients.” PEPFAR: An Assessment of Progress and Challenges, Hearing before the H. Comm. on Foreign Affairs, 110th Cong. 2 (2007) (statement by Chairman Tom Lantos). In 2008, Congress reauthorized the program (Pub. L. No. 110-293, 122 Stat. 2918 (2008)) and updated PEPFAR’s goals, including a goal to support care for 12 million people infected with or affected by HIV/AIDS, including 5 million orphans and vulnerable children affected by HIV/AIDS. Since then, Congress has continued to monitor progress toward the updated goals. For example, in September 2010, the House Committee on Foreign Affairs held another hearing assessing PEPFAR’s progress and challenges in addressing the global HIV/AIDS pandemic. In addition, in response to directives contained in the 2008 Leadership Act and the Consolidated Appropriations Act of 2008, we have issued several reports reviewing various aspects of PEPFAR, such as the selection and oversight of organizations implementing PEPFAR activities and global HIV/AIDS program monitoring. Finally, Members of Congress, congressional committees, and staff can assess whether existing strategies are the most efficient and effective means for agencies to meet their goals. Analyzing existing performance information can help identify new strategies that could lead to improved results. As the case study on addressing improper payments shows, when it is clear that agencies are not meeting performance expectations, Congress has provided agencies with additional authorities and required alternate approaches to achieve results. Improper payments (payments that should not have been made or that were made in an incorrect amount) can occur in any federal program or activity. Since fiscal year 2000, we have issued a number of reports and testimonies, at the request of Congress, aimed at raising the level of attention and corrective actions surrounding improper payments. Our work has highlighted long-standing, widespread, and significant problems with improper payments across the federal government. For example, we reported in 2000 that the full extent of improper payments governmentwide remained largely unknown, hampering efforts to reduce such payments since many agencies did not attempt to identify or estimate improper payments while others only did so for certain programs.
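Conceptually, an improper payment estimate is built by statistically sampling a program’s payments and extrapolating the observed error rate to the program’s total outlays. The short Python sketch below illustrates only that extrapolation arithmetic; the sample counts and outlay figure in it are hypothetical, and it is not drawn from any agency’s actual sampling methodology.

# Illustrative only: extrapolating an improper payment estimate from a
# random sample of payments. All figures are hypothetical.

def estimate_improper_payments(sample_size, improper_in_sample, total_outlays):
    """Extrapolate the sample error rate to total program outlays."""
    error_rate = improper_in_sample / sample_size
    return error_rate, error_rate * total_outlays

# Hypothetical program: 1,000 sampled payments, 27 found improper,
# $2.5 billion in annual outlays.
rate, estimate = estimate_improper_payments(1000, 27, 2_500_000_000)
print(f"Estimated error rate: {rate:.1%}")               # 2.7%
print(f"Estimated improper payments: ${estimate:,.0f}")  # $67,500,000

Under this kind of extrapolation, even a low error rate can translate into a large dollar estimate for a high-outlay program.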
To help address these issues, Congress passed the Improper Payments Information Act of 2002 (IPIA), which requires executive branch agencies to (1) identify programs and activities susceptible to significant improper payments, (2) estimate the amount of improper payments for those programs and activities, and (3) report these estimates along with actions taken to reduce improper payments for programs with estimates that exceed $10 million. We have since reported on agencies’ IPIA implementation to date. Although reported improper payment estimates rose substantially from 2004 to 2008, the first 5 fiscal years of IPIA implementation, we reported that this was a positive step in improving transparency over the full magnitude of the federal government’s improper payments as more agencies and more programs reported estimates over time (see figure 1). In addition, of the 35 agency programs that reported estimates in each of the 5 fiscal years, 24 of them (or about 69 percent) reported reduced error rates when comparing 2008 rates to those in 2004. However, we identified several major challenges that remained in meeting the goals of IPIA, including that the total estimates reported in fiscal year 2008 did not reflect the full scope of improper payments across federal agencies; noncompliance issues with IPIA implementation existed; and agencies continued to face challenges in the design or implementation of internal controls to identify and prevent improper payments. We also noted that separate assessments by agency auditors, such as GAO or inspectors general, would help to reliably determine the scope of any deficiencies in, and provide a valuable independent validation of, agencies’ efforts to implement IPIA. In 2010, Congress passed the Improper Payments Elimination and Recovery Act (IPERA), which amended IPIA and, among other things, requires agencies to conduct recovery audits, where cost-effective, for each program and activity that expends at least $1 million in annual program outlays. In the first year of IPERA implementation, fiscal year 2011, 17 agencies reported an estimated $115 billion in improper payments for 79 programs—a decrease of about $5 billion from revised fiscal year 2010 estimates. In addition, OMB reported that agencies recaptured about $1.25 billion in improper payments to contractors, vendors, and health care providers in fiscal year 2011. As we recently reported, OMB also identified improper payments as an area covered by one of 14 interim crosscutting priority goals in the President’s Budget for fiscal year 2013. The particular goal is to reduce the governmentwide improper payment rate by at least 2 percentage points by fiscal year 2014, from 5.42 percent in 2009, and applies to all federal programs that annually report improper payment estimates. We have previously reported that consultations provide an opportunity for Congress to influence (1) what results agencies should seek to achieve (long-term and annual goals); (2) how those results will be achieved, including how an agency’s efforts are aligned and coordinated with other related efforts (strategies and resources); (3) how to measure progress given the complexity of federal programs and activities (performance measures); and (4) how to report on results (reporting). Table 2 presents examples of questions that Members of Congress and their staffs can ask on strategic plans and related performance issues—during consultations with agencies or in other venues such as hearings—to help ensure that the associated performance information meets their needs and expectations. In addition to the above contact, Elizabeth Curda, Assistant Director; Benjamin T. Licht; and Megan M. Taylor made significant contributions to this guide. Todd M. Anderson, Kathryn Bernet, Carla Brown, Gerard Burke, Virginia A. Chanley, Beryl H.
Davis, Rebecca Gambler, David Gootnick, Nancy Kingsbury, Susan Offutt, James Michels, Stephanie Shipman, Katherine Siggerud, Bernice Steinhardt, Andrew J. Stephens, Jack Warner, and Dan Webb also made key contributions. Managing for Results: Opportunities for Congress to Address Government Performance Issues. GAO-12-215R. Washington, D.C.: December 9, 2011. Managing for Results: GPRA Modernization Act Implementation Provides Important Opportunities to Address Government Challenges. GAO-11-617T. Washington, D.C.: May 10, 2011. Government Performance: GPRA Modernization Act Provides Opportunities to Help Address Fiscal, Performance, and Management Challenges. GAO-11-466T. Washington, D.C.: March 16, 2011. Government Performance: Strategies for Building a Results-Oriented and Collaborative Culture in the Federal Government. GAO-09-1011T. Washington, D.C.: September 24, 2009. Government Performance: Lessons Learned for the Next Administration on Using Performance Information to Improve Results. GAO-08-1026T. Washington, D.C.: July 24, 2008. Congressional Oversight: FAA Case Study Shows How Agency Performance, Budgeting, and Financial Information Could Enhance Oversight. GAO-06-378. Washington, D.C.: March 8, 2006. Performance Budgeting: PART Focuses Attention on Program Performance, but More Can Be Done to Engage Congress. GAO-06-28. Washington, D.C.: October 28, 2005. Managing for Results: Enhancing Agency Use of Performance Information for Management Decision Making. GAO-05-927. Washington, D.C.: September 9, 2005. Results-Oriented Government: GPRA Has Established a Solid Foundation for Achieving Greater Results. GAO-04-38. Washington, D.C.: March 10, 2004. Managing for Results: Views on Ensuring the Usefulness of Agency Performance Information to Congress. GAO/GGD-00-35. Washington, D.C.: January 26, 2000. Managing for Results: Enhancing the Usefulness of GPRA Consultations Between the Executive Branch and Congress. GAO/T-GGD-97-56. Washington, D.C.: March 10, 1997. Managing for Results: Using GPRA to Assist Congressional and Executive Branch Decisionmaking. GAO/T-GGD-97-43. Washington, D.C.: February 12, 1997. Managing for Results: Achieving GPRA’s Objectives Requires Strong Congressional Role. GAO/T-GGD-96-79. Washington, D.C.: March 6, 1996. Program Evaluation: Improving the Flow of Information to the Congress. GAO/PEMD-95-1. Washington, D.C.: January 30, 1995. | Many of the meaningful results that the federal government seeks to achieve, such as those related to protecting food and agriculture, providing homeland security, and ensuring a well-trained and educated workforce, require the coordinated efforts of more than one federal agency. As Congress creates, modifies, and funds federal programs and activities, it needs pertinent and reliable information to adequately assess agencies’ progress in meeting established performance goals, ensure accountability for results, and understand how individual programs and activities fit within a broader portfolio of federal efforts. However, as our annual reports on duplication, overlap, and fragmentation in the federal government have recently highlighted, there are a number of crosscutting areas where performance information is limited or does not exist. Even in instances where agencies produce a great deal of performance information, our past work has shown that it does not always reach the interested parties in Congress, and when it does, the information may not be timely or presented in a manner that is useful for congressional decision making.
To help ensure that executive branch performance information is useful to Congress for its decision making, congressional involvement on what to measure and how to present this information is critical. Recognizing this, Congress updated the statutory framework for performance management in the federal government, the Government Performance and Results Act of 1993 (GPRA), with the GPRA Modernization Act of 2010 (GPRAMA), which significantly enhances the requirements for agencies to consult with Congress when establishing or adjusting governmentwide and agency goals. Specifically, the Office of Management and Budget (OMB) is required to consult with relevant committees with broad jurisdiction on crosscutting priority goals. Agencies are to consult with their relevant appropriations, authorization, and oversight committees when developing or making adjustments to their strategic plans and agency priority goals. This guide, prepared at Congressional request, is intended to assist Members of Congress and their staffs in (1) ensuring the consultations required under GPRAMA are useful to the Congress and (2) using performance information produced by executive branch agencies in carrying out various congressional decision-making responsibilities, such as authorizing programs or provisions in the tax code, making appropriations, developing budgets, and providing oversight. GPRAMA requires OMB and agencies to consult with relevant committees, obtaining majority and minority views, about proposed goals at least once every 2 years. The act also requires OMB, on a governmentwide website, and agencies, in their strategic plans, to describe how input provided during consultations was incorporated into the crosscutting priority goals and agency goals, respectively. According to the Senate report accompanying the act, consultations are intended to strengthen collaboration between Congress and federal agencies to improve government performance. Successful strategic planning requires the involvement of key stakeholders, which can help build consensus. As the committee report notes, the consultation process was established so agencies could take congressional views into account as appropriate. If an agency waits to consult with relevant congressional stakeholders until a strategic plan has been substantially drafted and fully vetted within the executive branch, it foregoes important opportunities to learn about and address early on specific concerns that will be critical to successful implementation. The committee, therefore, emphasized that consultations should take place during the development of a strategic plan, not after. In addition, the requirement for consultations at least once every 2 years is intended to ensure that each Congress has input on agency goals, objectives, strategies, and performance measures. Consultations also provide agencies with opportunities to share information on their performance and confirm that various committees are getting the types of performance information they need.
Performance information can be used to inform congressional decisions about authorizing or reauthorizing federal programs, provisions in the tax code, and other activities; appropriating funds; and developing budget resolutions. After identifying issues, Congress has established expectations for the level of performance to be achieved by federal agencies and programs, and regular reporting on results. As highlighted in our case study on efforts to address the global HIV/AIDS pandemic, setting clear goals—with target levels of performance and timeframes for achieving them—and expectations for periodic progress reports helped Congress sustain attention on improving results over the course of several years. Finally, Members of Congress, congressional committees, and staff can assess whether existing strategies are the most efficient and effective means for agencies to meet their goals. Analyzing existing performance information can help identify new strategies that could lead to improved results. As the case study on addressing improper payments shows, when it is clear that agencies are not meeting performance expectations, Congress has provided agencies with additional authorities and required alternate approaches to achieve results. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The size and composition of the nuclear stockpile have evolved as a consequence of the global security environment and the national security needs of the United States. According to NNSA’s Stockpile Stewardship and Management Plan for Fiscal Year 2016, the stockpile peaked at 31,255 weapons in 1967, and in September 2013, the stockpile consisted of 4,804 weapons—the smallest since the Eisenhower Administration. The New Strategic Arms Reduction Treaty between the United States and Russia, which entered into force on February 5, 2011, is to reduce the operationally deployed stockpile even further by 2018. Weapons that were originally produced on average 25 to 30 years ago are now well past their original design life of approximately 15 to 20 years. In addition, no new nuclear weapons have been developed since the closing days of the Cold War. Before the end of the U.S. underground nuclear testing program in 1992, developing and maintaining the nuclear stockpile were largely accomplished by a continual cycle of weapon design, weapon testing, and the incorporation of lessons learned in the next design. A critical step in this process was conducting underground nuclear explosive tests. Since 1992, the United States has observed a self-imposed moratorium on nuclear explosive testing and has, instead, relied on a program of nonnuclear testing and modeling to ensure the reliability, safety, and effectiveness of the stockpile. While the United States maintains the policy of no new nuclear testing or weapon designs, and the stockpile is reduced in absolute numbers, confidence in the existing stockpile and the effectiveness of the deterrent must remain high to meet U.S. national security needs. For this reason, the United States is continuing to modernize the existing stockpile through life-extension programs (LEPs). LEPs are modifications that refurbish warheads or bombs by replacing aged components with the intent of extending the service life of weapons by 20 to 30 years, while increasing safety, improving security, and addressing defects. NNSA’s Office of Defense Programs is responsible for the manufacture, maintenance, refurbishment, surveillance, and dismantlement of nuclear weapons. Most modern nuclear weapons consist of three sets of materials and components—a primary, a secondary, and a set of nonnuclear components. When detonated, the primary and secondary components, which together are referred to as the weapon’s “nuclear explosive package,” produce the weapon’s explosive force, or “yield.” Some nonnuclear components—collectively called “limited-life components”—have shorter service lives than the weapons themselves and, therefore, must be periodically replaced. There are two key efforts in the stockpile surveillance program—Core Surveillance and the Enhanced Surveillance Program. NNSA’s Core Surveillance, in one form or another, has been in place for nearly 60 years. In contrast, the Enhanced Surveillance Program was established in the mid-1990s to assist in surveillance and evaluation of the stockpile primarily by identifying aging signs, developing aging models to predict the impact of aging on the stockpile, and developing diagnostic tools. Since the late 1950s, Core Surveillance has focused on sampling and testing the nuclear stockpile to provide continuing confidence in its reliability. Core Surveillance conducts tests that provide current information—essentially a snapshot of the current condition of the stockpile—for the annual assessment of the stockpile.
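To give a feel for why sample-based testing can support confidence statements about a large stockpile, the sketch below works through a standard piece of statistics: if n randomly sampled units all pass testing, the largest defect rate consistent with that outcome at 95 percent confidence solves (1 - p)^n = 0.05. This is purely an illustration of the underlying statistics; it is not NNSA’s assessment methodology, and the sample sizes shown are arbitrary.

# Illustration of sample-based confidence only, not NNSA's actual methodology.
# If n randomly sampled units all pass testing, the 95 percent upper
# confidence bound on the true defect rate p solves (1 - p)**n = 0.05.

for n in (10, 30, 100):
    p_upper = 1 - 0.05 ** (1 / n)
    print(f"{n} units tested, 0 defects: defect rate below {p_upper:.1%} at 95% confidence")

As the loop shows, the bound tightens as the sample grows, which is why a sustained testing program, rather than any single year of tests, underpins confidence in reliability.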
According to NNSA officials, Core Surveillance focuses mainly on identifying the “birth defects” of a system—the manufacturing defects in current components and materials. Under Core Surveillance, NNSA’s national security laboratories and production plants are to evaluate the current state of weapons and weapon components for the attributes of function, condition, material properties, and chemical composition through the following activities:
- System-Level Laboratory Testing. For such tests, units from each type of stockpiled weapon are chosen annually, either randomly or specifically, and sent to the Pantex Plant in Texas for disassembly, inspection, reconfiguration, and testing by the national security laboratories.
- System-Level Flight Testing. These tests drop or launch a weapon with its nuclear material removed. NNSA coordinates flight testing with DOD, which is responsible for providing the military assets (e.g., aircraft and missiles) needed to drop or launch a weapon.
- Component and Material Testing. These tests are conducted on nuclear and nonnuclear components and materials by both the national security laboratories and the production plants that manufactured them.
Organizationally, Core Surveillance is part of NNSA’s Directed Stockpile Work Program. This program also conducts, among other things, maintenance of active weapons in the stockpile, LEPs, and dismantlement and disposition of retired weapons. Core Surveillance activities were funded at approximately $217 million in fiscal year 2016. According to NNSA documents, through scientific and engineering efforts, the Enhanced Surveillance Program enables the agency to better predict where defects might occur in the future to help determine useful lifetimes of weapons and certain key components, such as switches or detonators, and to help plan when replacement is needed. The creation of the Enhanced Surveillance Program in the mid-1990s came at a time when concerns were growing (1) with an aging stockpile and (2) that Core Surveillance tended to produce diminishing returns. More specifically, in a 2006 study, NNSA and the Sandia National Laboratories found that as more is known about manufacturing and current aging defects—the focus of Core Surveillance—fewer and fewer manufacturing-related defects are discovered. This 2006 study suggested a different approach to surveillance for aging weapons. According to NNSA officials, the Enhanced Surveillance Program conducts three key activities:
- Aging studies. Enhanced Surveillance Program aging studies support decisions on when and whether to reuse or replace weapons components and materials. As part of these studies the program identifies and develops new materials and components that can substitute for materials that are no longer available; identifies inadequately performing components; and assesses performance of existing components to assist in weapons life-extension decisions. For example, to assist in making decisions on the life extension of weapons, the Enhanced Surveillance Program assessed the feasibility of reusing certain components. Specifically, according to NNSA documents, in fiscal year 2014, the Enhanced Surveillance Program validated the reuse of a battery for one weapon through aging studies, resulting in eliminating the need and cost to redesign the part. In another example, according to NNSA officials, Enhanced Surveillance Program aging models made it possible to certify the potential reuse of a key part of the W80 warhead to allow life extension of that weapon.
NNSA also uses information from these aging studies in LEPs to guide decisions on when future weapons modifications, alterations, and life extensions need to occur to reduce the risk of potential problems from future defects. Finally, NNSA uses information from the aging studies in the national security laboratory directors’ annual assessment of the condition of the stockpile.
- Computational modeling. On the basis of its aging studies and other data, the Enhanced Surveillance Program develops computational models to predict the impacts of aging on weapons components and materials. According to the Enhanced Surveillance Program’s federal program manager, computational predictive models primarily benefit weapons systems managers at the three nuclear security laboratories. The federal program manager noted that the models allow a projection of the future performance of the systems and anticipate failures with sufficient time to correct them. (A simplified illustration of this kind of predictive model appears below.)
- Diagnostic tool development. The Enhanced Surveillance Program develops diagnostic tools to support Core Surveillance and allow the evaluation of weapons without the need to dismantle and destroy them. This is important since new weapons are not being produced. One diagnostic tool developed by the program was the high-resolution computed tomography image analysis tool for a particular nuclear component, implemented in fiscal year 2009. NNSA officials said this diagnostic tool has enhanced the ability to identify potential defects or anomalies without the need to dismantle or destroy the component.
Organizationally, the Enhanced Surveillance Program is a part of NNSA’s Engineering Program, which is part of NNSA’s broader research, development, test, and evaluation (RDT&E) program. The Engineering Program creates and develops tools and capabilities to support efforts to ensure weapons are safe and reliable. NNSA’s total RDT&E budget allocation for fiscal year 2016 is $1.8 billion; the Enhanced Surveillance Program budget allocation for fiscal year 2016 is approximately $39 million. According to agency documents, because of long-standing concerns over the stockpile surveillance program, NNSA launched its 2007 initiative to, among other things, better integrate stockpile surveillance program activities. The concerns date back to the mid-1990s. For example, our July 1996 report on the surveillance program found the agency was behind in conducting surveillance tests and did not have written plans for addressing the backlog. A January 2001 internal NNSA review of the surveillance program made several recommendations to improve surveillance, including addressing the selection and testing approach for weapons and components, developing new tools to allow for nondestructive testing of the stockpile, improving aging and performance models, and achieving closer coordination and integration of Core Surveillance and the Enhanced Surveillance Program. Further, an April 2004 review of the Enhanced Surveillance Program by DOE’s Office of Inspector General found that NNSA experienced delays in completing some Enhanced Surveillance Program milestones and was at risk of not meeting future milestones. The report noted that such delays could result in NNSA’s being unprepared to identify age-related defects in weapons and impact the agency’s ability to annually assess the condition of the stockpile. Finally, an October 2006 DOE Office of Inspector General report found that NNSA had not eliminated its surveillance testing backlog.
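As the simplified illustration promised above, the sketch below fits an exponential degradation curve to periodic measurements of a component property and projects when performance would cross a minimum acceptable threshold. The data, model form, and threshold are all invented for illustration; NNSA’s actual aging models are far more sophisticated.

import math

# Notional data: (age in years, measured performance as a fraction of spec).
# All values are invented for illustration.
observations = [(0, 1.00), (5, 0.96), (10, 0.91), (15, 0.88)]
threshold = 0.80  # minimum acceptable performance fraction (invented)

# Fit p(t) = exp(-k * t) by least squares on -log(p) through the origin.
num = sum(t * -math.log(p) for t, p in observations)
den = sum(t * t for t, _ in observations)
k = num / den

# Project the age at which performance reaches the threshold.
end_of_life = -math.log(threshold) / k
print(f"Fitted decay constant: {k:.5f} per year")
print(f"Projected age at threshold: {end_of_life:.1f} years")

A projection of this kind is what lets planners schedule a component replacement before a predicted failure rather than after a discovered one.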
Faced with this history of criticism, a growing backlog of Core Surveillance’s traditional surveillance testing, budgetary pressures, and an aging stockpile, NNSA developed its 2007 initiative. According to its project plan, the 2007 initiative sought to establish clear requirements for determining stockpile surveillance needs and to integrate all surveillance activities—to include Core Surveillance and the Enhanced Surveillance Program—through a strengthened management structure. In addition, NNSA sought to create a more flexible, cost-effective, and efficient surveillance program by, among other things, dismantling fewer weapons and increasing the understanding of the impact of aging on weapons, components, and materials by being able to predict the effects of aging activities. According to an NNSA official who previously oversaw surveillance activities, because of the nature of its work, the Enhanced Surveillance Program was intended to be a key part of this transformation effort. More specifically, according to the 2007 initiative project plan, one proposal was to increase evaluations of aging effects on nonnuclear weapons components and materials. The 2007 initiative project plan noted that more than 100 such evaluations would be undertaken at the Sandia National Laboratories in fiscal year 2007, the first year of the initiative’s implementation. In addition, the 2007 initiative project plan stated that the Enhanced Surveillance Program would continue to assess the viability of diagnostic tools in support of Core Surveillance. NNSA implemented some aspects of its 2007 initiative but did not fully implement its envisioned role for the Enhanced Surveillance Program and has not developed a long-term strategy for the program. NNSA has substantially reduced the program’s funding since 2007 and recently refocused some of its RDT&E programs on multiple weapon life-extension efforts and supporting efforts. A February 2010 internal NNSA review noted that NNSA had implemented some important aspects of the 2007 initiative. For example, NNSA updated guidance laying out processes for identifying surveillance requirements. In addition, the agency had implemented a governance structure consisting of working committees to harmonize requirements between Core Surveillance and the Enhanced Surveillance Program. Furthermore, the agency had created a senior-level position to lead the overall surveillance effort and better integrate Core Surveillance and the Enhanced Surveillance Program. However, according to NNSA documents and officials, the agency did not fully implement its envisioned role for the Enhanced Surveillance Program. Instead of increasing the role of the program by conducting the range of aging studies as envisioned, NNSA budgeted less funding to it, delayed some planned work, and transferred work to other NNSA programs. The amount of funding the agency budgeted to the Enhanced Surveillance Program declined from $87 million in fiscal year 2007—the first year of the 2007 initiative’s implementation—to $79 million in fiscal year 2008. NNSA has continued to budget less funding to the Enhanced Surveillance Program. Funding dropped to approximately $38 million in fiscal year 2015, a reduction of more than 50 percent from fiscal year 2007. While the Enhanced Surveillance Program has experienced reductions in funding and scope since the 2007 initiative, Core Surveillance funding has generally kept pace with required stockpile testing, according to an NNSA official.
After an initial funding reduction from $195 million in fiscal year 2007 to $158 million in fiscal year 2009, NNSA increased the budgeted funding to Core Surveillance in 2010 and has stabilized its funding levels since then. Agency officials said they believe the Core Surveillance program is now generally stable. Figure 1 shows funding levels for the two programs for fiscal years 2007 through 2015. NNSA also delayed some key Enhanced Surveillance Program activities during this time. For example, NNSA did not complete the proposed evaluations of the effects of aging on nonnuclear components and materials that were to be largely carried out at the Sandia National Laboratories. These evaluations—which NNSA viewed as an important part of the Enhanced Surveillance Program when it was being managed as a campaign, according to an NNSA official—were initiated in fiscal year 2007 and originally estimated to be completed by 2012. However, a 2010 NNSA review concluded these evaluations had not occurred. According to a contract representative at the Sandia National Laboratories overseeing Enhanced Surveillance Program work, these evaluations no longer have an estimated time frame for completion and their systematic completion, as was once envisioned, is no longer a program goal. Furthermore, while the program has developed some diagnostic tools to aid Core Surveillance, such as high-resolution computed tomography image analysis, NNSA officials and the NNSA fiscal year 2016 budget request said that other efforts to develop diagnostic tools had been deferred because of lack of funding. In addition, NNSA transferred some Enhanced Surveillance Program work to other programs. For example, NNSA transferred experiments (and related funding) to measure aging effects and to provide lifetime assessments on the plutonium pits—a key nuclear weapons component—from the Enhanced Surveillance Program to NNSA’s Science Campaign in fiscal year 2009. According to the Enhanced Surveillance Program’s federal program manager, NNSA has budgeted reduced funding because of competing internal priorities. The federal program manager said that the Enhanced Surveillance Program has to compete for funding with other internal high-priority activities, such as LEPs and infrastructure projects in a climate of overall agency funding constraints caused by, among other things, internal agency pressures to achieve budgetary savings to enable modernization of the stockpile and other priorities. In addition, Core Surveillance’s importance in detecting “birth defects” of weapons—the manufacturing defects or signs of aging in current components and materials—has increased, according to NNSA officials, as NNSA has undertaken and completed more LEPs. In fiscal year 2016, NNSA shifted the focus of some of its RDT&E efforts, including efforts in the Enhanced Surveillance Program, to meet the immediate needs of its ongoing and planned LEPs and related supporting efforts. According to NNSA officials, the funding and scope reductions in the Enhanced Surveillance Program reflect ongoing internal prioritization tensions within NNSA over meeting immediate needs—such as understanding current stockpile condition using traditional surveillance methods—and investing in the science, technology, and engineering activities needed to understand the impacts of aging on weapons and their components in the future.
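The funding figures cited above imply the percentage changes computed below; this snippet simply restates, in arithmetic form, the reductions reported in the text.

# Percentage changes implied by the budgeted funding figures cited above
# (millions of dollars).
def pct_change(old, new):
    return (new - old) / old * 100

esp_2007, esp_2015 = 87.0, 38.0      # Enhanced Surveillance Program
core_2007, core_2009 = 195.0, 158.0  # Core Surveillance

print(f"Enhanced Surveillance, FY2007 to FY2015: {pct_change(esp_2007, esp_2015):.0f}%")  # about -56%
print(f"Core Surveillance, FY2007 to FY2009: {pct_change(core_2007, core_2009):.0f}%")    # about -19%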
The Enhanced Surveillance Program federal program manager, as well as other stakeholders such as the JASON group of experts, noted that funding changes may have a larger impact on the program than is immediately apparent. NNSA officials said that the program plays a considerably broader role in assessing the condition of the stockpile than its name suggests and supports a wide variety of efforts, including the statutorily required annual assessment process, weapons life extension and modernization programs, and ongoing efforts to maintain weapons systems. According to a 2014 NNSA analysis conducted by the Enhanced Surveillance Program’s federal program manager, slightly less than 15 percent of the program’s fiscal year 2014 budget allocation supported the development of diagnostic tools largely for Core Surveillance. About half of the program’s fiscal year 2014 budget allocation went to conducting aging studies, predictive modeling, and component and material evaluation studies that may support Core Surveillance but also benefit weapons life extension and modernization programs and ongoing efforts to maintain weapons systems, according to agency officials. The analysis found that about one-third of the Enhanced Surveillance Program’s fiscal year 2014 budget allocation went to activities supporting the annual assessment process and ongoing or planned LEPs. As of April 2016, NNSA was no longer pursuing the vision for the Enhanced Surveillance Program contained in the 2007 initiative and did not have a current long-term strategy for the program. Specifically, the fiscal year 2017 Stockpile Stewardship and Management Plan noted that NNSA refocused all of its RDT&E engineering activities—including the activities within the Enhanced Surveillance Program—on supporting more immediate stockpile needs and, according to the program’s federal program manager, NNSA has not developed a corresponding long-term strategy for the program. Enhanced Surveillance Program officials continue to focus on year-to-year management of the program under reduced funding levels to maintain key stockpile assessment capabilities, such as supporting Core Surveillance activities, the annual assessment process, and LEPs. Our previous work has demonstrated that a long-term strategy is particularly important for technology-related efforts such as the Enhanced Surveillance Program. Specifically, our April 2013 report found that for technology-related efforts, without a long-term strategy that provides an overall picture of what an agency is investing in, it is difficult for Congress and other decision makers to understand up front what they are funding and what benefits they can expect. In 1993, GPRA established a system for agencies to set goals for program performance and to measure results. GPRAMA, which amended GPRA, requires, among other things, that federal agencies develop long-term strategic plans that include agencywide goals and strategies for achieving those goals. Our body of work has shown that these requirements also can serve as leading practices for strategic planning at lower levels within federal agencies, such as NNSA, to assist with planning for individual programs or initiatives that are particularly challenging. Taken together, the strategic planning elements established under these acts and associated Office of Management and Budget guidance, and practices we have identified, provide a framework of leading practices in federal strategic planning and characteristics of good performance measures.
For programs or initiatives, these practices include defining strategic goals, defining strategies that address management challenges and identify resources needed to achieve these goals, and developing and using performance measures to track progress in achieving these goals and to inform management decision making. Our review of NNSA documents and interviews with NNSA officials found that NNSA does not have a current long-term strategy for the Enhanced Surveillance Program that defines the program’s strategic goals and includes these practices. Strategic goals explain the purpose of agency programs and the results—including outcomes—that they intend to achieve. The Enhanced Surveillance Program has general long-term goals, such as “developing tools and information useful to ensure the stockpile is healthy and reliable.” However, the program’s long-term goals do not provide outcomes that are measurable or that encompass the entirety of the program. NNSA officials told us they use annual goals, which help manage work on a yearly basis. For example, the program’s goals for fiscal year 2015 included “develop, validate and deploy improved predictive capabilities and diagnostics to assess performance and lifetime for nuclear and non-nuclear materials.” By managing work on an annual basis, longer-term work—such as technology development projects extended over several years—may receive a lower priority and thus, according to NNSA officials, may not be funded. In addition, NNSA funds the program’s annual requirements as part of the agency’s annual budget formulation process and funds the program in accordance with the agency’s internal process for allocating its budget authority. For fiscal year 2016, the agency budgeted funding for the program at a slightly higher level to meet stockpile requirements, such as surveillance, and the annual assessment process. However, without a current long-term strategy for the program, NNSA cannot plan for any management challenges that threaten its ability to meet its long-term strategic goals or the resources needed to meet those goals. Moreover, NNSA program officials told us that the agency has not defined specific quantifiable performance measures that could be used to track the program’s progress toward its long-term goals, as called for by leading practices. The need for NNSA to develop clear, measurable performance metrics for the Enhanced Surveillance Program has been highlighted in past reviews, namely by DOE’s Inspector General and by the JASON group. For example, in a September 2012 report, the Inspector General noted that NNSA’s performance measure for the program was based on the percentage of funding spent rather than on work accomplishments. Furthermore, a July 2013 memorandum from the director of the Office of Management and Budget to executive agency heads noted that, in accordance with OMB Circular A-11 and GPRAMA, agencies should describe the targeted outcomes of research and development programs using meaningful, measurable, quantitative metrics, where possible, and describe how they plan to evaluate the success of the programs. We found in past work that effective long-term planning is needed to guide decision making in programs, including laboratory research and development programs, so that congressional and other decision makers can better understand up front what they are funding and what benefits they can expect.
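To make the Inspector General’s criticism concrete, the sketch below contrasts a spending-based measure with an outcome-oriented measure that has a quantifiable target. The measure names and numbers are invented for illustration and are not actual NNSA metrics.

# Hypothetical contrast between a spending-based measure and an
# outcome-oriented, quantifiable measure. All values are invented.
measures = [
    {"name": "percentage of budgeted funds spent",      # says little about results
     "target": 100.0, "actual": 98.0},
    {"name": "components with validated aging models",  # outcome-oriented
     "target": 40, "actual": 26},
]

for m in measures:
    progress = m["actual"] / m["target"] * 100
    print(f'{m["name"]}: {progress:.0f}% of target')

The first measure can read near 100 percent even when little work is accomplished; the second directly tracks progress toward a stated long-term goal.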
Because NNSA has refocused its research and technology development efforts for the Enhanced Surveillance Program on LEPs and related activities, and because NNSA officials said that they recognized the need for a new long-term strategy for the program, it is an opportune time to incorporate sound federal strategic planning practices. A new strategy for the program that incorporates outcome-oriented strategic goals, addresses management challenges and identifies resources needed to achieve these goals, and develops and uses performance measures to track progress in achieving goals would allow the agency to better inform long-term planning and management decision making for the program. By seeking to increase the nondestructive evaluations of nonnuclear components—work that was to be conducted under the Enhanced Surveillance Program—NNSA sought to reduce Core Surveillance’s backlog of mandated system-level tests requiring the dismantling of these components. However, NNSA did not fully implement its vision for the Enhanced Surveillance Program in its 2007 initiative. For example, rather than expanding the program, NNSA budgeted reduced funding for it, and the program did not complete the proposed evaluations of the effects of aging on nonnuclear components and materials. More recently, NNSA directed its RDT&E programs to focus on LEPs and related activities. This includes the Enhanced Surveillance Program. Enhanced Surveillance Program personnel have focused on year-to-year management of a program whose funding has been reduced by more than 50 percent over the past decade and have not yet sought to redefine a strategy for how the program can best complement NNSA’s other efforts to assess the condition of the stockpile, including Core Surveillance. With funding appearing to have been stabilized and with NNSA’s adopting a different approach for all of its RDT&E programs, it is an opportune time to develop an Enhanced Surveillance Program strategy. A new long-term strategy for the program that incorporates outcome-oriented strategic goals, addresses management challenges and identifies resources needed to achieve these goals, and develops and uses performance measures to track progress in achieving goals would allow the agency to better inform long-term planning and management decision making for the program. To help ensure that NNSA can better inform long-term planning and management decision making as well as to ensure that the Enhanced Surveillance Program complements NNSA’s other efforts to assess the nuclear weapons stockpile, we recommend that the NNSA Administrator develop a long-term strategy for the Enhanced Surveillance Program that incorporates outcome-oriented strategic goals, addresses management challenges and identifies resources needed to achieve these goals, and develops and uses performance measures to track progress in achieving these goals. We provided a draft of this report to the NNSA Administrator for review and comment. In his written comments, the NNSA Administrator agreed with our recommendation that the agency develop a long-term strategy for the Enhanced Surveillance Program. The Administrator noted that the growth envisioned for the Enhanced Surveillance Program did not materialize as originally intended but that the agency remains committed to long-term success of the program. The Administrator noted that the agency estimated completing a long-term strategy for the program by June 2017.
We are sending copies of this report to the appropriate congressional committees, the NNSA Administrator, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made key contributions to this report are listed in appendix II. David C. Trimble, (202) 512-3841 or [email protected]. In addition to the individual named above, Jonathan M. Gill (Assistant Director), Greg Campbell, William Horton, Nancy Kintner-Meyer, Rebecca Shea, and Kiki Theodoropoulos made key contributions to this report. | DOE participates in the annual process to assess the safety and reliability of the U.S. nuclear stockpile, which is now made up largely of weapons that are beyond their original design lifetimes. In 2007, faced with a mounting backlog of required tests, DOE's NNSA announced plans to use its Enhanced Surveillance Program for a more cost-effective surveillance approach under its 2007 Surveillance Transformation initiative. Under this initiative, predictive models were to assess the impact of aging on weapons in the stockpile without having to dismantle them as the agency has done in the past. The Senate Report accompanying the National Defense Authorization Act for Fiscal Year 2015 included a provision that GAO review the status of the Enhanced Surveillance Program. This report assesses the extent to which NNSA implemented the vision for the Enhanced Surveillance Program from its 2007 initiative and developed a long-term strategy for the program. GAO reviewed NNSA plans and budget and other documents; interviewed agency officials; and discussed surveillance issues with members of a group of nationally known scientists who advise the government and who reviewed the program in September 2013. The Department of Energy's (DOE) National Nuclear Security Administration (NNSA) did not fully implement the Enhanced Surveillance Program as envisioned in the agency's 2007 Surveillance Transformation Project (2007 initiative) and has not developed a long-term strategy for the program. Surveillance is the process of inspecting a weapon through various tests of the weapon as a whole, the weapon's components, and the weapon's materials to determine whether they are meeting performance expectations, through dismantling the weapon or through the use of diagnostic tools. As called for in its 2007 initiative, NNSA took steps to improve the management of the overall surveillance program, which primarily tests dismantled weapons and their components, but the agency did not increase the role of the Enhanced Surveillance Program, as envisioned. The program develops computational models to predict the impact of stockpile aging; identifies aging signs; and develops diagnostic tools. Under the 2007 initiative, NNSA was to conduct more Enhanced Surveillance Program evaluations using computer models to predict the impacts of aging on specific weapon components—especially nonnuclear components and materials—and to assess the validity of more diagnostic tools. Instead of expanding the program's role, NNSA reduced program funding by more than 50 percent from fiscal year 2007 to fiscal year 2015. NNSA also delayed some key activities and reduced the program's scope during this time. 
For example, NNSA did not complete its proposed evaluations of the impact of aging on nonnuclear components and materials. These evaluations, originally estimated to be completed by 2012, were dropped as program goals in fiscal year 2016, according to NNSA officials and contractor representatives. In fiscal year 2016, NNSA broadly refocused the Enhanced Surveillance Program on multiple nuclear weapon life-extension efforts and supporting activities but has not developed a corresponding long-term strategy for the program. Instead, program officials have focused on developing general long-term goals and managing the program on a year-to-year basis under reduced funding levels to maintain key stockpile assessment capabilities. These general goals, however, do not provide measurable outcomes or encompass the entirety of the program. In addition, as GAO’s previous work has shown, managing longer-term work, such as multiyear technology development projects, on an annual basis makes it difficult for Congress and other decision makers to understand up front what they are funding and what benefits they can expect. As a result, these projects may receive a lower priority and may not be consistently funded. GAO’s body of work has identified a number of leading practices in federal strategic planning that include defining strategic goals, defining strategies and resources for achieving these goals, and developing and using performance measures to track progress in achieving these goals and to inform management decision making. A new strategy for the Enhanced Surveillance Program that incorporates outcome-oriented strategic goals, addresses management challenges and identifies resources needed to achieve these goals, and develops and uses performance measures to track progress in achieving goals would allow the agency to better inform long-term planning and management decision making for the program as well as help ensure that it complements NNSA’s other efforts to assess the nuclear weapons stockpile. GAO recommends that the NNSA Administrator develop a long-term strategy for the Enhanced Surveillance Program that incorporates leading practices. NNSA concurred with GAO’s recommendation and estimated completion of a long-term strategy by June 2017. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Drinking water and wastewater utilities are facing potentially significant investments over the next 20 years to upgrade an aging and deteriorated infrastructure, including underground pipelines, treatment, and storage facilities; meet new regulatory requirements; serve a growing population; and improve security. Adding to the problem is that many utilities have not been generating enough revenues from user charges and other local sources to cover their full cost of service. As a result, utilities have deferred maintenance and postponed needed capital improvements. To address these problems and help ensure that utilities can manage their needs cost-effectively, some water industry and government officials advocate the use of comprehensive asset management. Asset management is a systematic approach to managing capital assets in order to minimize costs over the useful life of the assets while maintaining adequate service to customers. While the approach is relatively new to the U.S. water industry, it has been used by water utilities in other countries for as long as 10 years. Each year, the federal government makes available billions of dollars to help local communities finance drinking water and wastewater infrastructure projects. Concerns about the condition of existing infrastructure have prompted calls to increase financial assistance and, at the same time, ensure that the federal government’s investment is protected. In recent years the Congress has been considering a number of proposals that would promote the use of comprehensive asset management by requiring utilities to develop and implement plans for maintaining, rehabilitating, and replacing capital assets, often as a condition of obtaining loans or other financial assistance. The federal government has had a significant impact on the nation’s drinking water and wastewater infrastructure by (1) providing financial assistance to build new facilities and (2) establishing regulatory requirements that affect the technology, maintenance, and operation of utility infrastructure. As we reported in 2001, nine federal agencies made available about $46.6 billion for capital improvements at water utilities from fiscal years 1991 through 2000. The Environmental Protection Agency (EPA) and the Department of Agriculture alone accounted for over 85 percent of the assistance, providing $26.4 billion and $13.3 billion, respectively, during the 10-year period; since then, the funding from these two agencies has totaled nearly $15 billion. EPA’s financial assistance is primarily in the form of grants to the states to capitalize the Drinking Water and Clean Water State Revolving Funds, which are used to finance improvements at local drinking water and wastewater treatment facilities, respectively. As part of the Rural Community Advancement Program, Agriculture’s Rural Utilities Service provides direct loans, loan guarantees, and grants to construct or improve drinking water, sanitary sewer, solid waste, and storm drainage facilities in rural communities. In addition to its financial investment, EPA has promulgated regulations to implement the Safe Drinking Water Act and Clean Water Act, which have been key factors in shaping utilities’ capital needs and management practices. For example, under the Safe Drinking Water Act, EPA has set standards for the quality of drinking water and identified effective technologies for treating contaminated water. 
Similarly, under the Clean Water Act, EPA has issued national minimum technology requirements for municipal wastewater utilities and criteria that states use to establish water quality standards that affect the level of pollutants that such utilities are permitted to discharge. Thus, the federal government has a major stake in protecting its existing investment in water infrastructure and ensuring that future investments go to utilities that are built and managed to meet key regulatory requirements. Drinking water and wastewater utilities will need to invest hundreds of billions of dollars in their capital infrastructure over the next two decades, according to EPA; the Congressional Budget Office; and the Water Infrastructure Network, a consortium of industry, municipal, state, and nonprofit associations. As table 1 shows, the projected needs range from $485 billion to nearly $1.2 trillion. The estimates vary considerably, depending on assumptions about the nature of existing capital stock, replacement rates, and financing costs. Given the magnitude of the projected needs, it is important that utilities adopt a strategy to manage the repair and replacement of key assets as cost-effectively as possible and to plan to sustain their infrastructure over the long term. Local drinking water and wastewater utilities rely primarily on revenues from user rates to pay for infrastructure improvements. According to EPA's gap analysis, maintaining utility spending at current levels could result in a funding gap of up to $444 billion between projected infrastructure needs and available resources. However, EPA also estimates that if utilities' infrastructure spending grows at a rate of 3 percent annually over and above inflation, the gap will narrow considerably and may even disappear. EPA's report concludes that utilities will need to use some combination of increased spending and innovative management practices to meet the projected needs. The nation's largest utilities—those serving populations of at least 10,000—account for most of the projected infrastructure needs. For example, according to EPA data, large drinking water systems represent about 7 percent of the total number of community water systems, but account for about 65 percent of the estimated infrastructure needs. Similarly, about 29 percent of the wastewater treatment and collection systems are estimated to serve populations of 10,000 or more, and such systems account for approximately 89 percent of projected infrastructure needs for wastewater utilities. Most of the U.S. population is served by large drinking water and wastewater utilities; for example, systems serving at least 10,000 people provide drinking water to over 80 percent of the population. Pipeline rehabilitation and replacement represents a significant portion of the projected infrastructure needs. According to the American Society of Civil Engineers, U.S. drinking water and wastewater utilities are responsible for an estimated 800,000 miles of water delivery pipelines and between 600,000 and 800,000 miles of sewer pipelines, respectively. According to the most recent EPA needs surveys, the investment needed for these pipelines from 1999 through 2019 could be as much as $137 billion. Several recent studies have raised concerns about the condition of the existing pipeline network. For example, in August 2002, we reported the results of a nationwide survey of large drinking water and wastewater utilities. 
Based on the survey, more than one-third of the utilities had 20 percent or more of their pipelines nearing the end of their useful life; and for 1 in 10 utilities, 50 percent or more of their pipelines were nearing the end of their useful life. In 2001, a major water industry association predicted that drinking water utilities will face significant repair and replacement costs over the next three decades, given the average life estimates for different types of pipelines and the years since their original installation. Other studies have made similar predictions for the pipelines owned by wastewater utilities. EPA and water industry officials cite a variety of factors that have played a role in the deterioration of utility infrastructure; most of these factors are linked to the officials' belief that the level of ongoing investment in the infrastructure has not been sufficient to sustain it. For example, according to EPA's Assistant Administrator for Water, the pipelines and plants that make up the nation's water infrastructure are aging, and maintenance is too often deferred. He predicted that consumers will face sharply rising costs to repair and replace the infrastructure. Similarly, as the Water Environment Research Foundation reported in 2000, "years of reactive maintenance and minimal expenditures on sewers have left a huge backlog of repair and renewal work." Our nationwide survey of large drinking water and wastewater utilities identified problems with the level of revenues generated from user rates and decisions on investing these revenues. For example: Many drinking water and wastewater utilities do not cover the full cost of service—including needed capital investments and operation and maintenance costs—through their user charges. Specifically, a significant percentage of the utilities serving populations of 10,000 or more—29 percent of the drinking water utilities and 41 percent of the wastewater utilities—were not generating enough revenue from user charges and other local sources to cover their costs. Many drinking water and wastewater utilities defer maintenance and needed capital improvements because of insufficient funding. About one-third of the utilities deferred maintenance expenditures in their most recent fiscal year; similar percentages of utilities reported deferring minor capital improvements and major capital improvements. About 20 percent of the utilities had deferred expenditures in all three categories. For many utilities, a significant disparity exists between the actual rate of pipeline rehabilitation and replacement and the rate at which utility managers believe rehabilitation and replacement should occur. We found that only about 40 percent of the drinking water utilities and 35 percent of the wastewater utilities met or exceeded their desired rate of pipeline rehabilitation and replacement. The remaining utilities did not meet their desired rates. Roughly half of the utilities actually rehabilitated or replaced 1 percent or less of their pipelines annually. Utility managers also lack the information they need to manage their existing capital assets. According to our survey, many drinking water and wastewater utilities either do not have plans for managing their assets or have plans that may not be adequate in scope or content. Specifically, nearly one-third of the utilities did not have plans for managing their existing capital assets. 
Moreover, for the utilities that did have such plans, the plans in many instances did not cover all assets or did not contain one or more key elements, such as an inventory of assets, assessment criteria, information on the assets’ condition, and the planned and actual expenditures to maintain the assets. Comprehensive asset management has gained increasing recognition within the water industry as an approach that could give utilities the information and analytical tools they need to manage existing assets more effectively and plan for future needs. Using asset management concepts, utilities and other organizations responsible for managing capital infrastructure can minimize the total cost of designing, acquiring, operating, maintaining, replacing, and disposing of capital assets over their useful lives, while achieving desired service levels. Figure 1 shows some of the basic elements of comprehensive asset management and how the elements build on and complement each other to form an integrated management system. Experts within and outside the water industry have published manuals and handbooks on asset management practices and how to apply them. While the specific terminology differs, some fundamental elements of implementing asset management appear consistently in the literature. Collecting and organizing detailed information on assets. Collecting basic information about capital assets helps managers identify their infrastructure needs and make informed decisions about the assets. An inventory of an organization’s existing assets generally should include (1) descriptive information about the assets, including their age, size, construction materials, location, and installation date; (2) an assessment of the assets’ condition, along with key information on operating, maintenance, and repair history, and the assets’ expected and remaining useful life; and (3) information on the assets’ value, including historical cost, depreciated value, and replacement cost. Analyzing data to set priorities and make better decisions about assets. Under asset management, managers apply analytical techniques to identify significant patterns or trends in the data they have collected on capital assets; help assess risks and set priorities; and optimize decisions on maintenance, repair, and replacement of the assets. For example: Life-cycle cost analysis. Managers analyze life-cycle costs to decide which assets to buy, considering total costs over an asset’s life, not just the initial purchase price. Thus, when evaluating investment alternatives, managers also consider differences in installation cost, operating efficiency, frequency of maintenance and repairs, and other factors to get a cradle-to-grave picture of asset costs. Risk/criticality assessment. Managers use risk assessment to determine how critical the assets are to their operations, considering both the likelihood that an asset will fail and the consequences—in terms of costs and impact on the organization’s desired level of service—if the asset does fail. Based on this analysis, managers set priorities and target their resources accordingly. Integrating data and decision making across the organization. Managers ensure that the information collected within an organization is consistent and organized so that it is accessible to the people who need it. 
Among other things, the organization’s databases should be fully integrated; for instance, financial and engineering data should be compatible, and ideally each asset should have a unique identifier that is used throughout the organization. Regarding decision making, all appropriate units within an organization should participate in key decisions, which ensures that all relevant information gets considered and encourages managers to take an organizationwide view when setting goals and priorities. Linking strategy for addressing infrastructure needs to service goals, operating budgets, and capital improvement plans. An organization’s goals for its desired level of service—in terms of product quality standards, frequency of service disruptions, customer response time, or other measures—are a major consideration in the organization’s strategy for managing its assets. As managers identify and rank their infrastructure needs, they determine the types and amount of investments needed to meet the service goals. Decisions on asset maintenance, rehabilitation, and replacement are, in turn, linked to the organization’s short- and long-term financial needs and are reflected in the operating budget and capital improvement plan, as appropriate. Implementing the basic elements of asset management is an iterative process that individual organizations may begin at different points. Within the water industry, for example, some utilities may start out by identifying their infrastructure needs, while other utilities may take their first step by setting goals for the level of service they want to provide. The interrelationship between the elements of asset management can alter an organization’s strategy for managing its assets. For example, once an organization has completed a risk assessment, it may scale back its efforts to compile a detailed inventory of assets to focus initially on those assets determined to be critical. Similarly, as information on infrastructure needs and priorities improves, managers reexamine the level of planned investments, considering the impact on both revenue requirements and the level of service that can be achieved. According to advocates of asset management, while many organizations are implementing certain aspects of the process, such as maintaining an inventory of assets and tracking maintenance, these organizations are not realizing the full potential of comprehensive asset management unless all of the basic elements work together as an integrated management system. As the description of asset management indicates, implementing this approach is not a step-by-step, linear process. Asset management is an integrated system that utilities and other organizations can implement in a number of different ways, depending on what makes sense for their particular organization. In the United States, some drinking water and wastewater utilities, for example, are taking a more strategic approach, initially investing their resources in planning for asset management. Other utilities are focusing initially on collecting data. Another variation is that some utilities are adopting asset management on a utilitywide basis, while others are piloting the approach at a single facility or department or are targeting critical assets utilitywide. The level of sophistication with which asset management concepts are applied within a utility can also vary, depending on the size and complexity of the operations and the resources that the utility can devote to implementation. 
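To make the analytical techniques described above concrete, the following sketch (in Python) shows schematically how a utility might combine a life-cycle cost comparison with a simple risk/criticality score of the kind this section describes. It is a minimal illustration under assumed inputs, not any utility's actual method: the asset names, dollar figures, discount rate, and likelihood and consequence values are all hypothetical.

# Minimal sketch of two asset management techniques described above:
# life-cycle cost analysis and risk/criticality assessment.
# All assets, costs, and probabilities are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Alternative:
    name: str
    purchase_cost: float      # initial purchase and installation
    annual_om_cost: float     # yearly operations and maintenance
    service_life_years: int

def life_cycle_cost(alt: Alternative, discount_rate: float = 0.05) -> float:
    """Total cost of ownership: purchase price plus discounted O&M
    over the asset's service life (the cradle-to-grave view)."""
    om = sum(alt.annual_om_cost / (1 + discount_rate) ** t
             for t in range(1, alt.service_life_years + 1))
    return alt.purchase_cost + om

@dataclass
class Asset:
    name: str
    failure_likelihood: float   # 0..1, e.g., from age, material, history
    failure_consequence: float  # relative cost and service impact if it fails

def risk_score(asset: Asset) -> float:
    """Risk = likelihood of failure x consequence of failure."""
    return asset.failure_likelihood * asset.failure_consequence

if __name__ == "__main__":
    # Life-cycle cost: the lower bid may cost more over the asset's life.
    pumps = [
        Alternative("Pump A (low bid)", purchase_cost=40_000,
                    annual_om_cost=9_000, service_life_years=15),
        Alternative("Pump B (higher bid)", purchase_cost=55_000,
                    annual_om_cost=5_000, service_life_years=15),
    ]
    for p in pumps:
        print(f"{p.name}: life-cycle cost ${life_cycle_cost(p):,.0f}")

    # Risk/criticality: rank assets so resources target critical ones first.
    pipes = [
        Asset("Downtown trunk main (1905)", 0.30, 100.0),
        Asset("Suburban lateral (1988)", 0.10, 5.0),
        Asset("Main near steep slope (1950)", 0.25, 40.0),
    ]
    for pipe in sorted(pipes, key=risk_score, reverse=True):
        print(f"{pipe.name}: risk score {risk_score(pipe):.1f}")

In practice, the likelihood and consequence inputs would be derived from the kinds of factors utilities cite later in this report, such as a pipe's age, construction material, location, and repair history.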
Comprehensive asset management is a relatively new concept for drinking water and wastewater utilities in the United States. According to EPA and major water industry organizations, few utilities are implementing comprehensive asset management, and those that have done so are almost exclusively larger entities. In addition, for the most part, the domestic utilities that have adopted asset management are in the early stages of implementation. Few utilities have been involved in the process for longer than 2 to 3 years. Although relatively new to the U.S. water industry, comprehensive asset management has been used for about 10 years by water utilities in Australia and New Zealand, where the national governments have strongly endorsed the concept. In each case, the driving force behind the use of asset management was legislation that called for water utilities to improve their financial management. In Australia, the law requires utilities to recover the full cost of service, while in New Zealand the law requires utilities to depreciate their assets annually and use cost-benefit analysis, among other things. The national governments of Australia and New Zealand each published guidebooks on asset management, and engineering groups in the two countries jointly developed a comprehensive manual on managing infrastructure assets. Asset management is seen as a means of improving utility infrastructure elsewhere in the world. For example, in the United Kingdom, utilities must develop asset management plans that identify the level of investment required to maintain and improve capital assets every 5 years; annual audits help ensure that planned improvements are made. Similarly, in 2002, the legislature in Ontario, Canada, enacted a law requiring municipalities to develop plans for recovering the full cost of service to ensure that drinking water and wastewater systems are adequately funded. The Ranking Minority Member, Senate Committee on Environment and Public Works, asked us to examine the use of comprehensive asset management at drinking water and wastewater utilities in the United States. This report examines (1) the potential benefits of asset management for water utilities and the challenges that could hinder its implementation and (2) the role that the federal government might play in encouraging utilities to implement comprehensive asset management. To conduct our work, we reviewed relevant studies, handbooks, training materials, and other documents related to comprehensive asset management and its implementation, particularly for managing the infrastructure at drinking water and wastewater utilities. At the federal level, we obtained information from EPA's Office of Ground Water and Drinking Water and Office of Wastewater Management, the offices that, along with the states, are responsible for overseeing drinking water and wastewater utilities. We also obtained information on other federal agencies with experience in asset management, predominantly the Federal Highway Administration in the U.S. Department of Transportation, and financial standards promulgated by the Governmental Accounting Standards Board. For site-specific information, our review included over 50 individual utilities from the United States, Australia, and New Zealand—including 15 U.S. utilities at which we conducted structured interviews. 
Other sources of information included the following: state associations, including the Association of State Drinking Water Administrators and the Association of State and Interstate Water Pollution Control Administrators; major industry groups, including the American Public Works Association, American Water Works Association, Association of Metropolitan Sewerage Agencies, Association of Metropolitan Water Agencies, National Association of Water Companies, National Rural Water Association, Water Environment Federation, and Water Services Association of Australia; engineering and consulting firms with experience in helping utilities implement asset management, including Brown and Caldwell; CH2M Hill; Metcalf and Eddy, Inc.; Municipal and Financial Services Group; PA Consulting Group; and Parsons Corporation in the U.S.; GHD Pty. Ltd. in Australia; and Meritec in New Zealand; several state and regional regulatory agencies in Australia and New Zealand; and EPA-funded state and university-based training and technical assistance centers. To obtain information on the benefits and challenges of asset management, we conducted initial interviews with 46 domestic drinking water and wastewater utilities that knowledgeable government and water industry officials identified as implementing comprehensive asset management. To obtain more detailed information, we conducted structured interviews with officials from 15 of the 46 utilities. We selected the 15 utilities based on two criteria: (1) they reported or anticipated achieving quantitative benefits from asset management or (2) they represented smaller entities. (See app. I for a list of the 15 utilities we selected for structured interviews.) In total, 12 of the 15 utilities were relatively large, serving populations ranging from 300,000 to 2,500,000; the remaining three were significantly smaller, serving populations ranging from 3,000 to 67,100. Because of the small number of utilities that we interviewed in depth and the way in which they were selected, our results are not generalizable to the larger universe of domestic drinking water and wastewater utilities. Because of the utilities' limited experience in implementing asset management, we supplemented the information obtained from domestic utilities with information from six utilities and five government agencies in Australia and New Zealand, two countries that have taken the lead in implementing comprehensive asset management. (See app. II for a list of the utilities and government agencies we contacted in Australia and New Zealand.) Outside the water industry, we consulted with the Private Sector Council, which identified two companies—The Gillette Company and SBC Communications, Inc.—with long-standing experience in using comprehensive asset management in their respective fields. We interviewed officials from these companies to obtain their perspectives on the benefits and challenges of implementing asset management. For information on the potential federal role in promoting asset management at water utilities, we obtained information from EPA's Office of the Chief Financial Officer, Office of Ground Water and Drinking Water, and Office of Wastewater Management on the activities that EPA is currently sponsoring, including the development of informational materials on asset management; activities by EPA-funded, state and university-based training and technical assistance centers; and various studies and research projects. 
We also discussed options for a federal role in promoting asset management with officials from water industry associations, EPA, and the 15 utilities selected for structured interviews. In addition, with the help of organizations and officials experienced in asset management, we identified the U.S. Department of Transportation as being at the forefront of federal involvement in this issue. We obtained and reviewed information about the department’s initiatives from the Office of Asset Management within the Federal Highway Administration. We conducted our work between March 2003 and March 2004 in accordance with generally accepted government auditing standards. We provided a draft of this report to EPA for review and comment. We received comments from officials within EPA’s Office of Water and Office of the Chief Financial Officer, who generally agreed with the information presented in the report and our recommendations. They further noted that while EPA has played a major role in bringing asset management practices to the water industry, significant additional activity could be undertaken, and they have placed a high priority on initiating activities similar to those we suggested. The officials also made technical comments, which we incorporated as appropriate. While comprehensive asset management is relatively new to most drinking water and wastewater utilities in the United States, some utilities say they have already benefited from this approach and have also encountered certain challenges. The utilities reported benefiting from (1) improved decision making because they have better information about their capital assets and (2) improved relationships with governing authorities, ratepayers, and other stakeholders because they are better able to communicate information on infrastructure needs and improvement plans. While water industry officials identified benefits associated with comprehensive asset management, we found that reported savings should be viewed with caution. Among the challenges of implementing asset management, utility officials cited the difficulty of (1) collecting the appropriate data and managing it efficiently and (2) making the cultural changes necessary to integrate information and decision making across departments. In addition, the officials reported that the short-term budget and election cycles typical of utility governing bodies make it difficult to meet the long-term capital investment planning needs of asset management. Although smaller utilities face more obstacles to implementing asset management than larger utilities, principally because of limited resources, they can also benefit from applying asset management concepts. U.S. utilities expect to reap significant benefits from the data they collect, analyze, and share through an asset management approach. With these data, utilities expect to make more informed decisions on maintaining, rehabilitating, and replacing their assets, thereby making their operations more efficient. Utilities can also use these data to better communicate with their governing bodies and the public, which should help them to make a sound case when seeking rate increases. Although water industry officials identified financial and other benefits from using asset management, reported savings should be viewed with caution because, for instance, comprehensive asset management may be implemented concurrently with other changes in management practices or operational savings may be offset by increases in capital expenditures. 
Collecting, sharing, and analyzing data through comprehensive asset management can help utilities to make more informed decisions about maintaining, rehabilitating, and replacing their assets. In particular, utilities can use the information collected and analyzed to prevent problems and allocate their maintenance resources more effectively. For example: Better information enabled the Massachusetts Water Resources Authority to improve its maintenance decisions and eliminate some unneeded maintenance activities. For example, in an effort to optimize maintenance practices in one of its treatment plants, utility officials reassessed maintenance practices for 12 equipment systems, such as different types of pumps. By using the assessment results to improve maintenance planning for these assets, the utility decreased the labor hours spent on preventive maintenance by 25 percent from the hours recommended by the original equipment manufacturers, according to utility officials. Similarly, in analyzing its maintenance practices, the Massachusetts Water Resources Authority found it was lubricating some equipment more often than necessary. By decreasing the frequency of oil changes, the utility reported it saved approximately $20,000 in oil purchase and disposal costs. In addition, the utility extended the life of its assets by decreasing the lubrication—over-lubrication can cause equipment parts to fail prematurely. Seattle Public Utilities used asset management to better target its maintenance resources. As part of the utility's asset management strategy, officials used a risk management approach, calculating the likelihood and impact of a rupture for the utility's sewer and drainage pipes. To determine the likelihood of rupture, officials considered such factors as a pipe's age, material, and proximity to a historical landfill or steep slope. To determine the impact of a rupture, they examined factors such as a pipe's size, location, and historical cost of repair. As a result of this analysis, utility officials identified 15 percent of their pipes as high risk, or "critical"—such as larger, older pipes located beneath downtown Seattle. They shifted resources to maintain and rehabilitate these pipes. The officials considered the remaining 85 percent of pipes as noncritical, or lower risk, because their failure was less likely or because a breakage would affect a limited number of customers, be repaired relatively quickly, and require minimal resources. For these pipes, the utility decided not to perform any preventive maintenance activities, only making repairs as needed. By taking this approach, utility officials believe they are using their staff resources more efficiently and that, over time, they will reduce their maintenance costs. Comprehensive asset management also helps managers to make more informed decisions about whether to rehabilitate or replace assets, and once they decide on replacement, to make better capital investment decisions. For example: According to utility managers at the Louisville Water Company, the utility developed its Pipe Evaluation Model in the early 1990s as a tool for ranking its 3,300 miles of aging pipes and water mains for rehabilitation and replacement. The pipe program includes many of the key principles and practices of comprehensive asset management: for instance, it integrated data about the age of the pipes with data about their maintenance history. 
In analyzing this information, managers discovered that two vintages of pipes—those built between 1862 and 1865 and between 1926 and 1931—had the highest number of breaks per 100 miles of pipeline. Consequently, they decided to replace the pipes from those two periods. The model also showed that pipes installed between 1866 and 1925 were fairly reliable; thus, these pipes were targeted for rehabilitation rather than replacement. The utility is lining the interior of these pipes with cement, which is expected to extend their life by about 40 years. Furthermore, utility managers told us that their pipe model and other practices that use asset management principles have helped reduce the frequency of water main breaks from 26 to 22.7 per hundred miles and the frequency of leaks from joints from 8.2 to 5.6 per hundred miles. In implementing its asset management approach, managers at the Sacramento Regional County Sanitation District reassessed a proposed investment in new wastewater treatment tanks and decided on a less expensive option, thereby saving the utility approximately $12 million. During this reassessment, managers found that increasing preventive maintenance on existing tanks would lower the risk of shutdown more cost-effectively than adding a new set of tanks. Utility officials commented that their implementation of asset management helped change their decision-making process by, among other things, bringing together staff from different departments to ensure more complete information and using the data more effectively to understand investment options. As a part of its asset management strategy, Seattle Public Utilities established an asset management committee, composed of senior management from various departments, to ensure appropriate decision making about the utility's capital improvement projects. For every capital improvement project with an expected cost over $250,000, project managers must submit a plan to the committee that (1) defines the problem to be solved, (2) examines project alternatives, (3) estimates the life-cycle costs of the alternatives, (4) analyzes the possible risks associated with the project, and (5) recommends an alternative. According to utility officials, implementing this process has led to the deferral, elimination, or alteration of several capital improvement projects and contributed to a reduction of more than 8 percent in the utility's 2004 capital improvement project budget for water. For instance, after drafting new water pressure standards, the utility eliminated the need for some new water mains. It developed an alternative plan to provide more localized solutions to increase water pressure, resulting in expected savings of $3 million. In another case, the utility reassessed alternatives to replacing a sewer line located on a deteriorating trestle, ultimately opting to restore and maintain the existing wood trestle and make spot repairs to the sewer line, which resulted in an estimated savings of $1.3 million. Finally, comprehensive asset management helps utilities share information across departments and coordinate planning and decision making. In this way, utility managers can reduce duplication of efforts and improve the allocation of staff time and other resources. For example, managers at Eastern Municipal Water District used asset management to improve their business practices, which they saw as compartmentalized and inefficient. In one instance, they examined their decentralized maintenance activities. 
The utility had two maintenance crews who worked throughout the system in different shifts and reported to managers at four different facilities. In addition, the utility's work order system was inefficient; for example, when different crew members independently reported the same maintenance need, managers did not notice the duplication because the problem was described in different terms (e.g., as a "breaker failure" by one crew member and as a "pump failure" by another). Finally, in some instances, work crews would arrive at a site only to find that needed maintenance work had already been completed. To improve the system, utility officials (1) centralized maintenance by making one person responsible for scrutinizing and setting priorities for all work orders and (2) established a standardized classification of assets, which helped maintenance staff use the same terminology when preparing work orders. Utility officials report that taking these steps allowed them to identify and eliminate work orders that were unnecessary, already completed, or duplicates, which ultimately reduced their maintenance work backlog by 50 percent. The private sector companies we visited agreed that using a comprehensive asset management approach improved their decision making. Specifically, by improving their data, analyzing these data, and centralizing management decision making, managers at SBC Communications, Inc., reported that they have made better capital investment decisions and allocated resources more efficiently. Managers at The Gillette Company reported that they consider life-cycle costs and other factors to assess investment alternatives and, ultimately, make better investment decisions. The utilities we contacted reported that comprehensive asset management also benefits their relations with external stakeholders by (1) making a sound case for rate increases to local governing bodies and ratepayers; (2) improving their bond rating with credit rating agencies; and (3) better demonstrating compliance with federal and state regulations. Some utilities have used, or expect to use, the information collected through comprehensive asset management to persuade elected officials to invest in drinking water and wastewater infrastructure through rate increases. For example, the Louisville Water Company reported that in the early 1990s it used the asset information it had gathered and analyzed to convince its local governing board that its current rates would not cover its expected costs and that the utility needed a rate increase to cover its anticipated rehabilitation and replacement needs. The board approved a set-aside of $600,000 for an infrastructure rehabilitation and replacement fund as a part of the requested rate increase in 1993, and, according to one utility official, has been supportive of including funds for asset rehabilitation and replacement as a part of rate requests since then. Furthermore, the utility manager requested that the amount of the set-aside gradually increase to $3 million over the next 5 years. According to this official, the board not only approved this request, it also increased the rates to support the fund sooner than the utility manager had requested. According to several other utilities that have begun to implement comprehensive asset management, this approach should enable them to justify needed rate increases from their governing bodies. 
Similarly, Australian and New Zealand officials we interviewed stated that the data from asset management help utilities make a more credible case for rate increases from their governing bodies. Utility managers can also use the information they provide to their governing boards as a basis for evaluating and deciding on trade-offs between service levels and rates. For example, an official at the South Australian Water Corporation told us that, using asset management practices, he was able to suggest a range of funding alternatives to the utility's governing body. The utility managers conducted statistical modeling on the asset information they collected (e.g., pipe performance history and financial information) and, using this analysis, predicted the approximate number of pipe breaks at various levels of funding. Understanding the trade-offs between lower rates and higher numbers of pipe breaks, the governing body could make an informed decision about what the appropriate level of service was for their community. Comprehensive asset management also has the potential to improve a utility's bond rating, a benefit that translates into savings through lower interest rates on loans and bonds. When deciding on a utility's bond rating, credit rating agencies consider criteria related to comprehensive asset management, such as the utility's management strategies and its planning for asset replacement. For example, according to a representative from one credit rating agency, asset management shows that a utility is considering future costs. He would therefore expect a utility with an asset management plan that looks at future capital and operating costs and revenues to receive a higher bond rating than a utility that does not sufficiently consider those future needs, even if that utility has a better economy and a higher tax base. Some local officials believe that comprehensive asset management played a role in the bond ratings they received, or will do so in the future. For example, the finance director of the small northeastern city of Saco, Maine, told us that she believes that the city's decision to use asset management practices—such as maintaining an up-to-date asset inventory, periodically assessing the condition of the assets, and estimating the funds necessary to maintain the assets at an acceptable level each year—contributed to the credit rating agencies' decision to increase the city's bond rating, which resulted in an expected savings of $2 million over a 20-year period. Similarly, a utility official at Louisville Water Company told us that asset management practices, such as strategically planning for the rehabilitation and replacement of its aging assets, help the utility maintain its strong bond rating. According to several utility managers we interviewed, comprehensive asset management can be used to help comply with regulations. For example: Several utility managers told us that comprehensive asset management practices played a role in improving their utilities' compliance with existing regulations. Specifically, among other things, asset management practices such as identifying and maintaining key assets led to fewer violations of pollutant discharge limitations under the Clean Water Act. At Western Carolina Regional Sewer Authority, for instance, the number of these violations decreased from 327 in 1998 (about the time that the utility began implementing asset management) to 32 violations in 2003. 
At the Charleston Commissioners of Public Works, utility officials told us that if they had not had asset management in place, it would be difficult to meet the rehabilitation program and maintenance program elements of EPA's draft capacity, management, operation, and maintenance regulations for wastewater utilities. For instance, the draft regulations would require that wastewater utilities identify and implement rehabilitation actions to address structural deficiencies. Because the utility has implemented asset management practices, such as assessing the condition of its pipes and identifying those most in need of rehabilitation, it can better target its resources to rehabilitate pipes in the worst condition, and, in the process, meet the proposed standards for rehabilitation. Many of the U.S. utilities we interviewed were still in the early stages of implementing asset management and most had not measured financial savings. However, many water industry officials expect asset management to result in overall cost savings. Specifically, several officials told us they expect that asset management will slow the rate of growth of utilities' capital, operations, and maintenance costs over the coming years. Nevertheless, total costs will rise because of the need to replace and rehabilitate aging infrastructure. At least one U.S. utility has estimated the overall savings it will achieve using comprehensive asset management. Specifically, an engineering firm projected that asset management would reduce life-cycle costs for the Orange County Sanitation District by about $350 million over a 25-year period. Among other data, the engineering firm used the utility's available operating expenditure information (operations, maintenance, administration, and depreciation data) and capital improvement program expenditures (growth/capacity, renewal/replacement, and level of support data) to model the projected life-cycle cost savings. Additionally, some of the Australian utilities we interviewed reported financial savings. For example, officials at Hunter Water Corporation reported significant savings in real terms between fiscal years 1990 and 2001: a 37 percent reduction in operating costs; improved service standards for customers, as measured by such factors as water quality and the number of sewer overflows; and a reduction of more than 30 percent in water rates for customers. Hunter Water officials believe that they achieved these efficiencies as a result of asset management. Though utility officials have made some attempts to quantify the impact of asset management, they also cited reasons for exercising caution in interpreting reported savings and other benefits. First, benefits such as operating cost reductions should not be considered in isolation from other utility costs. A utility cannot consider reductions in operating costs a net benefit if, for instance, savings in operational costs are offset by an increase in the utility's capital expenditures. Furthermore, reductions in operating costs may be caused by increases in capital expenditures because, for example, newer assets may require less maintenance and fewer repairs. In the case of the Hunter Water Corporation, the utility's capital expenditures were at about the same level in 2001 as in 1991, despite some fluctuation over the period. Second, other factors might have contributed to financial and other benefits. 
For example, a utility may be implementing other management initiatives concurrently with asset management and may not be able to distinguish the benefits of the various initiatives. In addition to using an asset management approach, for instance, some U.S. utilities we interviewed used an environmental management system, which shares some of the same components as asset management. Some of these utilities told us that they could not separate the benefits of asset management from those achieved as a result of their environmental management systems. In addition, reported savings from asset management can be misleading without complete information on how the savings estimates are derived. For example, a widely distributed graph shows an estimated 15 percent to 40 percent savings in life-cycle costs for 15 wastewater utilities in Australia. EPA and others used the graph as a basis for projecting savings for U.S. utilities. However, the graph was mislabeled at some point—the reported reductions in life-cycle costs were actually reductions in operating costs. As we have already noted, operating cost reductions alone do not provide enough information to determine the net benefit of implementing asset management. Despite the acknowledged benefits of comprehensive asset management, utilities face three key challenges that may make implementing this approach difficult. First, to determine the condition of current assets and the need for future investment, utilities have to gather and integrate complete and accurate data, which may require significant resources. Second, successful implementation requires cultural change—departments long accustomed to working independently must be willing to coordinate and share information. Finally, utilities may find that their efforts to focus on long-term planning conflict with the short-term priorities of their governing bodies. These three challenges may be more difficult for smaller utilities because they have fewer financial, staff, and technical resources. The difficulties utilities experience gathering data to implement asset management depend on the (1) condition of their existing data, (2) ability to coordinate existing data across departments, (3) need to upgrade technology, and (4) ability to sustain complete and accurate data. One industry official noted that larger utilities, in particular, may have a more difficult time gathering and coordinating data because they typically possess a substantial number of assets. Nevertheless, utility officials and water association representatives agree that utilities should not allow these data challenges to prevent them from implementing asset management. These officials emphasized that utilities should begin implementing asset management by using the data they already possess, continuing data collection as they perform their routine repair and maintenance activities, or focusing data collection efforts on their most critical assets. Domestic and international water officials emphasize the importance of obtaining, integrating, and sustaining good data for decision making. This is no small challenge. 
According to the Association of Metropolitan Sewerage Agencies and the International Infrastructure Management Manual, utilities generally need the following types of data to begin implementing asset management: age, condition, and location of the assets; asset size and/or capacity; valuation data (e.g., original and replacement cost); installation date and expected service life; maintenance and performance history; and construction materials and recommended maintenance practices. According to utility officials and industry handbooks, utilities sometimes have incomplete or inaccurate historical data about their assets. For example: An official at the Augusta County Service Authority noted that the utility did not possess a great deal of detailed historical data about its assets. For example, its asset ledger would indicate that "a pump station was installed at a particular location in 1967," but would not provide any additional information about the assets, such as the individual components that make up this system. Similarly, the official told us that the utility's prior billing system did not maintain historical data about its customers' water usage rates. As a result, the management team found it difficult to adequately forecast their needed rate increases because they lacked historical information about water consumption. According to an East Bay Municipal Utility District official, the utility lacked detailed maintenance data on its assets before 1990 because maintenance workers had not consistently reported repairs to a central office. Given these problems, utility managers may have to invest a significant amount of time and resources to gather necessary data, particularly data about the condition of their thousands of miles of buried pipelines. Understandably, utilities are unwilling to dig up their pipelines to gather missing data. However, utilities may be able to derive some information about the condition of these pipes to the extent they have information on the pipes' age, construction material, and maintenance history. In addition, utilities may choose to align their data collection with their ongoing maintenance and replacement activities. These approaches, however, may require new technology, which may mean a financial investment. For example: Tacoma Water equipped its staff with laptop computers, which allow them to access the utility's geographic information system—software that can track where assets are located—while they are in the field. As the staff perform their routine repair and rehabilitation activities, they can record and update data about an asset's condition, performance, and maintenance history. Similarly, the Department of Public Works in Billerica, Massachusetts, provided its field staff with handheld electronic devices programmed with a simple data collection template, which allows its staff to more accurately record information about its assets and their condition. Consequently, the field staff can enter more accurate information about the utility's assets into its central asset inventory. Utilities also reported difficulty collecting and applying information about manufacturers' recommended techniques for optimizing maintenance practices for their assets. Since no central clearinghouse of information on optimal maintenance practices is readily available, these utilities have had to invest their own time and resources to develop this information. 
For example: According to an official at Des Moines Water Works, the utility discovered that the manufacturer's recommended maintenance practices often conflicted with the utility's experience with the same asset. This official pointed out that the manufacturer's estimate for maintenance was always higher than the utility's experience. Given these inconsistencies, the official noted, all utilities would benefit from the development of a central industry clearinghouse that provided information about the recommended maintenance practices for certain assets. Similarly, an official at East Bay Municipal Utility District noted a significant difference between the manufacturer's recommended maintenance practices and the utility's experience with optimized maintenance. As a result, the utility has invested a significant amount of time in developing optimal maintenance practices for its assets and minimizing the risk of asset failure. While utilities need complete and accurate data for decision making, they also need to balance data collection with data management. Utilities may fall prey to data overload—collecting more data than they have the capacity to manage. For example, according to an official at the Augusta County Service Authority, while the utility has collected extensive infrastructure data, it has not invested enough of its resources into making these data useful for decision making. This official told us that utilities need to develop a data management strategy that identifies the types of data they need and the uses of these data for decision making. Without such a strategy, utilities' data gathering will reach a point of diminishing returns. According to an official at the National Asset Management Steering Group in New Zealand, utilities should begin to implement asset management by identifying their critical assets and targeting their data-gathering activities toward the critical information they need in order to make decisions about these assets. This official also recommended that utilities begin implementation by using their existing data—even though the data may not be completely accurate—and refine this information as they improve and standardize their data collection processes. According to utility officials, coordinating data can be difficult because the data come from several different departments and from different sources within the departments. Furthermore, one industry handbook notes that a utility's departments typically maintain different types of data about the same assets, which are formatted and categorized to meet each department's individual needs and objectives. For example, the finance department may record an asset's size in terms of square footage, while the engineering department may define an asset's size in terms of pipeline diameter. Utilities adopting asset management need to coordinate these data to develop a central asset inventory. Table 2 shows the typical sources of data for a central inventory. Utility managers told us it was challenging to develop a standard data format for their central asset inventories. For example: As previously noted, Eastern Municipal Water District's work order system was inefficient because crew members from different facilities did not use the same terms in describing maintenance problems. To eliminate these inefficiencies, the utility invested a great deal of time and resources to standardize its terms and asset classification and implement a computerized maintenance management system. 
According to a Louisville Water Company official, improving and validating the utility's data was a challenge. Over the years, the utility has acquired between 12 and 20 smaller utilities. Each of these smaller utilities maintained its own asset data, which were not always reliable or maintained in the same format. The utility invested a great deal of time to validate these data and coordinate them into its central asset inventory. Similarly, according to an official at the South Australian Water Corporation, developing a central asset inventory was particularly difficult because each of the utility's departments used different terms to refer to the same asset. The utility refined its data collection practices by training its employees on how to record data in a standard format. The utility officials we spoke to also had to address problems in coordinating data maintained in different and incompatible software programs. A Water Environment Research Foundation survey of utility managers, regulators, and industry consultants cited developing an asset information management system that meets the needs of all users as the most difficult element of asset management to implement. Without an integrated information management system, utilities found it difficult to develop data for decision making, and they found that they had to invest time and money to enter these data into a central database. For example: According to a Greater Cincinnati Water Works official, the utility wanted to integrate information about its assets' location and maintenance history to efficiently dispatch staff to repair sites. However, the data for this report were stored in two separate and incompatible computer systems. To produce this information, the utility needed to re-enter the relevant data from each of these systems into a central asset database. Similarly, an official at Melbourne Water Corporation said that as his utility began to adopt asset management, it realized that it maintained relevant data in different computer systems, such as its computerized maintenance management system and its geographic information system. To address this fragmentation, the utility had to assign staff to consolidate its data into a central database to allow for easy integration. As utilities coordinate their data systems, they may need to upgrade their existing technology, which can represent a significant financial investment. For example, Augusta County Service Authority has requested $100,000 to purchase data integration software, which would allow it to coordinate information from several different computer systems. However, as of September 2003, this request had not been approved, in part because the software may not directly affect the utility's profits or improve its service, making the governing body reluctant to finance the purchase. Similarly, St. Paul Regional Water Services recognized that it would need to purchase a geographic information system as the basis for integrating all departments' data. However, a utility official noted that the utility could not purchase this system for another 4 years because it would cost several million dollars to purchase the system, enter data, and train its staff to operate the new system. 
The International Infrastructure Management Manual notes that data collection is a continuous process and that utilities need to remain consistent in gathering data and updating their central asset inventory as they repair, replace, or add infrastructure. Regular updating ensures that the information remains useful over time. To sustain the benefits garnered from its efforts to compile an accurate inventory, the Eastern Municipal Water District adopted a policy whereby employees must document changes to the inventory whenever assets are added, repaired, or removed. The utility has also developed methods to enforce its policy to make sure that the inventory is updated as required. According to industry officials, one of the major challenges to implementing asset management is changing the way utilities typically operate—in separate departments that do not regularly exchange information. It is essential to change this management culture, these officials believe, to encourage interdepartmental coordination and information sharing. To encourage interdepartmental communication, utilities may have to train their employees in using the resources of other departments. For example, at the Orange County Sanitation District, the management team found it difficult to demonstrate to its employees that their job responsibilities do indeed affect the functions of the other departments. The utility’s field staff possesses extensive information about the condition and performance of assets because they maintain these assets every day. However, these employees did not understand that the engineering department needs feedback on how the assets it constructed are performing in the field. Such feedback could change future designs for these assets to improve their performance. As the utility implemented asset management, it established a work group to examine the conditions of asset failure, which provided a forum for the maintenance and engineering departments to collaborate. While this work group is still ongoing, one utility official noted that collaboration between these two departments will result in more efficient maintenance schedules for the utility’s assets. Similarly, the Eastern Municipal Water District reported that its middle-management team resisted some of the asset management changes because they believed these changes would limit their authority to manage their staff and workload. Before asset management, the utility maintained four different treatment facilities, each with its own maintenance staff. The utility believed that it could optimize its maintenance resources by combining all of the maintenance activities and staff at the four plants under one department. However, the managers at these treatment plants were reluctant to relinquish managerial control over their maintenance staff and feared that their equipment would be neglected. Once the new maintenance department was formed, however, these plant managers realized that centralizing these functions resulted in faster maintenance because the larger team could more effectively allocate time among the four facilities. In some instances, utility employees may be reluctant to accept comprehensive asset management because it requires them to take on additional responsibilities when they are already pressed for time in their “day jobs.” Additional time may indeed be necessary.
According to officials at different utilities we visited, asset management requires staff throughout the organization to attend a variety of training programs—introductory, refresher, and targeted training by function or job—to ensure that they understand the value of asset management to both their own jobs and the operation of the utility. While asset management provides utilities with information to justify needed rate increases, these justifications may not be effective because governing bodies and customers want to keep rates low. According to utility officials, governing bodies’ reluctance to increase rates may be linked to constituent pressure to hold down user rates. In 2002, we reported that 29 percent of drinking water and 41 percent of wastewater utilities serving populations over 10,000 did not cover their full cost of service through user rates in their most recent fiscal year. Furthermore, about half of these utilities did not regularly increase their user rates; rather, they raised their user rates infrequently—once, twice, or not at all—from 1992 to 2001. Utility officials and water industry organizations also note that utilities may have to respond to governing bodies’ interests rather than to the long-term plan they developed using comprehensive asset management. For instance, while the Orange County Sanitation District’s governing board has supported comprehensive asset management, it overrode utility plans for some capital projects and instead funded a $500 million secondary sewage treatment plant, which was not a utility priority. The board took this action in response to public concerns that the operating sewage plant was inadequate and had contaminated the water. A subsequent report showed, however, that the contamination more than likely did not result from an inadequate treatment plant. However, the utility will probably have to defer other priorities in order to design and build this new facility. In addition, the governing body may shift funding originally budgeted to implement the next phase of Orange County’s asset management program to fund the new plant. Several industry officials also pointed out that governing bodies for municipally owned utilities tend to make financial decisions about their drinking water and wastewater utilities in light of competing local needs that may be a higher priority for the electorate. One industry official also reported that locally elected officials tend to focus their efforts on short-term, more visible projects, while utility managers must focus on sustaining the utility’s operation in the long term. For example, a utility’s governing body may decide to forgo infrastructure repairs in order to build a new school or baseball field. Smaller utilities can also benefit from the improved data, coordination, and informed decision making that result from asset management. Although small utilities represent a substantial portion of the water and wastewater industry, officials recognize that these utilities may have more difficulty implementing asset management because they typically have fewer financial, technological, and staff resources. In addition, EPA has reported that small systems are less likely to cover their full cost of providing services because they have to spread their fixed infrastructure costs over a smaller customer base.
However, EPA believes that comprehensive asset management will enable smaller systems to increase knowledge of their system, make more informed financial decisions, reduce emergency repairs, and set better priorities for rehabilitation and replacement. Even the most rudimentary aspects of asset management can produce immediate benefits for small communities. For example, the Somersworth, New Hampshire, Department of Public Works and Utilities avoided a ruptured sewer main because it had collected data through its asset management initiative that mapped the location of critical pipelines. As a result, when a resident applied for a construction permit to build a garage, the utility determined that one critical pipeline lay in the path of the proposed construction and could rupture. Therefore, the city of Somersworth denied the permit. Similarly, the Department of Public Works in Denton, Maryland, which provides both drinking water and wastewater services, obtained positive results from applying asset management concepts without having to invest in sophisticated software or perform a complicated analysis. In this case, Denton’s city council was apprehensive about investing in new trucks for the utility even though some of the existing trucks were in poor condition. Council members believed that it would be less expensive to continue repairing the existing fleet. However, using data collected through their asset management initiative, utility managers were able to track the maintenance and depreciation costs associated with these vehicles. As a result, they could demonstrate to their governing body that it was more cost-effective to purchase new vehicles than to continue repairing the older trucks. Because smaller utilities have fewer capital assets to manage, industry officials noted that these utilities can implement asset management by turning to low-cost alternatives that do not require expensive or sophisticated technology. Small utilities can implement asset management by using their existing asset data and recording this information in a central location that all of their employees can access, such as a set of index cards or an Excel spreadsheet. Similarly, a small utility can adopt the practices of asset management incrementally, initially making asset decisions based on its existing data. Opportunities exist for EPA to encourage water utilities’ use of asset management by strengthening existing initiatives. Currently, EPA sponsors several initiatives to promote the use of asset management, such as training and informational materials, technical assistance, and research. While this is a good first step, the entities involved in these initiatives are not systematically sharing information within and across the drinking water and wastewater programs. With better coordination, however, EPA could leverage limited resources and reduce the potential for duplication within the agency. EPA could supplement its own efforts to disseminate information on asset management by taking advantage of similar efforts by other federal agencies, such as the Department of Transportation. Water industry officials also see a role for EPA in educating utility managers about how asset management can be a tool to help them meet regulatory requirements related to utility management. However, the officials raised concerns about the implications of mandating asset management as proposed in legislation being considered by the Congress.
Through partnerships with water industry associations and universities, EPA has supported the development of training and informational materials to help drinking water and wastewater utilities implement asset management. In particular, EPA contributed funding toward the development of a comprehensive industry handbook on asset management, which was published in 2002 under a cooperative agreement with the Association of Metropolitan Sewerage Agencies. The handbook lays out the principles of asset management and describes how utilities can use this approach to improve decision making, reduce costs, and ensure the long-term, high-level performance of their assets. EPA has also sponsored materials specifically directed at small utilities. For small drinking water systems, EPA’s Office of Ground Water and Drinking Water published a handbook in 2003 that describes the basic concepts of asset management and provides information on how to develop an asset management plan. In addition, to help entities such as mobile home parks and homeowners’ associations that own and operate their own water systems, the office is developing a booklet on preparing a simple inventory of the systems’ assets and assessing their condition. EPA’s Office of Wastewater Management is funding the development of a “toolkit” by a university-based training center to help small wastewater utilities implement asset management. The toolkit is currently being field tested and is scheduled for release in 2006. Among other things, it includes self-audit instruments to help utility managers analyze their systems’ needs, training materials, and a summary of lessons learned in the field. In addition to various informational materials on asset management, EPA has sponsored a number of training and technical assistance programs. For example, the Office of Wastewater Management, along with representatives from a major utility and an engineering firm, developed a 2-day seminar on asset management, which will be held at several locations around the country during fiscal year 2004. For smaller drinking water and wastewater utilities, EPA funds state and university-based centers that provide training and technical assistance to small utilities on a variety of matters, including asset management. Specifically, EPA’s Office of the Chief Financial Officer funds nine university-based “environmental finance centers” that assist local communities in seeking financing for environmental facilities, including municipal drinking water and wastewater utilities. In fiscal year 2003, the nine centers shared a total of $2 million in funding from the Office of the Chief Financial Officer; some centers also receive funds from EPA program offices for specific projects. According to an official in EPA’s Office of Ground Water and Drinking Water, at least three of the finance centers have efforts related to asset management planned or underway to benefit drinking water utilities. For example, the centers at Boise State University and the University of Maryland provide on-site and classroom training on establishing an asset inventory; collecting data on the age, useful life, and value of capital assets; recordkeeping; financing; and setting rates high enough to cover the full cost of service.
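As an illustration of the last topic on that list, setting rates high enough to cover the full cost of service, here is a minimal Python sketch of the break-even arithmetic; every figure and cost category is invented for illustration and reflects no particular utility's data or any center's curriculum.

```python
# Generic illustration of a user rate that recovers full cost of service;
# all dollar figures and the cost breakdown are invented, not real data.
operating_cost = 850_000              # annual O&M, dollars per year
debt_service = 210_000                # annual principal and interest
replacement_reserve = 140_000         # annualized rehabilitation/replacement
annual_full_cost = operating_cost + debt_service + replacement_reserve

billed_volume_kgal = 400_000          # thousand gallons billed per year

rate_per_kgal = annual_full_cost / billed_volume_kgal
print(f"Full cost of service: ${annual_full_cost:,} per year")
print(f"Break-even user rate: ${rate_per_kgal:.2f} per thousand gallons")
```

A rate below the break-even figure leaves part of the full cost of service unrecovered, which is the pattern the report describes at many utilities.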
Regarding the latter topic, Boise State’s finance center developed a simplified software program, called CAPFinance, which can help smaller systems collect and analyze the data they need in order to set adequate user rates; much of this information can be used to create a rudimentary asset management program. Another eight university-based technical assistance centers receive funding under the Safe Drinking Water Act to help ensure that small drinking water systems have the capacity they need to meet regulatory requirements and provide safe drinking water. In fiscal year 2003, the eight centers shared about $3.6 million in funding from the Office of Ground Water and Drinking Water. According to an official from that office, three of the centers are holding workshops or developing guidance manuals that focus on sustaining the financial viability of small systems in some way; the official believes that much of this material is relevant to implementing asset management. The Office of Wastewater Management funds 46 state and university-based environmental training centers under the Clean Water Act to train wastewater utility officials on financial management, operations and maintenance, and other topics. According to an official with EPA’s wastewater program, one of the 46 centers is developing a series of six training courses to help small wastewater utilities implement some of the basic elements of asset management, such as inventorying system assets and assessing their condition. Once this effort is completed, the center will disseminate the course materials to the remaining 45 centers so that staff from the other centers will be able to teach the asset management courses to operators of small wastewater utilities across the country. EPA has also funded research projects related to asset management. For example, one project—sponsored by EPA, the Water Environment Federation, and the Association of Metropolitan Sewerage Agencies—examined the interrelationship between asset management and other management initiatives, such as environmental management systems, that have received some attention within the water industry. The project found that to varying degrees, the initiatives share a common focus on continuous improvement through self-assessment, benchmarking, and the use of best practices and performance measures. The final report, issued in September 2002, concluded that while the initiatives overlap substantially, they are generally compatible. EPA also contributed $75,000 toward a 2002 report by the Water Environment Research Foundation, which summarized the results of a 2-day workshop held to develop a research agenda for asset management. Workshop participants, who included utility managers, regulators, and industry consultants, identified areas in which they need improved tools and technical approaches, established criteria for evaluating asset management research needs, and identified and set priorities for specific research projects. According to the foundation’s report, the workshop ultimately recommended 11 research projects, 2 of which will get underway in 2004. EPA is contributing $200,000 to one of these projects, which will develop protocols for assessing the condition and performance of infrastructure assets and predictive models for correlating the two. The foundation will fund the second project, which is scheduled to begin in March 2004, and will develop guidance on strategic planning for asset management.
According to EPA, the second project will also develop a Web-based collection of best practices on asset management; utilities will be able to purchase licenses to gain access to the materials. The remaining research projects identified in the workshop highlight the need for practical tools to help utilities implement the most fundamental aspects of asset management. They include projects to establish methodologies for determining asset value, compiling inventories, and capturing and compiling information on the assets’ attributes; develop methodologies for calculating life-cycle costs for infrastructure assets; construct predictive models for infrastructure assets that project life-cycle costs and risks; identify best practices for operating and maintaining infrastructure assets by asset category, condition, and performance requirements; and identify best practices for integrating water and wastewater utility databases. In addition, workshop participants recommended a project to assess the feasibility of establishing an Asset Management Standards Board for the drinking water and wastewater industry. EPA could build on its efforts to promote asset management at drinking water and wastewater utilities by better coordinating ongoing and planned initiatives in the agency’s drinking water and wastewater programs. In addition, EPA could leverage the efforts of other federal agencies, such as the Department of Transportation, that have more experience in promoting asset management, as well as informational materials and tools that could potentially be useful as EPA and the water industry develop similar materials. While some of EPA’s efforts to promote the use of asset management, such as sponsoring the comprehensive industry handbook, have involved both the drinking water and wastewater communities, it appears that other efforts are occurring with little coordination between the drinking water and wastewater programs or other offices within EPA. For example, the Office of the Chief Financial Officer, the Office of Ground Water and Drinking Water, and the Office of Wastewater Management have funded parallel but separate efforts to develop handbooks, software, or other training materials to help small drinking water and wastewater utilities implement asset management or related activities such as improving financial viability. According to our interviews with EPA officials and representatives of the university-based training and technical assistance centers, no central repository exists for EPA to track what the university-based centers are doing and ensure that they have the information they need to avoid duplication and take advantage of related work done by others. The centers that share information do so primarily within their own network, as in the case of the environmental finance centers, or share information on an ad hoc basis. As a result, the centers are likely to miss some opportunities to exchange information. Similarly, the drinking water and wastewater program offices do not regularly exchange information on what they or their centers are doing to develop informational materials, training, or technical assistance on asset management. EPA officials explained that, to some extent, the organizational framework within which the centers operate contributes to limited information sharing and duplication of effort. As a result, EPA is not maximizing the resources it devotes to encouraging utilities’ use of asset management.
In the case of the environmental finance centers, for example, each one negotiates a work plan with the EPA regional office it serves. Although EPA headquarters also has some influence over what the centers work on, the centers primarily focus on regional priorities and work with the states within the regional office’s jurisdiction. Occasionally, EPA’s drinking water and wastewater program offices fund projects at the environmental finance centers that are independent of their regional work plans. For example, the drinking water program provided some funds to the center at Boise State to develop an evaluation tool that states can use to assess utilities’ qualifications for obtaining financial assistance from state revolving loan funds. For the most part, however, the training and technical assistance centers operate autonomously and do not have a formal mechanism for regularly exchanging information among the different center networks or between the drinking water and wastewater programs. EPA has not taken advantage of the guidance, training, and implementation tools available from other federal agencies, which would help EPA leverage its resources. For the purposes of our review, we focused on the Department of Transportation’s Federal Highway Administration because it has been involved in promoting asset management for about a decade and has been at the forefront of developing useful tools and training materials. In 1999, the Federal Highway Administration established an Office of Asset Management to develop tools and other materials on asset management and encourage state transportation agencies to adopt asset management programs and practices. According to officials within the Office of Asset Management, the basic elements of asset management are the same regardless of the type of entity responsible for managing the assets or the type of assets being managed. Simply put, every organization needs to know the assets it has, their condition, how they are performing, and the costs and benefits of alternatives for managing the assets. Over the years, the Office of Asset Management has published several guidance documents on asset management and its basic elements. While the purpose of the guidance was to assist state transportation agencies, Transportation officials believe that the general principles contained in their publications are universally applicable. The office’s guidance includes, for example, a general primer on the fundamental concepts of asset management; a primer on data integration that lays out the benefits of and tools for integrating data, the steps to follow in linking or combining large data files, potential obstacles to data integration and ways to overcome them, and experiences of agencies that have integrated their data; and a primer on life-cycle cost analysis that provides information on how to apply this methodology for comparing investment alternatives and describes uncertainties regarding when and how to use life-cycle cost analysis and what assumptions should be made during the course of the analysis. Transportation’s Office of Asset Management has also developed a software program to assist states in estimating how different levels of investment in highway maintenance will affect both user costs and the highways’ future condition and performance. 
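The life-cycle cost analysis that the primer covers can be sketched briefly in Python; the cash flows, 20-year horizon, and 5 percent discount rate below are assumptions for illustration and are not drawn from the Federal Highway Administration's guidance or software.

```python
# Minimal life-cycle cost comparison of two investment alternatives using
# present value; all cash flows and the discount rate are invented.
def present_value(cash_flows, rate):
    """Discount a list of (year, cost) pairs to year-0 dollars."""
    return sum(cost / (1 + rate) ** year for year, cost in cash_flows)

DISCOUNT_RATE = 0.05
# Alternative A: rehabilitate now, heavier annual upkeep for 20 years.
rehabilitate = [(0, 400_000)] + [(y, 60_000) for y in range(1, 21)]
# Alternative B: replace now, light annual upkeep for 20 years.
replace = [(0, 1_200_000)] + [(y, 15_000) for y in range(1, 21)]

pv_rehab = present_value(rehabilitate, DISCOUNT_RATE)
pv_replace = present_value(replace, DISCOUNT_RATE)
print(f"Rehabilitate: ${pv_rehab:,.0f}   Replace: ${pv_replace:,.0f}")
print("Lower life-cycle cost:", "rehabilitate" if pv_rehab < pv_replace else "replace")
```

The point of the method is that the cheaper-looking first cost is not necessarily the cheaper alternative once all future costs are discounted to the same year.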
In addition, to disseminate information on asset management, the office established a Web site that includes its most recent tools and guidance and links to external Web sites with related asset management information, including a link to an asset management Web site jointly sponsored with the American Association of State Highway and Transportation Officials. As EPA began its efforts to explore the potential of comprehensive asset management to help address utility infrastructure needs, officials from the Office of Water met with staff from Transportation’s Office of Asset Management and obtained a detailed briefing on its asset management program. Although EPA officials expressed concerns about having relatively limited resources to promote asset management, they have so far not pursued a closer relationship with Transportation or other federal agencies with experience in the field. For example, EPA may find opportunities to adapt Transportation’s guidance materials or use other efforts, such as a Web site that brings together asset management information from diverse sources, as a model for its own initiatives. Water industry officials support a greater role for EPA in promoting asset management, both as a tool for better managing infrastructure and for helping drinking water and wastewater utilities meet existing or proposed regulatory requirements. However, they stopped short of endorsing legislative proposals that would require utilities to develop and implement plans for maintaining, rehabilitating, and replacing capital assets, often as a condition of obtaining loans or other financial assistance. To obtain views on the role that EPA might play in encouraging the use of asset management, we talked with officials from water industry associations and the 15 utilities that we selected for structured interviews. With few exceptions, the officials agreed that EPA should be promoting asset management in some way, although opinions varied on what activities would be most appropriate. One of the options that garnered the support of many was a greater leadership role for EPA in promoting the use of asset management. For example, 11 of the 15 utilities indicated that based on their own experience, asset management can help utilities comply with certain regulatory requirements that focus in whole or in part on the adequacy of utility infrastructure and the management practices that affect it. While EPA recognizes the link between asset management and regulatory compliance—and has noted the connection in some agency publications and training—some utility officials believe that EPA should increase its efforts in this regard. As examples of regulatory requirements for which asset management is particularly germane, officials from industry associations and individual utilities cited both the existing “capacity development” requirements under EPA’s drinking water program and regulations for capacity, management, operation, and maintenance under consideration in the wastewater program, as follows: Capacity development requirements for drinking water utilities. To be eligible for full funding under the Safe Drinking Water Act’s State Revolving Fund program, state regulatory agencies are required to have strategies to assist drinking water utilities in acquiring and maintaining the financial, managerial, and technical capacity to consistently provide safe drinking water. 
To assess capacity, states evaluate, among other things, the condition of the utilities’ infrastructure, the adequacy of maintenance and capital improvement programs, and the adequacy of revenues from user rates to cover the full cost of service. Drinking water utilities that are determined to lack capacity are not eligible for financial assistance from the revolving loan fund. Capacity, management, operation, and maintenance requirements for wastewater utilities. As part of its wastewater management program under the Clean Water Act, EPA is considering regulations designed to improve the performance of treatment facilities and protect the nation's collection system infrastructure by enhancing and maintaining system capacity (i.e., peak wastewater flows), reducing equipment and operational failures, and extending the life of sewage treatment equipment. Among other things, wastewater utilities would be required to prepare capacity, management, operation, and maintenance plans for their operations. The regulations would also require utilities to assess the condition of their physical infrastructure and determine which components need to be repaired or replaced. According to industry officials, implementing asset management is consistent with meeting these requirements, and it enhances utilities’ ability to comply with them. For the requirements being considered for wastewater utilities, for example, EPA has concluded that three basic components are a facility inventory, a condition assessment, and asset valuation—all of which are important elements of asset management. Consequently, the officials believe that it makes sense for EPA to place more emphasis on the use of comprehensive asset management. Some water industry officials also told us that EPA should use the relationship between asset management practices and the financial reporting requirements under Governmental Accounting Standards Board Statement 34 as a means of promoting the use of asset management. Under these new requirements, state and local governments are required to report information about public infrastructure assets, including their drinking water and wastewater facilities. Specifically, the governments must either report depreciation of their capital assets or implement an asset management system. Given the infrastructure-related regulatory requirements and utilities’ other concerns about the condition of their assets, it is not surprising that 11 of the 15 utilities we interviewed in depth saw a need for EPA to set up a clearinghouse of information on comprehensive asset management. Several utilities suggested that EPA establish a Web site that would serve as a central repository of such information. This site could provide drinking water and wastewater utilities with direct and easy access to information that would help them better manage their infrastructure. For example, the Web site could gather in one place the guidance manuals, tools, and training materials developed by EPA or funded through research grants and its training and technical assistance centers. The site could also contain links to asset management tools and guidance developed by domestic and international water associations or other federal agencies, such as Transportation’s Office of Asset Management. Several officials also commented that it might be useful to have a site where drinking water and wastewater utilities could share lessons learned from implementing asset management. 
Other utilities also supported the idea of a Web site, but were uncertain about whether EPA was the appropriate place for it. In commenting on a draft of this report, EPA generally agreed that an EPA Web site devoted to asset management would be worthwhile and is considering developing such a site. In recent years, the Congress has considered several legislative proposals that would, in part, promote the use of asset management in some way. These proposals generally call for an inventory of existing capital assets; some type of plan for maintaining, repairing, and replacing the assets; and a plan for funding such activities. All but one of the proposals made having the plans a condition of obtaining federal financial assistance. The proposals are consistent with what we have found to be the leading practices in capital decision making. As we reported in 1998, for example, routinely assessing the condition of assets allows managers to evaluate the capabilities of existing assets, plan for future replacements, and calculate the cost of deferred maintenance. However, according to key stakeholders, implementing and enforcing requirements for asset management could be problematic at this time. We asked water industry groups, associations of state regulators, and individual utilities for their views on the proposed mandate of asset management plans. While most of them endorse asset management, they raised several concerns about a statutory requirement. For example: Officials from water industry associations believe that drinking water and wastewater utilities are already overburdened by existing regulatory requirements and that many utilities lack the resources to meet an additional requirement for developing asset management plans. The Association of State Drinking Water Administrators and the Association of State and Interstate Water Pollution Control Administrators both said that the states lack the resources to oversee compliance and determine the adequacy of asset management plans. Both the state and industry associations questioned the feasibility of defining what would constitute an adequate plan. Officials at 12 of the 15 utilities where we conducted in-depth interviews had serious reservations about a requirement. For example, some utility managers were concerned that EPA and the states would attempt to standardize asset management and limit the flexibility that utilities need to tailor asset management to their own circumstances. Another concern was that the states lack financial and technical resources and thus are ill equipped to determine whether utilities’ asset management plans are adequate. Finally, some utility officials also questioned the burden that such a requirement would place on small utilities. Other utility officials either support a requirement or support the concept of asset management but question whether mandating such a requirement is an appropriate role for the federal government. One of the officials commented that whether or not asset management is required, utilities should manage their infrastructure responsibly and charge rates sufficient to cover the full cost of service. The National Association of Water Companies, which represents investor-owned utilities, supports a requirement for asset management to ensure that public water and wastewater utilities are operating efficiently and are charging rates that cover the full cost of service. 
Comprehensive asset management shows real promise as a tool to help drinking water and wastewater utilities better identify and manage their infrastructure needs. Even with their limited experience to date, water utilities reported that they are already achieving significant benefits from asset management. EPA clearly recognizes the potential of this management tool to help ensure a sustainable water infrastructure and has sponsored a number of initiatives to support the development of informational materials and encourage the use of asset management. However, in an era of limited resources, it is particularly important for EPA to get the most out of its investments by coordinating all of the asset management-related activities sponsored by the agency and taking advantage of tools and training materials developed by others—including domestic and international industry associations and other federal agencies with experience in asset management. Establishing a central repository of all asset management-related activities could not only foster more systematic information sharing but also help minimize the potential for duplication and allow EPA-sponsored training and technical assistance centers to build on each other’s efforts. As EPA has recognized, improving utilities’ ability to manage their infrastructure cannot help but improve their ability to meet regulatory requirements that focus on the adequacy of utility infrastructure and management practices. Consequently, it is in the agency’s best interest to disseminate information on asset management and promote its use. Establishing a Web site, perhaps as part of the repository, would help ensure that such information is accessible to water utilities and that EPA is getting the most use out of the materials whose development it funded. Moreover, EPA could use the site as a means of strengthening its efforts to educate utility managers on the connection between effectively managing capital assets and the ability to comply with relevant requirements under the Safe Drinking Water Act and Clean Water Act. Given the potential of comprehensive asset management to help water utilities better identify and manage their infrastructure needs, the Administrator, EPA, should take steps to strengthen the agency’s existing initiatives on asset management and ensure that relevant information is accessible to those who need it. Specifically, the Administrator should better coordinate ongoing and planned initiatives to promote comprehensive asset management within and across the drinking water and wastewater programs to leverage limited resources and reduce the potential for duplication; explore opportunities to take advantage of asset management tools and informational materials developed by other federal agencies; strengthen efforts to educate utilities on how implementing asset management can help them comply with certain regulatory requirements that focus in whole or in part on the adequacy of utility infrastructure and the management practices that affect it; and establish a Web site to provide a central repository of information on comprehensive asset management so that drinking water and wastewater utilities have direct and easy access to information that will help them better manage their infrastructure. 
| Having invested billions of dollars in drinking water and wastewater infrastructure, the federal government has a major interest in protecting its investment and in ensuring that future assistance goes to utilities that are built and managed to meet key regulatory requirements. The Congress has been considering, among other things, requiring utilities to develop comprehensive asset management plans. Some utilities are already implementing asset management voluntarily. The asset management approach minimizes the total cost of buying, operating, maintaining, replacing, and disposing of capital assets during their life cycles, while achieving service goals. This report discusses (1) the benefits and challenges for water utilities in implementing comprehensive asset management and (2) the federal government's potential role in encouraging utilities to use it. Drinking water and wastewater utilities that GAO reviewed reported benefiting from comprehensive asset management but also finding certain challenges. The benefits include (1) improved decision making about their capital assets and (2) more productive relationships with governing authorities, rate payers, and others. For example, utilities reported that collecting accurate data about their assets provides a better understanding of their maintenance, rehabilitation, and replacement needs and thus helps utility managers make better investment decisions. Among the challenges to implementing asset management, utilities cited collecting and managing needed data and making the cultural changes necessary to integrate information and decision making across departments. Utilities also reported that the shorter-term focus of their governing bodies can hamper long-term planning efforts. EPA currently sponsors initiatives to promote the use of asset management, including educational materials, technical assistance, and research. While this is a good first step, GAO found that EPA could better coordinate some activities. For example, EPA has no central repository to facilitate information sharing within and across its drinking water and wastewater programs, which would help avoid duplication of effort. Water industry officials see a role for EPA in promoting asset management as a tool to help utilities meet infrastructure-related regulatory requirements; they also noted that establishing an EPA Web site would be useful for disseminating asset management information to utilities. The officials raised concerns, however, about the implications of mandating asset management, citing challenges in defining an adequate asset management plan and in the ability of states to oversee and enforce compliance. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
In September 2003, DOD finalized its economic analysis for DTS in preparation for a milestone decision review. The highlights of the economic analysis are shown in table 1. In December 2003, the DOD Chief Information Officer granted approval for DTS to proceed with full implementation throughout the department. Our analysis of the September 2003 DTS economic analysis found that two key assumptions used to estimate cost savings were not based on reliable information. Consequently, the economic analysis did not serve to help ensure that the funds invested in DTS were used in an efficient and effective manner. Two primary areas—personnel savings and reduced CTO fees—represented the majority of the over $56 million of estimated annual net savings DTS was expected to realize. However, the estimates used to generate these savings were unreliable. Further, DOD did not effectively implement the policies relating to developing economic analyses for programs such as DTS. Effective implementation of these policies should have highlighted the problems that we found and allowed for appropriate adjustments so that the economic analysis could have served as a useful management tool in making funding decisions related to DTS—which is the primary purpose of such an analysis. While the department’s system acquisition criteria do not require that a new economic analysis be prepared, the department’s business system investment management structure provides an opportunity for DOD management to assess whether DTS is meeting its planned cost, schedule, and functionality goals. The economic analysis estimated that the annual personnel savings were over $54 million, as shown in table 2. Approximately 45 percent of the estimated savings, or $24.2 million, was attributable to the Air Force and Navy. The assumption behind the personnel savings computation was that there would be less manual intervention in the processing of travel vouchers for payment, and therefore fewer staff would be needed. However, based on our discussions with Air Force and Navy DTS program officials, it is questionable how the estimated savings will be achieved. Air Force and Navy DTS program officials stated that they did not anticipate a reduction in the number of personnel with the full implementation of DTS, but rather the shifting of staff to other functions. According to DOD officials responsible for reviewing economic analyses, while shifting personnel to other functions is considered a benefit, it should be considered an intangible benefit rather than tangible dollar savings since the shifting of personnel does not result in a reduction of DOD expenditures. Also, as part of the Navy’s overall evaluation of the economic analysis, program officials stated that “the Navy has not identified, and conceivably will not recommend, any personnel billets for reduction.” Finally, the Naval Cost Analysis Division (NCAD) October 2003 report on the economic analysis noted that it could not validate approximately 40 percent of the Navy’s total costs, including personnel costs, in the DTS life-cycle cost estimates because credible supporting documentation was lacking. The report also noted that the PMO-DTS used unsound methodologies in preparing the DTS economic analysis. The extent of personnel savings for the Army and defense agencies, which are reported as $16 million and $6.3 million, respectively, is also unclear.
The Army and many defense agencies use the Defense Finance and Accounting Service (DFAS) to process their travel vouchers, so the personnel savings for the Army and the defense agencies were primarily related to reductions in DFAS’s costs. In our discussions, DFAS officials were unable to estimate the actual personnel savings that would result since they did not know (1) the number of personnel, like those at the Air Force and Navy, that would simply be transferred to other DFAS functions or (2) the number of personnel that could be used to avoid additional hiring. For example, DFAS expects that some of the individuals assigned to support the travel function could be moved to support its ePayroll program. Since these positions would need to be filled regardless of whether the travel function is reduced, transferring personnel from travel to ePayroll would reduce DOD’s overall costs because DFAS would not have to hire additional individuals. DOD strongly objected to our finding that the personnel savings are unrealistic. In its written comments, the department stated that it is facing an enormous challenge and continues to identify efficiencies and eliminate redundancies to help leverage available funds. We fully recognize that the department is attempting to improve the efficiency and effectiveness of its business operations. The Comptroller General of the United States testified in August 2006 that increased commitment by the department to address DOD’s numerous challenges represents an improvement over past efforts. The fact remains, however, that the results of an economic analysis are intended to help management decide if future investments in a given endeavor are worthwhile. In order to provide management with this information, it is imperative that the underlying assumptions in an economic analysis be valid and well supported. The September 2003 economic analysis noted that personnel savings of $54.1 million would be realized by the department annually for fiscal years 2009 through 2016. However, based on our review and analysis of documentation and discussion with department personnel, we found that the underlying assumptions in support of the $54.1 million were not valid, particularly in regard to the amounts estimated for the Navy and Air Force. For example, we agree with the statements of DOD officials who indicated that the shifting of personnel to other functions cannot be counted towards tangible dollar savings, since such actions do not result in a reduction of DOD expenditures. Moreover, the department did not provide any new data or related documentation in its comments to counter our finding. As a result of these factors, we continue to believe that the estimated annual personnel savings of $54.1 million are unrealistic. According to the September 2003 economic analysis, DOD expected to realize annual net savings of $31 million through reduced fees paid to the CTOs because the successful implementation of DTS would enable the majority of airline tickets to be acquired with either no or minimal intervention by the CTOs. These are commonly referred to as “no touch” transactions.
However, DOD did not have a sufficient basis to estimate the number of transactions that would be considered “no touch” since (1) the estimated percentage of transactions that could be processed as “no touch” was not supported and (2) the analysis did not properly consider the effects of components that use management fees, rather than transaction fees, to compensate the CTOs for services provided. The weaknesses we identified with the estimating process raise serious questions as to whether DOD will realize substantial portions of the estimated annual net savings of $31 million. DOD arrived at the $31 million of annual savings in CTO fees by estimating that 70 percent of all DTS airline tickets would be considered “no touch” and then multiplying these tickets by the savings per ticket in CTO fees. However, a fundamental flaw in this analysis was that the 70 percent assumption had no solid basis. We requested, but the PMO-DTS could not provide, any analysis of travel data to support the assertion. Rather, the sole support provided by the PMO-DTS was an article in a travel industry trade publication. The article was not based on information related to DTS, but rather on the experience of one private sector company. The economic analysis assumed that DOD could save about $13.50 per “no touch” ticket. Since that analysis, DOD has awarded one contract that specifically prices transactions using the same model as that envisioned by the economic analysis. This contract applies to the Defense Travel Region 6 travel area. During calendar year 2005, the difference in fees for “no touch” transactions and the transactions supported by the current process averaged between $10 and $12, depending on when the fees were incurred, because the contract rates changed during 2005. In analyzing travel voucher data for Region 6 for calendar year 2005, we found that the reported “no touch” rate was, at best, 47 percent—far less than the 70 percent envisioned in the economic analysis (see the sketch below). PMO-DTS program officials stated they are uncertain as to why the anticipated 70 percent “no touch” rate was not being achieved. According to PMO-DTS program officials, this could be attributed, in part, to the DOD travelers being uncomfortable with the system and with making reservations without using a CTO. Although this may be one reason, other factors may also affect the expected “no touch” rate. For example, we were informed that determining airline availability and making the associated reservation can be accomplished, in most cases, rather easily. However, obtaining information related to hotels and rental cars and making the associated reservation can be more problematic because of the limitations in the data that DTS is able to obtain from its commercial sources. Accordingly, while a traveler may be able to make a “no touch” reservation for the airline portion of the trip, the individual may need to contact the CTO in order to make hotel or rental car reservations. When this occurs, rather than paying a “no touch” fee to the CTO, DOD ends up paying a higher fee, which eliminates the savings estimated in the economic analysis. The economic analysis assumed that (1) DOD would be able to modify the existing CTO contracts to achieve a substantial reduction in fees paid to a CTO when DTS was fully implemented across the department and (2) all services would use the fee structure called for in the new CTO contracts. The first part of the assumption is supported by results of the CTO contract for DOD Region 6 travel.
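To show how sensitive the $31 million estimate is to these two inputs, the following back-of-the-envelope Python sketch recomputes the savings using the observed Region 6 figures; the annual ticket volume is inferred from the estimate itself and is an assumption for illustration, not a DOD figure.

```python
# Sensitivity check on the CTO-fee savings estimate. The ticket volume is
# backed out of the $31 million figure; it is an illustrative assumption.
ASSUMED_NO_TOUCH = 0.70        # rate assumed in the economic analysis
ASSUMED_SAVINGS = 13.50        # dollars saved per "no touch" ticket (analysis)
ESTIMATED_SAVINGS = 31_000_000

implied_tickets = ESTIMATED_SAVINGS / (ASSUMED_NO_TOUCH * ASSUMED_SAVINGS)

OBSERVED_NO_TOUCH = 0.47       # best observed Region 6 rate, calendar year 2005
OBSERVED_SAVINGS = 11.00       # midpoint of the $10 to $12 Region 6 differential

recomputed = implied_tickets * OBSERVED_NO_TOUCH * OBSERVED_SAVINGS
print(f"Implied annual tickets: {implied_tickets:,.0f}")
print(f"Savings at observed rates: ${recomputed:,.0f}")  # about $17 million
```

Under these assumptions the recomputed savings come to roughly $17 million, a little over half the estimate, which is the kind of gap a sensitivity analysis in the original economic analysis would have surfaced.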
The fees for the DTS “no touch” transactions were at least $10 less than if a CTO was involved in the transactions. However, to date, the department has experienced difficulty in awarding new contracts with the lower fee structure. On May 10, 2006, the department announced the cancellation of the solicitation for a new contract. According to the department, it decided that the solicitation needed to be rewritten based on feedback from travel industry representatives at a March 28, 2006, conference. The department acknowledged that the “DTS office realized its solicitation didn’t reflect what travel agency services it actually needed.” The department would not say how the solicitation would be refined, citing the sensitivity of the procurement process. The department also noted that the new solicitation would be released soon, but provided no specific date. The economic analysis assumed that the Navy would save about $7.5 million, almost 25 percent, of the total savings related to CTO fees once DTS is fully deployed. The economic analysis averaged the CTO fees paid by the Army, the Air Force, and the Marine Corps—which amounted to about $18.71 per transaction—to compute the savings in Navy CTO fees. Using these data, the assumption was made in the economic analysis that a fee of $5.25 would be assessed for each ticket, resulting in an average savings of $13.46 per ticket for the Navy ($18.71 minus $5.25). While this approach may be valid for the organizations that pay individual CTO fees, it may not be representative for organizations such as the Navy that pay a management fee. The management fee charged the Navy is the same regardless of the involvement of the CTO—therefore, the reduced “no touch” fee would not apply. We were informed by Navy DTS program officials that they were considering continuing the use of management fees after DTS is fully implemented. According to Navy DTS program officials, they paid about $14.5 million during fiscal year 2005 for CTO management fees, almost $19 per ticket for approximately 762,700 tickets issued. Accordingly, even if the department arrives at a new CTO contract containing the new fee structure or fees similar to those of Region 6, the estimated savings related to CTO fees for the Navy will not be realized if the Navy continues to use the management fee concept. Effective implementation of DOD guidance would have detected the types of problems discussed above and resulted in an economic analysis that would have accomplished the stated objective of the process—to help ensure that the funds invested in DTS were used efficiently and effectively. DOD policy and OMB guidance require that an economic analysis be based on facts and data and be explicit about the underlying assumptions used to arrive at estimates of future benefits and costs. Since an economic analysis deals with costs and benefits occurring in the future, assumptions must be made to account for uncertainties. DOD policy recognizes this and provides a systematic approach to the problem of choosing the best method of allocating scarce resources to achieve a given objective. A sound economic analysis recognizes that there are alternative ways to meet a given objective and that each alternative requires certain resources and produces certain results. The purpose of the economic analysis is to give the decision maker insight into economic factors bearing on accomplishing the objectives. 
Therefore, it is important to identify factors, such as cost and performance risks and drivers, that can be used to establish and defend priorities and resource allocations. The DTS economic analysis did not comply with the DOD policy, and the weaknesses we found should have been detected had the DOD policy been effectively implemented. The PMO-DTS had adequate warning signs of the potential problems associated with not following the OMB and DOD guidance for developing an effective economic analysis. For example, as noted earlier, the Air Force and Navy provided comments when the economic analysis was being developed that the expected benefits being claimed were unrealistic. Just removing the benefits associated with personnel savings from the Air Force and Navy would have reduced the overall estimated program cost savings by almost 45 percent. This would have put increased pressure on the credibility of using a 70 percent “no touch” utilization rate. Specific examples of failures to effectively implement the DOD policy on conducting economic analyses include the following: (1) the DTS life-cycle cost estimates portion of the economic analysis was not independently validated as specified in DOD’s guidance, and (2) the September 2003 DTS economic analysis did not undertake an assessment of the effects of the uncertainty inherent in the estimates of benefits and costs, as required by DOD and OMB guidance. Because an economic analysis uses estimates and assumptions, it is critical that a sensitivity analysis be performed to understand the effects of the imprecision in both underlying data and modeling assumptions. Our September 2005 testimony and January 2006 report noted the challenge facing the department in attaining the anticipated DTS utilization. While DOD has acknowledged the underutilization, we found that the department does not have reasonable quantitative metrics to measure the extent to which DTS is actually being used across DOD. Presently, the reported DTS utilization is based on a DTS Voucher Analysis Model that was developed in calendar year 2003 using estimated data, but over the years has not been completely updated with actual data. While the military services have initiated actions to help increase the utilization of DTS, they pointed out that ineffective DTS training is a contributing factor to their lower than expected usage rates. The DTS Voucher Analysis Model was prepared in calendar year 2003 and was based on airline ticket and voucher count data that were reported by the military services and defense agencies, but the data were not verified or validated. Furthermore, PMO-DTS officials acknowledged that the model has not been completely updated with actual data as DTS continues to be implemented at the 11,000 sites. We found that the Air Force is the only military service that submits monthly metrics to the PMO-DTS officials for their use in updating the DTS Voucher Analysis Model. Rather than reporting utilization based on individual site system utilization data, the PMO-DTS continues to rely on outdated information in the reporting of DTS utilization to DOD management and the Congress. We have previously reported that best business practices indicate that a key factor of project management and oversight is the ability to effectively monitor and evaluate a project’s actual performance against what was planned.
In order to perform this critical task, best business practices require the adoption of quantitative metrics to help measure the effectiveness of a business system implementation and to continually measure and monitor results, such as system utilization. The lack of accurate and pertinent utilization data hinders management’s ability to monitor its progress toward the DOD vision of DTS as the standard travel system, as well as to provide consistent and accurate data to Congress. With the shift of the DTS program to the Business Transformation Agency (BTA), which now makes DTS an enterprisewide endeavor, improved metrics and training are essential if DTS is to be DOD’s standard, integrated, end-to-end travel system for business travel. DTS’s reported utilization rates for the period October 2005 through April 2006 averaged 53 percent for Army, 30 percent for Navy, and 39 percent for Air Force. Because the PMO-DTS was not able to identify the total number of travel vouchers that should have been processed through DTS (total universe of travel vouchers), these utilization rates may be over- or understated. PMO-DTS program officials confirmed that the reported utilization data were not based on complete data because the department did not have comprehensive information to identify the universe or the total number of travel vouchers that should be processed through DTS. PMO-DTS program and DTS military service officials agreed that the actual DTS utilization rate should be calculated by comparing actual vouchers being processed in DTS to the total universe of vouchers that should be processed in DTS. The universe would exclude those travel vouchers that cannot be processed through DTS, such as those related to permanent change of station travel. The Air Force was the only military service that attempted to obtain data on (1) the actual travel vouchers processed through DTS and (2) those travel vouchers that were eligible to be processed through DTS, but were not. These data were site-specific. For example, during the month of December 2005, the PMO-DTS reported that at Wright-Patterson Air Force Base, 2,880 travel vouchers were processed by DTS, and the Air Force reported that another 2,307 vouchers were processed through the legacy system—the Reserve Travel System (RTS). Of those processed through RTS, Air Force DTS program officials stated that 338 travel vouchers should have been processed through DTS. DTS Air Force program officials further stated that they submitted to the PMO-DTS the number of travel vouchers processed through RTS each month. These data are used by the PMO-DTS to update the DTS Voucher Analysis Model. However, neither the Air Force nor the PMO-DTS has verified the accuracy and reliability of the data. Therefore, the accuracy of the utilization rates reported for the Air Force by the PMO-DTS is not known. Because Army and Navy DTS program officials did not have the information to identify the travel transactions that should have been processed through DTS, the Army and Navy did not have a basis for evaluating DTS utilization at their respective military locations and activities. Furthermore, Navy DTS program officials indicated that the utilization data that the PMO-DTS program officials reported for the Navy were not accurate.
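The utilization calculation that PMO-DTS and service officials agreed on is simple to state once the universe of DTS-eligible vouchers is known. Below is a minimal sketch, using the December 2005 Wright-Patterson figures cited above and assuming, for illustration only, that the DTS-eligible RTS vouchers are the only eligible vouchers processed outside DTS.

```python
# Sketch of the agreed-upon DTS utilization calculation (illustrative).
# Figures are the December 2005 Wright-Patterson AFB numbers cited above.

dts_vouchers = 2_880       # vouchers actually processed through DTS
rts_vouchers = 2_307       # vouchers processed through the legacy RTS
rts_dts_eligible = 338     # of those, vouchers that should have gone through DTS

# The universe excludes vouchers DTS cannot process (e.g., permanent
# change of station travel); here we assume it is the DTS vouchers plus
# the DTS-eligible vouchers processed through RTS.
universe = dts_vouchers + rts_dts_eligible

utilization = dts_vouchers / universe
print(f"DTS utilization: {utilization:.1%}")    # ~89.5%
```

The difficulty the report describes is not the arithmetic but the inputs: without verified counts of eligible vouchers across all legacy systems, the resulting rate may be over- or understated.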
According to Navy DTS program officials, the Navy’s primary source of utilization data was the monthly metrics reports provided by the PMO-DTS, but Navy DTS program officials questioned the accuracy of the Navy utilization reports provided by the PMO-DTS. Although the military services have issued various memorandums aimed at increasing the utilization of DTS, the military service DTS program officials all pointed to ineffective training as a primary cause of DTS not being utilized to a far greater extent. The following examples highlight the concerns raised by the military service officials: Army DTS program officials emphasized that the DTS system is complex and the design presents usability challenges for users—especially for first-time or infrequent users. They added that a major concern is that there is no PMO-DTS training for existing DTS users as new functionality is added to DTS. These officials stated that the PMO-DTS does not do a good job of informing users about functionality changes made to the system. We inquired whether the Help Desk was able to resolve the users’ problems, and the Army DTS officials simply stated “no.” The Army officials further pointed out that it would be beneficial if the PMO-DTS improved the electronic training on the DTS Web site and made the training documentation easier to understand. Also, improved training would help infrequent users adapt to system changes. The Army officials noted that without some of these improvements to resolve usability concerns, DTS will continue to be extremely frustrating and cumbersome for travelers. Navy DTS program officials stated that DTS lacks adequate user/traveler training. The train-the-trainer concept of training system administrators who could then effectively train all their travelers has been largely unsuccessful. According to Navy officials, this has resulted in many travelers and users attempting to use DTS with no or insufficient training. The effect has been to frustrate users at each step of the travel process and to discourage use of DTS. Air Force officials stated that new DTS system releases are implemented with known problems, but the sites are not informed of the problems. Workarounds are not provided until after the sites begin encountering problems. Air Force DTS program officials stated that DTS releases did not appear to be well tested prior to implementation. Air Force officials also stated that there was insufficient training on new functionality. PMO-DTS and DTS contractor program officials believed that conference calls to discuss new functionality with the sites were acceptable training, but Air Force officials did not agree. The Air Force finance office was expected to fully comprehend the information received from those conference calls and provide training on the new functionality to users/approvers, but these officials stated that this was an unrealistic expectation. As discussed in our September 2005 testimony and January 2006 report, the unnecessary continued use of the legacy travel systems results in the inefficient use of funds because the department is paying to operate and maintain duplicative systems that perform the same function—travel. Our September 2005 testimony and January 2006 report noted problems with DTS’s ability to properly display flight information and traced those problems to inadequate requirements management and testing.
DOD stated that it had addressed those deficiencies, and in February 2006, we again tested the system to determine whether the stated weaknesses had been addressed. We found that similar problems continue to exist. Once again, these problems can be traced to ineffective requirements management and testing processes. Properly defined requirements are a key element in systems that meet their cost, schedule, and performance goals since the requirements define the (1) functionality that is expected to be provided by the system and (2) quantitative measures by which to determine through testing whether that functionality is operating as expected. We briefed PMO-DTS officials on the results of our tests, and in May 2006 the officials agreed that our continued concerns about the proper display of flight information were valid. PMO-DTS officials stated that the DTS technology refresh, which was to be completed in September 2006, should address some of our concerns. While these actions are a positive step forward, they do not address the fundamental problem that DTS’s requirements are still ambiguous and conflicting—a primary cause of the previous problems. Until a viable requirements management process is developed and effectively implemented, the department (1) cannot develop an effective testing process and (2) will not have reasonable assurance that the project risks have been reduced to acceptable levels. In our earlier testimony and report, we noted that DOD did not have reasonable assurance that the flights displayed met the stated DOD requirements. Although DOD stated in each case that our concerns had been addressed, subsequent tests found that the problems had not been corrected. Requirements represent the blueprint that system developers and program managers use to design, develop, and acquire a system. Requirements should be consistent with one another, verifiable, and directly traceable to higher-level business or functional requirements. It is critical that requirements be carefully defined and that they flow directly from the organization’s concept of operations (how the organization’s day-to-day operations are or will be carried out to meet mission needs). Improperly defined or incomplete requirements have been commonly identified as a cause of system failure and of systems that do not meet their cost, schedule, or performance goals. Requirements represent the foundation on which the system should be developed and implemented. As we have noted in previous reports, because requirements provide the foundation for system testing, significant defects in the requirements management process preclude an entity from implementing a disciplined testing process. That is, requirements must be complete, clear, and well documented to design and implement an effective testing program. Absent this, an organization is taking a significant risk that its testing efforts will not detect significant defects until after the system is placed into production. Our February 2006 analysis of selected flight information disclosed that DOD still did not have reasonable assurance that DTS displayed flights in accordance with its stated requirements. We analyzed 15 U.S. General Services Administration (GSA) city pairs, which should have translated into 246 GSA city pair flights for the departure times selected. However, we identified 87 flights that did not appear on one or more of the required listings based on the DTS requirements.
For instance, our analysis identified 44 flights appearing on other DTS listings or airline sites that did not appear on the 9:00 a.m. DTS listing even though those flights (1) met the 12-hour flight window and (2) were considered GSA city pair flights—two of the key DTS requirements the system was expected to meet. After we briefed PMO officials on the results of our analysis in February 2006, the PMO-DTS employed the services of a contractor to review DTS to determine the specific cause of the problems and recommend solutions. In a March 2006 briefing, the PMO-DTS acknowledged the existence of the problems and identified two primary causes. First, part of the problem was attributed to the methodology used by DTS to obtain flights from the Global Distribution System (GDS). The PMO-DTS stated that DTS was programmed to obtain a “limited” amount of data from GDS in order to reduce the costs associated with accessing GDS. This helps to explain why the flight queries we reviewed did not produce the expected results. To resolve this particular problem, the PMO-DTS proposed increasing the amount of data obtained from GDS. Second, the PMO-DTS acknowledged that the system testing performed by the contractor responsible for developing and operating DTS was inadequate and, therefore, there was no assurance that DTS would provide the data in conformance with the stated requirements. This weakness was not new, but rather reconfirmed the concerns discussed in our September 2005 testimony and January 2006 report related to the testing of DTS. While DOD’s planned actions, including a recent technology upgrade, should address several of the specific weaknesses we identified related to flight displays, they fall short of addressing the fundamental problem that caused those weaknesses—inadequate requirements management. DTS’s requirements continue to be ambiguous. For example, DOD has retained a requirement to display 25 flights for each inquiry. However, it has not determined (1) whether the rationale for that requirement is valid and (2) under what conditions flights that are not part of the GSA city pair program should be displayed. For example, we found that several DTS flights displayed to the user “overlap” other flights. Properly validating the requirements would allow DOD to obtain reasonable assurance that its requirements properly define the functionality needed and the business rules necessary to properly implement that functionality. As previously noted, requirements that are unambiguous and consistent are fundamental to providing reasonable assurance that a system will provide the desired functionality. Until DOD improves DTS requirements management practices, it will not have this assurance.
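To make the flight-display requirement concrete, the sketch below shows one way such a requirements check could be expressed in code. This is our illustration, not DOD's or the PMO's actual test procedure; the flight records and listing contents are hypothetical, and only the two requirements named above—the 12-hour flight window and GSA city pair membership—are checked.

```python
# Hypothetical sketch of a flight-display requirements check (not DOD's
# actual test procedure). A flight should appear on a DTS listing if it
# departs within 12 hours of the queried time and is a GSA city pair flight.
from datetime import datetime, timedelta

def expected_flights(all_flights, query_time, gsa_city_pair_ids, window_hours=12):
    """Return flights that the stated requirements say should be displayed."""
    window = timedelta(hours=window_hours)
    return [
        f for f in all_flights
        if abs(f["departure"] - query_time) <= window
        and f["id"] in gsa_city_pair_ids
    ]

def missing_flights(displayed_ids, all_flights, query_time, gsa_city_pair_ids):
    """Flag required flights that a DTS listing failed to display."""
    required = expected_flights(all_flights, query_time, gsa_city_pair_ids)
    return [f for f in required if f["id"] not in displayed_ids]

# Hypothetical data: two GSA city pair flights, one displayed and one not.
flights = [
    {"id": "CP-101", "departure": datetime(2006, 2, 1, 11, 30)},
    {"id": "CP-102", "departure": datetime(2006, 2, 1, 15, 0)},
]
gap = missing_flights(
    displayed_ids={"CP-101"},
    all_flights=flights,
    query_time=datetime(2006, 2, 1, 9, 0),
    gsa_city_pair_ids={"CP-101", "CP-102"},
)
print([f["id"] for f in gap])    # ['CP-102'] -- met both requirements
                                 # but was not displayed
```

A disciplined testing process would run checks of this kind against unambiguous, documented requirements; as discussed above, ambiguous requirements make it impossible to say whether a missing flight is a defect or intended behavior.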
Our recent report included four recommendations to improve the department’s management and oversight of DTS. We recommended that DOD (1) evaluate the cost effectiveness of the Navy continuing with the CTO management fee structure versus adopting the revised CTO fee structure, once the new contracts have been awarded, (2) develop a process by which the military services develop and use quantitative data from DTS and their individual legacy systems to clearly identify the total universe of DTS-eligible transactions on a monthly basis, (3) require the PMO-DTS to provide periodic reports on the utilization of DTS, once accurate data are available, and (4) resolve inconsistencies in DTS requirements by properly defining the functionality needed and business rules necessary to properly implement the needed functionality. DOD concurred with three and partially concurred with one of the recommendations. In regard to the recommendations with which the department concurred, it briefly outlined the actions it planned to take in addressing two of the three recommendations. For example, the department noted the difficulties in obtaining accurate utilization data from the existing legacy systems, but stated that the Office of the Under Secretary of Defense (Personnel and Readiness) and BTA will evaluate methods for reporting actual DTS utilization. Additionally, DOD noted that the Defense Travel Management Office developed and implemented a requirements change management process on May 1, 2006. In commenting on the report, the department stated that this process is intended to define requirements and track the entire life cycle of the requirements development process. While we fully support the department’s efforts to improve its management oversight of DTS’s requirements, we continue to believe that the department needs to have in place a process that provides DOD reasonable assurance that (1) requirements are properly documented and (2) requirements are adequately tested as recommended in our January 2006 report. This process should apply to all existing requirements as well as any new requirements. As discussed in this report, we reviewed in May 2006 some of the requirements that were to have followed the new requirements management process and found problems similar to those noted in our January 2006 report. Although we did not specifically review the new process, if it does not include an evaluation of existing requirements, the department may continue to experience problems similar to those we previously identified. DOD partially concurred with our recommendation to evaluate the cost effectiveness of the Navy continuing with the CTO management fee structure. However, DOD’s response indicated that the Defense Travel Management Office is currently procuring commercial travel services for DOD worldwide in a manner that will ensure evaluation of cost effectiveness for all services. If DOD proceeds with the actions outlined in its comments, it will meet the intent of our recommendation. Effective implementation of these recommendations as well as those included in our January 2006 report will go a long way toward improving DTS functionality and increasing utilization. Furthermore, the shift of DTS to the BTA, which makes DTS an enterprisewide endeavor, should help in making DTS the standard integrated, end-to-end travel system for business travel. Management oversight is essential for this to become a reality.
As I stated previously, in written comments on a draft of our report, the Under Secretary of Defense (Personnel and Readiness) strongly objected to our finding that the estimated personnel savings included in the economic analysis are unrealistic. Because none of the military services could validate an actual reduction in the number of personnel as a result of DTS implementation, and DOD’s comments did not include any additional support or documentation for its position, we continue to believe that the estimated annual personnel savings of $54.1 million are unrealistic. Although the department’s criteria do not require that a new economic analysis be prepared, the fiscal year 2005 defense authorization act requires the periodic review, but not less than annually, of every defense business system investment. If effectively implemented, this annual review process provides an excellent opportunity for DOD management to assess whether DTS is meeting its planned cost, schedule, and functionality goals. Going forward, such a review could serve as a useful management tool in making funding and other management decisions related to DTS. In conclusion, overhauling the department’s antiquated travel management practices and systems has been a daunting challenge for DOD. While it was widely recognized that this was a task that needed to be accomplished and savings could result, the underlying assumptions in support of those savings are not based on reliable data, and therefore it is questionable whether the anticipated savings will materialize. Even though the overall savings are questionable, the successful implementation of DTS is critical to reducing the number of stovepiped, duplicative travel systems throughout the department. We have reported on numerous occasions that reducing the number of business systems within DOD can translate into savings that can be used for other mission needs. As noted above, management oversight will be an important factor in DTS achieving its intended goals. Equally important, however, will be the department’s ability to resolve the long-standing difficulties that DTS has encountered with its requirements management and system testing. Until these issues are resolved, more complete utilization of DTS will be problematic. Mr. Chairman, this concludes my prepared statement. We would be happy to answer any questions that you or other members of the Subcommittee may have at this time. For further information about this testimony, please contact McCoy Williams at (202) 512-9095 or [email protected], or Keith A. Rhodes at (202) 512-6412 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. In addition to the above contacts, the following individuals made key contributions to this testimony: Darby Smith, Assistant Director; J. Christopher Martin, Senior-Level Technologist; F. Abe Dymond, Assistant General Counsel; Beatrice Alff; Harold Brumm, Jr.; Francine DelVecchio; and Tarunkant Mithani. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | In 1995, the Department of Defense (DOD) began an effort to implement a standard departmentwide travel system.
The Defense Travel System (DTS) is envisioned as DOD's standard end-to-end travel system. This testimony is based on GAO's related September 2006 report. Today's testimony highlights GAO's key findings with regard to the following objectives: (1) Were the two key assumptions made in the September 2003 economic analysis reasonable? (2) Was DOD taking action to ensure full utilization of DTS and gathering the data needed to monitor DTS utilization? and (3) Has DOD resolved several functional problems associated with weak system requirements and testing? To address these objectives, GAO (1) reviewed the September 2003 DTS economic analysis, (2) analyzed DTS utilization data, and (3) analyzed DTS flight information. GAO's analysis of the September 2003 DTS economic analysis found that the two key assumptions used to estimate annual net savings were not based on reliable information. Two cost components represent the majority of the over $56 million in estimated net savings--personnel savings and reduced commercial travel office (CTO) fees. In regard to the personnel savings, GAO's analysis found that the $24.2 million of personnel savings related to the Air Force and the Navy were not supported. Air Force and Navy DTS program officials stated that they did not anticipate a reduction in the number of personnel, but rather the shifting of staff from the travel function to other functions. The Naval Cost Analysis Division stated that the Navy will not realize any tangible personnel cost savings from the implementation of DTS. In regard to the CTO fees, the economic analysis assumed that 70 percent of all DTS airline tickets would require either no intervention or minimal intervention from the CTOs, resulting in an estimated annual net savings of $31 million. However, the sole support provided by the DTS program office was an article in a trade industry publication. The article was not based on information related to DTS, but rather on the experience of one private sector company. Furthermore, the economic analysis was not prepared in accordance with guidance prescribed by the Office of Management and Budget and DOD. DOD guidance stated that the life-cycle cost estimates should be verified by an independent party, but this did not occur. The economic analysis did not undertake an assessment of the effects of the uncertainty inherent in the estimates of benefits and costs. Because an economic analysis uses estimates and assumptions, it is critical that the imprecision in both the underlying data and assumptions be understood. Such an assessment is referred to as a sensitivity analysis. DOD acknowledged that DTS is not being used to the fullest extent possible, but lacks comprehensive data to effectively monitor its utilization. DOD's utilization data are based on a model that was developed in calendar year 2003. However, the model has not been completely updated to reflect actual DTS usage. The lack of accurate utilization data hinders management's ability to monitor progress toward the DOD vision of DTS as the standard travel system. GAO also found that the military services have initiated actions that are aimed at increasing the utilization of DTS. Finally, GAO found that DTS still has not addressed the underlying problems associated with weak requirements management and system testing. While DOD has acted to address concerns GAO previously raised, GAO found that DTS's requirements are still ambiguous and conflicting.
For example, the requirement that DTS display up to 25 flights for each inquiry is questionable because it is unclear whether the rationale for it remains valid. Until DOD improves DTS's requirements management practices, the department will not have reasonable assurance that DTS can provide the intended functionality. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
In 2002, the Secretary of Defense created MDA to develop an integrated system that would have the ability to intercept incoming missiles in all phases of their flight. In developing BMDS, MDA is using an incremental approach to field militarily useful capabilities as they become available. MDA plans to field capabilities in 2-year blocks. The configuration of a given block is intended to build on the work completed in previous blocks. For example, Block 2006 is intended to build on capabilities developed in Block 2004 and is scheduled to field capabilities during calendar years 2006–07. The integrated BMDS comprises various elements, three of which are intended to intercept threat missiles in their boost or ascent phase. Table 1 below describes each of these elements and shows the MDA-projected dates for key decision points, initial capability, and tested operational capability. During the past year, Congress requested additional information and analyses on the boost and ascent phase elements from DOD. Specifically, House Report 109-119 on the Department of Defense Appropriations Bill for Fiscal Year 2006 directed the Secretary of Defense to conduct a study to review the early engagement of ballistic missiles to include boost and ascent phase intercepts and submit the report to the congressional defense committees. The report was to include, but not be limited to, an assessment of the operational capabilities of systems against ballistic missiles launched from North Korea or a location in the Middle East against the continental United States, Alaska, or Hawaii; an assessment of the quantity of operational assets required for deployment periods of 7 days, 30 days, 90 days, and 1 year; basing options; and an assessment of life-cycle costs to include research and development efforts, procurement, deployment, operating, and infrastructure costs. In addition, the National Defense Authorization Act for Fiscal Year 2006 required the Secretary of Defense to assess missile defense programs designed to provide capability against threat ballistic missiles in the boost/ascent phase of flight. The purpose of this assessment was to compare and contrast capabilities of those programs (if operational) to defeat ballistic missiles launched from North Korea or a location in the Middle East against the continental United States, Alaska, or Hawaii; and asset requirements and costs for those programs to become operational with the capabilities referred to above. MDA, on behalf of DOD, prepared one report to satisfy both of the above requirements and sent the report to all four defense committees on March 30, 2006. The report included technical, operational, and cost information for each of the three boost and ascent phase BMDS elements. The remainder of this report discusses our assessment of the MDA report and how DOD can build on this information to support future key decision points. MDA’s March 2006 report to Congress included some useful technical and operational information on boost and ascent phase capabilities. However, the information in the report has several limitations—such as not including stakeholders in the analysis or explaining how assumptions affect results. Moving forward, DOD can enhance its ability to make informed decisions at future key decision points by including stakeholders DOD-wide in conducting analyses to provide complete technical and operational information.
Otherwise, senior DOD and congressional decision makers may be limited in their ability to effectively assess the technical progress and operational effects of proceeding with one or more boost and ascent phase elements. The March 2006 report contained some useful technical and operational information for Congress. For example, the report included a detailed description of the three boost and ascent phase elements, which could be useful for those unfamiliar with these elements. Additionally, the report listed upcoming knowledge points where DOD will review the progress MDA has made toward developing each of the boost and ascent phase elements. Further, the report discussed geographic areas where boost and ascent phase elements could intercept missiles shortly after launch based on desired technical capabilities. Also, MDA used a model to assess the desired capabilities of each BMDS element for the March 2006 report to Congress. In addition, the modeling environment had been used for several past BMDS analyses, and the results were benchmarked against other models. Finally, MDA performed a sensitivity analysis that compared how the results in the modeling changed when different assumptions for targets’ propellants, ascent times, hardness levels, and burn times were used. To provide context, the report explained that the boost and ascent phase elements are in the early stages of development and that the operational concepts are not yet mature. The information in the March 2006 report has several limitations because the analyses did not involve stakeholders and did not clearly explain modeling assumptions and their effects on results as identified by relevant research standards. The relevant research standards and our prior work have shown that coordination with stakeholders from study design through reporting, and clearly explained assumptions and their effects on results, can enable DOD officials to make fully informed program decisions. As a result, the March 2006 report presents an incomplete picture of technical capabilities, such as development challenges to be overcome in order to achieve desired performance, and it does not clearly explain the effects of operational assumptions, such as basing locations, asset quantities, and base support requirements. As a step in the right direction, MDA stated that it plans to develop criteria to assess the boost/ascent phase elements at major decision points in a process involving the combatant commands. Although MDA officials told us that they consult stakeholders in a variety of forums other than the March 2006 report, they did not clearly state whether or how the services or other DOD stakeholders would be involved in developing criteria for key decision points or the extent to which their analyses would include information on technical and operational issues. MDA’s analyses did not involve soliciting or using information from key DOD stakeholders such as the services, combatant commands, and Joint Staff from study design through reporting. For example, officials from the Office of the Secretary of Defense for Program Analysis and Evaluation and the Defense Intelligence Agency stated that there were areas where additional information would have improved the fidelity of the results. First, the officials stated that there is uncertainty that the boost and ascent phase elements would achieve their desired capabilities within the timeframe stated in the report.
Second, officials from both organizations stated that the report could have been enhanced by presenting different views of the type and capability of threats the United States could face and when these threats could realistically be expected to be used by adversaries. Third, officials from the Office of the Secretary of Defense for Program Analysis and Evaluation said that the MDA report did not distinguish between countermeasures that could be used in the near term and countermeasures that may be more difficult to implement. MDA officials said that they worked with the Office of the Secretary of Defense for Program Analysis and Evaluation in conducting analyses before they began work on the March 2006 report. MDA also stated that it discussed the draft March 2006 report with Office of the Secretary of Defense for Program Analysis and Evaluation officials and included some of their comments in the report’s final version. However, without communication with stakeholders from study design through reporting, MDA may not have had all potential inputs that could have affected how the type, capability, and likelihood of countermeasures to the boost and ascent phase elements were presented in its report. Additionally, MDA did not solicit information from the services, combatant commands, or Joint Staff regarding operational issues that could have affected information about basing and the quantities of elements that could be required to support operations. Although the elements have to be located in close proximity to their intended targets, and the report discusses placing the elements at specific forward overseas locations, the report does not include a basing analysis explaining what would need to be done to support operations at these locations. Specifically, the report did not include any discussion of the infrastructure or security/force protection that would be needed for the BMDS elements. Although the report mentions some support requirements—such as the Airborne Laser’s need for unique maintenance and support equipment and skilled personnel to maintain the laser—the report did not fully explain how these support requirements would be determined, who would provide or fund them, or the operational effect if this support is not provided. For instance, without an adequate forward operating location, the boost and ascent phase elements would have to operate from much farther away, which would significantly limit the time an element is in close proximity to potential targets. Developing such information with the services, Joint Staff, and combatant commands could provide a much more complete explanation of operational issues and challenges. The services typically perform site analyses to ascertain what support is needed for a new weapon system at either a U.S. or overseas location. This comprehensive analysis examines a range of issues, from fire protection to security, infrastructure, roads, and airfields. In addition, U.S. Strategic Command and service officials told us that this type of support must be planned for in advance when adding a new system to any base, either in the United States or a forward location. MDA also did not involve stakeholders in assessing the quantities of each element for deployment periods of 7 days, 30 days, 90 days, and 1 year.
The report stated that limited data exist at this time for a full assessment of this issue, and service, Joint Staff, and MDA officials acknowledged that the quantities of each element used in the report are MDA-assumed quantities. Service, Joint Staff, and U.S. Strategic Command officials stated that they have not completed analyses to assess quantities the warfighters may require. We understand that operational concepts will continue to evolve and could affect required quantities. However, stakeholders such as the services, Joint Staff, or combatant commands could have assisted MDA in assessing potential quantities required for various deployment periods. In addition, MDA did not solicit information from the services, Joint Staff, or combatant commands to determine whether those organizations were conducting force structure analyses for the boost and ascent phase elements. We learned that the Navy had done a preliminary analysis in July 2005 and that the Joint Staff has begun a capabilities mix study; both include, in part, an analysis of quantities. Thus, in preparing for future decision points, MDA’s analysis could be strengthened by including stakeholders to leverage other analyses. For example, MDA could have presented a range of scenarios to show how the quantities required to intercept adversary missiles could vary depending upon the number of sites covered and whether continuous, near-continuous, or sporadic coverage is provided. The March 2006 report to Congress did not clearly explain the assumptions used in the modeling of the BMDS elements’ capabilities and did not explain the effects those assumptions may have had on the results. First, the model inputs for the technical analysis assumed desired rather than demonstrated performance, and the report does not fully explain challenges in maturing technologies or how these performance predictions could change if the technologies are not developed as desired or assumed. For example, although the model MDA used is capable of showing different results based on different performance assumptions, the report did not explain how the number of successful intercepts may change if less than 100 percent of the desired technical capabilities are developed as envisioned. Thus, the results represent the best expected outcome. Second, the report does not explain the current status of technical development or the challenges in maturing each element’s critical technologies as desired or assumed in the report. DOD best practices define Technology Readiness Levels on a scale of 1–9, and state which level should be reached to progress past specific program decision points. However, the March 2006 report does not explain the current Technology Readiness Level for any of the boost and ascent phase elements’ critical technologies or the extent to which the technology has to mature to attain the performance assumed in the report. For example, the report does not explain that some of the technologies for the Airborne Laser have to improve by between 60 and 80 percent, and the report does not discuss any of the challenges MDA faces in doing so. The March 2006 report to Congress provides cost estimates for each of the boost and ascent phase capabilities; however, the cost estimates in the report have several limitations that raise questions about their usefulness. We compared the report’s cost estimates with various DOD and GAO sources that describe key principles for developing accurate and reliable life-cycle cost estimates.
Based on our analysis, we found that MDA did not include all cost categories, did not calculate costs based on warfighter quantities, and did not conduct a sensitivity analysis to assess the effects of cost drivers. Moreover, although MDA’s report acknowledges uncertainty in the cost estimates, the report does not fully disclose the limitations of the cost estimates. DOD can significantly improve the completeness of and confidence in cost estimates for boost and ascent phase capabilities as it prepares for future investment and budget decisions. For example, although DOD did not have its cost estimate for its March 2006 report independently verified because doing so would have taken several months, MDA officials agreed that independently verified cost estimates will be critical to support major decision points for boost and ascent phase capabilities. In addition, as these capabilities mature, MDA officials agreed that showing cost estimates over time and conducting uncertainty analyses will be needed to support key program and investment decisions. The cost estimates provided in the MDA report included some development, production, and operations/support costs for each boost and ascent phase element but were not fully developed or verified according to key principles for developing life-cycle cost estimates. Life-cycle costs are the total cost to the government for a program over its full life, including the costs of research and development, investment, operating and support, and disposal. Based on our comparison of the life-cycle cost estimates in the report with key principles for developing life-cycle cost estimates, we found that the estimates were incomplete in several ways. First, the cost estimates did not include all cost categories, such as costs to establish and sustain operations at U.S. bases. Instead, MDA assumed that the elements would be placed at existing bases with sufficient base support, infrastructure, and security; however, some of these costs, such as infrastructure, could be significant. For example, an MDA planning document cited about $87 million for infrastructure costs to support a ground-based BMDS element (Terminal High Altitude Area Defense). Army officials confirmed that training facilities, missile storage buildings, and a motor pool were built at a U.S. base specifically to support this element, and it is likely that similar infrastructure would be needed to support the land-based Kinetic Energy Interceptor. Additionally, MDA’s cost estimates did not include costs to establish and sustain operations at forward overseas locations, even though the report states that the elements will have to be located in close proximity to their targets, and the operational concepts for Kinetic Energy Interceptor and Airborne Laser, although in early development, state that these elements will be operated from forward locations. Again, these are important factors to consider—the Airborne Laser operational concept and the MDA report acknowledge that unique support will be required to support operations at any forward location for the Airborne Laser, such as chemical facilities, unique ground support equipment, and maintenance. Service, Joint Staff, and U.S. Strategic Command officials also said that these elements would have to be located forward and could be used as a strategic deterrent in peacetime.
Second, the production and operating cost estimates were not based on warfighter quantities, that is, quantities of each element that the services and combatant commands may require to provide needed coverage of potential targets. MDA assumed a certain quantity of each element. For example, MDA officials told us that they assumed 96 Standard Missile-3 block 2A missiles because, at the time MDA prepared the report, they planned to buy 96 block 1A missiles developed to intercept short-range ballistic missiles. However, MDA did not solicit input from the services, Joint Staff, or combatant commands on whether they had done or begun analyses to determine element quantities. Third, MDA did not conduct a sensitivity analysis to identify the effects of cost drivers. A sensitivity analysis is a way to identify risk by demonstrating how the cost estimates would change in response to different values for specific cost drivers. Therefore, a sensitivity analysis should be performed when developing cost estimates, and the results should be documented and reported to decision makers. This means, for example, that MDA could have computed costs with and without significant categories of costs, such as forward bases, to identify the effect that adding forward bases would have on operating costs. The House Armed Services Committee report on the National Defense Authorization Bill for Fiscal Year 2006 recognized that operational capabilities and costs must be taken into account when making decisions on future funding support. Finally, MDA did not estimate costs over time—a process known as time phasing—which can assist decision makers with budgetary decisions. The MDA report showed an annual cost estimate but did not state for how many years the development, production, and operating costs may be incurred. Although MDA officials stated they did not prepare time-phased cost estimates in order to prepare the report to Congress in a timely manner, they agreed that showing cost estimates over time would be important information to support investment decisions at key decision points. Key principles for developing life-cycle cost estimates also include two steps for assessing the confidence of cost estimates. However, MDA did not take these steps to assess the confidence of the estimates reported in March 2006. First, the Missile Defense Agency did not conduct a risk analysis to assess the level of uncertainty for most of the cost estimates in the MDA report. Risk and uncertainty refer to the fact that, because a cost estimate is a prediction of the future, it is likely that the estimated cost will differ from the actual cost. It is useful to perform a risk analysis to quantify the degree of uncertainty in the estimates. By using standard computer simulation techniques, an overall level of uncertainty can be developed for cost estimates. In contrast, MDA officials told us that they could only provide a judgmental confidence level for most of the cost estimates. Second, MDA did not have the cost estimates in the report verified by an independent organization such as DOD’s Cost Analysis Improvement Group because doing so would have taken several months. However, MDA officials agreed that independent verification of cost estimates would be important information to support investment decisions at key decision points. According to the key principles that we have identified, all life-cycle cost estimates should be independently verified to assure accuracy, completeness, and reliability.
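The two confidence-building steps just described—sensitivity analysis of cost drivers and simulation-based risk analysis—can be illustrated briefly. The sketch below uses invented numbers and is not MDA's cost model; it shows how an estimate changes when a single cost driver (here, forward basing) is included or excluded, and how a simple Monte Carlo simulation yields a range rather than a point estimate.

```python
# Illustrative sketch (invented numbers, not MDA's cost model) of the two
# steps discussed above: a sensitivity analysis on a cost driver and a
# Monte Carlo uncertainty analysis on a cost estimate.
import random

# Point estimates for annual operating cost categories ($ millions).
base_operations = 120.0
forward_basing = 45.0    # the cost driver being examined

# Sensitivity analysis: compute the estimate with and without the driver.
with_forward = base_operations + forward_basing
without_forward = base_operations
print(f"Operating cost with forward bases:    ${with_forward:.0f}M")
print(f"Operating cost without forward bases: ${without_forward:.0f}M")
print(f"Forward basing drives {forward_basing / with_forward:.0%} of the estimate")

# Monte Carlo risk analysis: treat each category as uncertain and simulate.
random.seed(1)
trials = sorted(
    random.gauss(base_operations, 15) + random.gauss(forward_basing, 10)
    for _ in range(10_000)
)
p10, p50, p90 = (trials[int(len(trials) * p)] for p in (0.10, 0.50, 0.90))
print(f"Simulated cost: 10th pct ${p10:.0f}M, median ${p50:.0f}M, 90th pct ${p90:.0f}M")
```

Results of this kind are what allow an estimate to carry a stated, rather than judgmental, confidence level.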
MDA has recognized the value in independently developed cost estimates. In 2003, MDA and the Cost Analysis Improvement Group developed a memorandum of understanding that said, in part, that the Cost Analysis Improvement Group would develop independent cost estimates for the approved BMDS and its elements as appropriate during development in anticipation of transition to production, but MDA officials said that little work was completed under this agreement, which has expired. Developing complete cost estimates in which decision makers can have confidence is important since life-cycle cost estimates usually form the basis for investment decisions and annual budget requests. Specifically, life-cycle cost estimates that include all cost categories, show costs over time, include warfighter quantities, include an assessment of cost drivers, and are independently verified are important because accurate life-cycle cost estimates can be used in formulating funding requests contained in the President’s Budget and DOD’s future funding plan, the Future Years Defense Program (FYDP) submitted to Congress. Therefore, there is a need for DOD to provide transparent budget and cost planning information to Congress. In May 2006, GAO reported that the FYDP, a major source of budget and future funding plans, does not provide complete and transparent data on ballistic missile defense operational costs because the FYDP’s structure does not provide a way to identify and aggregate these costs. It is important that Congress have confidence in boost and ascent phase estimates because Congress has indicated that it is concerned with the affordability of pursuing both the Airborne Laser and Kinetic Energy Interceptor programs in parallel through 2008. As we reported in 2003, DOD assumes increased investment risk by not having information available for decision makers at the right time, and the level of anticipated spending magnifies this risk. Without such information, senior DOD and congressional decision makers may be limited in their ability to assess the relative cost of the elements if all cost categories are not included and cost drivers are not identified. Considering competing demands, this could also limit Congress’s ability to consider investment decisions or evaluate whether continued expenditures are warranted. MDA officials stated that, in developing the cost estimates for the March 2006 report, they decided not to follow some of the key principles for developing life-cycle cost estimates, such as time phasing and independent verification of the cost estimates, in order to complete the report in a timely manner. However, the officials also agreed that these key principles are important in developing complete, accurate, and reliable life-cycle cost estimates for supporting investment decisions at key decision points. Therefore, in the future, when preparing cost estimates to be used in support of key decision points, MDA could provide decision makers with more complete, accurate, and reliable cost estimates by better adhering to key principles for developing life-cycle cost estimates. Our review of MDA’s March 2006 report on boost and ascent phase elements identified a number of limitations but helps to illuminate the kind of information that DOD and congressional decision makers will need following upcoming tests for boost and ascent phase elements. We recognize that the March 2006 report was prepared in response to congressional direction rather than to support program decisions.
We also recognize that, at the time of MDA’s report, these elements were early in their development and information was incomplete and changing. Thus, the focus of our analysis was to identify additional information that could enhance future program and investment decisions. In particular, the House Armed Services Committee has raised questions about the affordability of pursuing both the Kinetic Energy Interceptor and the Airborne Laser in parallel through the projected knowledge point demonstrations, which are now scheduled for 2008 and 2009, respectively. It is important that these decisions be both well-informed and transparent because of the long-term funding consequences. DOD and congressional decision makers’ ability to assess which elements can be fully developed, integrated, and operated relative to the others will be enhanced if they have the benefit of information based on more rigorous analysis than that contained in MDA’s March 2006 report. Looking forward, as DOD strengthens its analyses to support future key decisions, DOD and congressional decision makers will be able to use more complete information to assess force structure, basing, support, and infrastructure requirements, as well as technical maturity, budget requests, and FYDP spending plans, in deciding whether or not to continue developing one, two, or all three boost and ascent phase elements and in what quantities. To provide decision makers with information that enables them to clearly understand the technical progress and operational implications of each boost and ascent phase element and make fully informed, fact-based program decisions at future key decision points, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to take the following actions to support key decision points for the BMDS boost and ascent phase elements: Include all DOD stakeholders (including services, combatant commands, Joint Staff) in developing and analyzing operational issues regarding what is needed to support operations at U.S. bases and potential forward locations, including basing assessments, force structure and quantity requirements, infrastructure, security/force protection, maintenance, and personnel. Provide specific information on the technical progress of each element. Specifically, the analysis should explain current technical maturity versus desired technical maturity and capabilities of all major components and subsystems, use reasonable model inputs on element performance, and provide a clear explanation of assumptions and their effect on results. Use the results of these analyses at each key decision point. To provide decision makers with complete and reliable data on the costs of each boost/ascent phase BMDS element to enhance investment and budget decisions, we recommend that the Secretary of Defense take the following actions: Direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to require MDA to prepare and—to support key decision points—periodically update a full life-cycle cost estimate for each boost/ascent phase element, in accordance with key principles for developing accurate and reliable life-cycle cost estimates, that includes all operational costs, including costs to establish and sustain operations at U.S. bases and forward locations, and that is based on warfighter quantities, includes sensitivity analyses, and reflects time phasing.
Direct an independent group, such as the Cost Analysis Improvement Group, to prepare an independent life-cycle cost estimate for each capability at each key decision point. Direct MDA and the services to report independently verified life-cycle cost estimates along with budget requests and FYDP funding plans for each boost/ascent phase element. In written comments on a draft of this report, DOD agreed with our recommendations regarding the need for analysis of technical progress and operational issues to support key boost and ascent phase element decision points. DOD also agreed that an independent life-cycle cost estimate may be needed to inform some key decision points but not others. However, DOD did not agree to prepare and periodically update full life-cycle cost estimates for each boost and ascent phase element to support key decision points, and report independently verified life-cycle cost estimates with budget requests and FYDP funding plans. As discussed below, we continue to believe our recommendations have merit and that DOD should take the additional actions we have recommended to provide a rigorous analytical basis for future decisions, enhance the transparency of its analyses, and increase accountability for key decisions that could involve billions of dollars. The department’s comments are reprinted in their entirety in appendix II. DOD agreed with our recommendations that all DOD stakeholders be included in developing and analyzing operational issues, that specific information on technical progress be provided to explain current versus desired capabilities, and that the results of both analyses be used at key decision points. DOD stated in its comments that officials from MDA, the military departments, the combatant commanders, and other organizations are collaborating to develop an operational BMDS. Moreover, the annual BMDS Transition and Transfer Plan is coordinated with the service secretaries and other stakeholders and serves as a repository for plans, agreements, responsibilities, authorities, and issues. DOD also stated that key program decisions are and will continue to be informed by detailed technical analysis, including assessment of element technical maturity. However, DOD did not clearly explain how future decision making will be enhanced or how analyses of operational issues will be conducted if, as in the case of the Kinetic Energy Interceptor, DOD has not assigned a service responsibility for operating the element once it is developed. We continue to believe that DOD and congressional decision makers will need more complete information on support requirements at upcoming decision points as well as a clear comparison of current versus desired technical capabilities in deciding whether or not to continue developing one, two, or all three boost and ascent phase elements. Regarding our recommendations to improve cost estimates used to support key investment decisions, DOD partially concurred that independent life-cycle cost estimates may be required to inform some key decision points but stated that others may not require them.
DOD said that it continuously assesses all aspects of its development efforts and will direct an independent evaluation of life-cycle costs for boost and ascent phase elements if circumstances warrant or if MDA’s Director declares an element mature enough to provide a militarily useful capability. However, if, as DOD’s comments suggest, such costs are not assessed until circumstances warrant or MDA’s Director declares an element mature enough to provide a militarily useful capability, these costs may not be available early enough to help shape important program and investment decisions and to consider trade-offs among elements. Moreover, DOD’s Operating and Support Cost Estimating Guide, published by the Cost Analysis Improvement Group, states that when the Cost Analysis Improvement Group assists the Office of the Secretary of Defense components in their review of program costs, one purpose is to determine whether a new system will be affordable to operate and support. Therefore, such analysis must be done early enough to provide cost data that will be considered in making a decision to field, produce, or transition an element. We continue to believe our recommendation has merit because the development of life-cycle cost estimates that include potential operations and support costs would improve the information available to decision makers and increase accountability for key decisions that could involve billions of dollars at a time when DOD will likely face competing demands for resources. Finally, DOD did not agree to report independently verified life-cycle cost estimates along with budget requests and FYDP funding plans for each boost and ascent phase element. DOD stated that operations and support segments of the budget are organized by functional area rather than by weapon system and are dependent on operations and support concepts of the employing military department. DOD further stated that development of total life-cycle cost estimates for operational BMDS capabilities requires agreement between MDA and the lead military department on roles and responsibilities for fielded BMDS capabilities that transcend the annual transition planning cycle but serve as a basis for budget submittals. We recently reported that MDA enjoys flexibility in developing BMDS, but this flexibility comes at the cost of transparency and accountability. One purpose of cost estimates is to support the budget process by providing estimates of the funding required to efficiently execute a program. Also, independent verification of cost estimates allows decision makers to gauge whether the program is executable. Thus, cost estimating is the basis for establishing and defending budgets and is at the heart of the affordability issue. This principle is stated in DOD procedures, which specify that when cost results are presented to the Office of the Secretary of Defense Cost Analysis Improvement Group, the program office-developed life-cycle cost estimate should be compared with the FYDP and differences explained. Therefore, we continue to believe that our recommendation has merit because, without an independent cost estimate that can be compared to budget requests and FYDP funding plans, congressional decision makers may not have all the necessary information to assess the full extent of future resource requirements if the boost and ascent phase capabilities go forward, or assess the completeness of the cost estimates that are in the budget request and FYDP funding plans.
We are sending copies of this report to the Secretary of Defense; the Commander, U.S. Strategic Command; the Director, Missile Defense Agency; the Chairman, Joint Chiefs of Staff; and the Chiefs of Staff of the Army, Navy, and Air Force. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please call Janet St. Laurent at (202) 512-4402 or Paul Francis at (202) 512-2811. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members who made key contributions to this report are listed in appendix III. During this review, we focused on assessing the analytical approach the Missile Defense Agency (MDA) used to develop its March 2006 report to Congress, as well as the methodology for developing the cost estimates for each of the three Ballistic Missile Defense System (BMDS) boost and ascent phase elements. To assess the extent to which the Department of Defense (DOD) is developing technical and operational information useful for oversight and that will support decision making at key points, we compared the analytical approach DOD used to develop its March 2006 report with generally accepted research standards that are relevant for defense studies such as this, that define a sound and complete study, and that cover all phases of a study—design, execution, and presentation of results. The following were our sources for these standards: GAO, Government Auditing Standards: 2003 Revision, GAO-03-673G (Washington, D.C.: June 2003); GAO, Designing Evaluations, GAO/PEMD-10.1.4 (Washington, D.C.); GAO, Dimensions of Quality, GAO/QTM-94-1 (Washington, D.C.); RAND Corporation, RAND Standards for High-Quality Research and Analysis (Santa Monica, Calif.: June 2004); Air Force, Office of Aerospace Studies, Analysts Handbook: On Understanding the Nature of Analysis (January 2000); Air Force, Office of Aerospace Studies, Air Force Analysis Handbook, A Guide for Performing Analysis Studies: For Analysis of Alternatives or Functional Solution Analysis (July 2004); Department of Defense, DOD Modeling and Simulation (M&S) Verification, Validation, and Accreditation (VV&A), Instruction 5000.61 (Washington, D.C.: May 2003); Department of Defense, Data Collection, Development, and Management in Support of Strategic Analysis, Directive 8260.1 (Washington, D.C.: Dec. 2, 2003); and Department of Defense, Implementation of Data Collection, Development, and Management for Strategic Analyses, Instruction 8260.2 (Washington, D.C.: Jan. 21, 2003). For a more complete description of these standards and how we identified them, see GAO-06-938, appendix I. In applying these standards, we focused on the extent to which stakeholders were involved in study design and analysis, as well as the extent to which assumptions were reasonable and their effects on results were clearly explained. We assessed MDA briefings that explained the modeling used for the technical analysis projecting the elements' capabilities. To assess the basis for the assumed performance parameters used to model each element's performance, we traced and verified a nonprobability sample of these parameters to their source documentation and concluded that they were generally supported.
To evaluate the DOD report's characterization of threats, we reviewed Defense Intelligence Agency documents and discussed the type and capability of threats and expected BMDS capabilities with officials from the Office of the Secretary of Defense for Program Analysis and Evaluation and the Defense Intelligence Agency. In addition, to gain an understanding of the extent to which DOD has assessed warfighter quantities for the boost and ascent phase elements, the development of operational concepts, and the operational implications of employing the boost and ascent phase elements at forward locations, we evaluated DOD and service guidance on assessing sites and support for new weapon systems and discussed these issues with officials from the Joint Staff; U.S. Army Headquarters and Space and Missile Defense Command; U.S. Strategic Command; the office of the Chief of Naval Operations Surface Warfare Directorate, Ballistic Missile Defense Division; Air Combat Command; and the office of the Secretary of the Air Force for Acquisition, Global Power Directorate. Finally, we discussed the results of all our analyses with officials in the Joint Staff; U.S. Strategic Command; the Army's Space and Missile Defense Command; the Office of the Secretary of Defense for Acquisition, Technology, and Logistics; the Missile Defense Agency; the office of the Chief of Naval Operations Surface Warfare Directorate, Ballistic Missile Defense Division; the office of the Secretary of the Air Force for Acquisition, Global Power Directorate; and Air Combat Command. To assess the extent to which DOD presented cost information to Congress that is complete and transparent, we first assessed how MDA developed its estimates and then compared the method by which those estimates were prepared with key principles—compiled from various DOD and GAO sources—that describe how to develop accurate and reliable life-cycle cost estimates, in order to determine the estimates' completeness and the extent to which DOD took steps to assess confidence in them. The following were our sources for compiling the cost criteria: Department of Defense, Assistant Secretary of Defense (Program Analysis and Evaluation), Cost Analysis Guidance and Procedures, DOD Manual 5000.4-M (December 1992); Department of Defense, Office of the Secretary of Defense Cost Analysis Improvement Group, Operating and Support Cost Estimating Guide (May 1992); Department of Defense, Defense Acquisition University, Defense Acquisition Guidebook (online at http://akss.dau.mil/dag); Department of Defense, Defense Acquisition University, Introduction to Cost Analysis (April 2006); Air Force, Office of Aerospace Studies, Air Force Analysis Handbook: A Guide for Performing Analysis Studies for Analysis of Alternatives or Functional Solution Analysis (July 2004); Air Force, Base Support and Expeditionary Site Planning, Air Force Instruction 10-404 (March 2004); and GAO, GAO Cost Assessment Guide (currently under development). In addition, we met with DOD officials from MDA, U.S. Strategic Command, the Joint Staff, and the Army, Navy, and Air Force to determine the extent to which they were involved in developing the cost estimates for the DOD report.
Finally, we corroborated our methodology and results with officials from the Office of the Under Secretary of Defense, Program Analysis and Evaluation (Cost Analysis Improvement Group) and the Office of the Under Secretary of Defense (Comptroller), and they agreed that our methodology for examining the report's cost estimates was reasonable and consistent with key principles for developing accurate and reliable life-cycle cost estimates. We identified some data limitations with the cost estimates, which we discuss in this report. We provided a draft of this report to DOD for its review and incorporated its comments where appropriate. Our review was conducted between June 2006 and February 2007 in accordance with generally accepted government auditing standards. In addition to the individuals named above, Barbara H. Haynes and Gwendolyn R. Jaffe, Assistant Directors; Brenda M. Waterfield; Todd Dice; Jeffrey R. Hubbard; Nabajyoti Barkakati; Hai V. Tran; Ron La Due Lake; and Susan C. Ditto made key contributions to this report. Defense Transportation: Study Limitations Raise Questions about the Adequacy and Completeness of the Mobility Capabilities Study and Report. GAO-06-938. Washington, D.C.: September 20, 2006. Defense Management: Actions Needed to Improve Operational Planning and Visibility of Costs for Ballistic Missile Defense. GAO-06-473. Washington, D.C.: May 31, 2006. Defense Acquisitions: Missile Defense Agency Fields Initial Capability but Falls Short of Original Goal. GAO-06-327. Washington, D.C.: March 15, 2006. Defense Acquisitions: Actions Needed to Ensure Adequate Funding for Operation and Sustainment of the Ballistic Missile Defense System. GAO-05-817. Washington, D.C.: September 6, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-962R. Washington, D.C.: August 4, 2005. Military Transformation: Actions Needed by DOD to More Clearly Identify New Triad Spending and Develop a Long-term Investment Approach. GAO-05-540. Washington, D.C.: June 30, 2005. Defense Acquisitions: Status of Ballistic Missile Defense Program in 2004. GAO-05-243. Washington, D.C.: March 31, 2005. Future Years Defense Program: Actions Needed to Improve Transparency of DOD's Projected Resource Needs. GAO-04-514. Washington, D.C.: May 7, 2004. Missile Defense: Actions Are Needed to Enhance Testing and Accountability. GAO-04-409. Washington, D.C.: April 23, 2004. Missile Defense: Actions Being Taken to Address Testing Recommendations, but Updated Assessment Needed. GAO-04-254. Washington, D.C.: February 26, 2004. Missile Defense: Additional Knowledge Needed in Developing System for Intercepting Long-Range Missiles. GAO-03-600. Washington, D.C.: August 21, 2003. Missile Defense: Alternate Approaches to Space Tracking and Surveillance System Need to Be Considered. GAO-03-597. Washington, D.C.: May 23, 2003. Missile Defense: Knowledge-Based Practices Are Being Adopted, but Risks Remain. GAO-03-441. Washington, D.C.: April 30, 2003. Missile Defense: Knowledge-Based Decision Making Needed to Reduce Risks in Developing Airborne Laser. GAO-02-631. Washington, D.C.: July 12, 2002. Missile Defense: Review of Results and Limitations of an Early National Missile Defense Flight Test. GAO-02-124. Washington, D.C.: February 28, 2002. Missile Defense: Cost Increases Call for Analysis of How Many New Patriot Missiles to Buy. GAO/NSIAD-00-153. Washington, D.C.: June 29, 2000.
Missile Defense: Schedule for Navy Theater Wide Program Should Be Revised to Reduce Risk. GAO/NSIAD-00-121. Washington, D.C.: May 31, 2000. | The Department of Defense (DOD) has spent about $107 billion since the mid-1980s to develop a capability to destroy incoming ballistic missiles. DOD has set key decision points for deciding whether to further invest in capabilities to destroy missiles during the initial phases after launch. In March 2006, DOD issued a report on these capabilities in response to two mandates. To satisfy a direction from the House Appropriations Committee, GAO agreed to review the report. To assist Congress in evaluating DOD's report and preparing for future decisions, GAO studied the extent to which DOD (1) analyzed technical and operational issues and (2) presented complete cost information. To do so, GAO assessed the report's methodology, its explanation of assumptions and their effects on results, and whether DOD followed key principles for developing life-cycle costs. The report DOD's Missile Defense Agency (MDA) submitted to Congress in March 2006 included some useful technical and operational information on boost and ascent phase capabilities by describing these elements, listing upcoming decision points, and discussing geographic areas where boost and ascent elements could intercept missiles shortly after launch. However, the information in the report has several limitations because the analysis did not involve key DOD stakeholders, such as the services and combatant commands, in preparing the report and did not clearly explain modeling assumptions and their effects on results, as required by relevant research standards. MDA's report states that, at this time, some data are limited, and operational concepts that discuss operations from forward locations have not been fully vetted with the services and combatant commands. However, the report did not explain how each element's performance may change if developing technologies do not perform as expected. Also, it did not address the challenges in establishing bases at the locations cited or provide information on the quantity of each element required for various deployment periods. Moving forward, DOD has an opportunity to involve stakeholders in analyzing operational and technical issues so that senior DOD and congressional leaders will have more complete information on which to base upcoming program decisions following key tests in 2008 and 2009 for the Kinetic Energy Interceptor and Airborne Laser boost and ascent phase programs. MDA's report provided some cost estimates for developing and fielding boost and ascent phase capabilities, but these estimates have several limitations and will require refinement before they can serve as a basis for DOD and congressional decision makers to compare life-cycle costs for the elements. MDA's report states that there is uncertainty in estimating life-cycle costs because the elements are early in development. However, based on a comparison of the estimates in the report with key principles for developing life-cycle cost estimates, GAO found that MDA's estimates did not include all cost categories, including costs to establish and sustain operations at U.S. bases and at forward overseas operating locations. Also, MDA's estimates did not calculate costs based on realistic quantities of each element the combatant commanders or services would need to conduct the mission. Finally, MDA did not conduct a sensitivity analysis to assess the effect of key cost drivers on total costs.
MDA officials stated that further analysis of the costs for each element, along with measures to assess confidence in the estimates, would help better inform DOD and congressional decision makers making investment decisions following key tests in 2008 and 2009. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
An effective military medical surveillance system needs to collect reliable information on (1) the health care provided to service members before, during, and after deployment; (2) where and when service members were deployed; (3) environmental and occupational health threats or exposures during deployment (in theater) and the appropriate protective measures and countermeasures; and (4) baseline health status and subsequent health changes. This information is needed to monitor the overall health condition of deployed troops, to inform them of potential health risks, and to maintain and improve the health of service members and veterans. In times of conflict, a military medical surveillance system is particularly critical to ensure the deployment of a fit and healthy force and to prevent disease and injuries from degrading force capabilities. DOD needs reliable medical surveillance data to determine who is fit for deployment; to prepare service members for deployment, including providing vaccinations to protect against possible exposure to environmental and biological threats; and to treat physical and psychological conditions that result from deployment. DOD also uses this information to develop educational measures for service members and medical personnel to ensure that service members receive appropriate care. Reliable medical surveillance information is also critical for VA to carry out its missions. In addition to VA's better-known missions—to provide health care and benefits to veterans and to conduct medical research and education—VA has a fourth mission: to provide medical backup to DOD in times of war and civilian health care backup in the event of disasters producing mass casualties. VA needs reliable medical surveillance data from DOD to treat casualties of military conflicts, provide health care to veterans who have left active duty, assist in conducting research should troops be exposed to environmental or occupational hazards, and identify service-connected disabilities to adjudicate veterans' disability claims. Investigations into the unexplained illnesses of service members and veterans who had been deployed to the Persian Gulf uncovered the need for DOD to implement an effective medical surveillance system to obtain comprehensive medical data on deployed service members, including Reservists and National Guardsmen. Epidemiological and health outcome studies to determine the causes of these illnesses have been hampered by a lack of (1) complete baseline health data on Gulf War veterans; (2) assessments of their potential exposure to environmental health hazards; and (3) specific health data on care provided before, during, and after deployment. The 1996 investigations by the Presidential Advisory Committee on Gulf War Veterans' Illnesses and IOM into the causes of illnesses experienced by Gulf War veterans confirmed the need for more effective medical surveillance capabilities. The National Science and Technology Council, as tasked by the Presidential Advisory Committee, also assessed the medical surveillance system for deployed service members. In 1998, the council reported that inaccurate recordkeeping made it extremely difficult to get a clear picture of what risk factors might be responsible for Gulf War illnesses. It also reported that without reliable deployment and health assessment information, it was difficult to ensure that veterans' service-related benefits claims were adjudicated appropriately.
The council concluded that the Gulf War exposed many deficiencies in the ability to collect, maintain, and transfer accurate data describing the movement of troops, potential exposures to health risks, and medical incidents in theater. The council reported that the government's recordkeeping capabilities were not designed to track troop and asset movements to the degree needed to determine who might have been exposed to any given environmental or wartime health hazard. The council also reported major deficiencies in health risk communications, including not adequately informing service members of the risks associated with countermeasures such as vaccines. Without this information, service members may not recognize potential side effects of these countermeasures or take prompt precautionary actions, including seeking medical care. In response to these reports, DOD strengthened its medical surveillance system under Operation Joint Endeavor, when service members were deployed to Bosnia-Herzegovina, Croatia, and Hungary. In addition to implementing departmentwide medical surveillance policies, DOD developed specific medical surveillance programs to improve the monitoring and tracking of environmental and biomedical threats in theater. While these efforts represented important steps, a number of deficiencies remained. On the positive side, the Assistant Secretary of Defense (Health Affairs) issued a health surveillance policy for troops deploying to Bosnia. This guidance stressed the need to (1) identify health threats in theater, (2) routinely and uniformly collect and analyze information relevant to troop health, and (3) disseminate this information in a timely manner. DOD required medical units to develop weekly reports on the incidence rates of major categories of diseases and injuries during all deployments (a sketch of this rate calculation appears below). Data from these disease and non-battle-injury reports showed theaterwide illness and injury trends, so that abnormal trends could be identified and recommendations for preventive measures or other actions could be forwarded to the theater medical command. DOD also established the U.S. Army Center for Health Promotion and Preventive Medicine—a major enhancement to DOD's ability to perform environmental monitoring and tracking. For example, the center operates and maintains a repository of service members' serum samples—the largest serum repository in the world—for epidemiological studies to examine potential health issues for service members and veterans. The center also operates and maintains a system for integrating, analyzing, and reporting data from multiple sources relevant to the health and readiness of military personnel. This capability was augmented with the establishment of the 520th Theater Army Medical Laboratory—a deployable public health laboratory for providing environmental sampling and analysis in theater. The sampling results can be used to identify specific preventive measures and safeguards to be taken to protect troops from harmful exposures and to develop procedures to treat anyone exposed to health hazards. During Operation Joint Endeavor, this laboratory was used in Tuzla, Bosnia—where most of the U.S. forces were located—to conduct air, water, soil, and other environmental monitoring. Despite the Department's progress, we and others have reported on DOD's implementation difficulties during Operation Joint Endeavor and the shortcomings in DOD's ability to maintain reliable health information on service members.
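Before turning to those shortcomings, the weekly disease and non-battle-injury (DNBI) reporting mentioned above can be illustrated with a minimal sketch. The category names, case counts, and troop strength below are hypothetical, and the rate convention (weekly cases as a percentage of average troop strength) is a common DNBI surveillance convention rather than one taken from this testimony:

```python
def weekly_dnbi_rate(cases: int, avg_troop_strength: int) -> float:
    """Weekly cases in a category as a percentage of average troop strength."""
    return 100.0 * cases / avg_troop_strength

# Hypothetical one-week counts for a deployed population of 18,000.
week_counts = {"respiratory": 140, "gastrointestinal": 95, "non-battle injury": 60}
strength = 18_000

for category, cases in week_counts.items():
    print(f"{category}: {weekly_dnbi_rate(cases, strength):.2f}% per week")
```

Comparing such rates week over week is what allows abnormal trends to be flagged for the theater medical command.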
Knowledge of who is deployed and their whereabouts is critical for identifying individuals who may have been exposed to health hazards while deployed. However, in May 1997, we reported that inaccurate information on who was deployed and where and when they were deployed—a problem during the Gulf War—continued to be a concern during Operation Joint Endeavor. For example, we found that the Defense Manpower Data Center (DMDC) database—where military services are required to report deployment information—did not include records for at least 200 Navy service members who were deployed. Conversely, the DMDC database included Air Force personnel who were never actually deployed. In addition, we reported that DOD had not developed a system for tracking the movement of service members within theater. IOM also reported that during Operation Joint Endeavor, the locations of deployed service members were still not systematically documented or archived for future use. We also reported in May 1997 that for the more than 600 Army personnel whose medical records we reviewed, DOD's centralized database for postdeployment medical assessments did not capture 12 percent of the assessments conducted in theater and 52 percent of those conducted after returning home. These data are needed by epidemiologists and other researchers to assess at an aggregate level the changes that have occurred between service members' pre- and postdeployment health assessments. Further, many service members' medical records did not include complete information on the in-theater postdeployment medical assessments that had been conducted. The Army's European Surgeon General attributed missing in-theater health information to DOD's policy of having service members hand-carry paper assessment forms from the theater to their home units, where their permanent medical records were maintained. The assessments were frequently lost en route. We have also reported that not all medical encounters in theater were being recorded in individual records. Our 1997 report indicated that this problem was particularly common for immunizations given in theater. Detailed data on service members' vaccine history are vital for scheduling the regimen of vaccinations and boosters and for tracking individuals who received vaccinations from a specific vaccine lot in the event that health concerns about the lot emerge. We found that almost one-fourth of the service members' medical records that we reviewed did not document that they had received a vaccine for tick-borne encephalitis. In addition, in its 2000 report, IOM cited limited progress in medical recordkeeping for deployed active duty and reserve forces and emphasized the need for records of immunizations to be included in individual medical records. Responding to our and others' recommendations to improve information on service members' deployments, in-theater medical encounters, and immunizations, DOD has continued to revise and expand its policies related to medical surveillance, and the system continues to evolve. In addition, in 2000, DOD released its Force Health Protection plan, which presents the Department's vision for protecting deployed forces and includes the goal of joint medical logistics support for all services by 2010. The vision articulated in this capstone document emphasizes force fitness and health preparedness, casualty prevention, and casualty care and management.
A key component of the plan is improved monitoring and surveillance of health threats in military operations and more sophisticated data collection and recordkeeping before, during, and after deployments. However, IOM criticized DOD's progress in implementing its medical surveillance program, as well as its failure to implement several recommendations that IOM had made. In addition, IOM raised concerns about DOD's ability to achieve the vision outlined in the Force Health Protection plan. We have also reported that some of DOD's programs designed to improve medical surveillance have not been fully implemented. IOM's 2000 report presented the results of its assessment of DOD's progress in implementing recommendations for improving medical surveillance made by IOM and several others. IOM stated that, although DOD generally concurred with the findings of these groups, DOD had made few concrete changes at the field level. In addition, environmental and medical hazards were not yet well integrated in the information provided to commanders. The IOM report notes that a major reason for this lack of progress is that no single authority within DOD has been assigned responsibility for the implementation of the recommendations and plans. IOM said that because of the complexity of the tasks and the overlapping areas of responsibility involved, the single authority must rest with the Secretary of Defense. In its report, IOM describes six strategies that in its view demand further emphasis and require greater efforts by DOD: (1) use a systematic process to prospectively evaluate non-battle-related risks associated with the activities and settings of deployments; (2) collect and manage environmental data and personnel location, biological samples, and activity data to facilitate analysis of deployment exposures and to support clinical care and public health activities; (3) develop the risk assessment, risk management, and risk communication skills of military leaders at all levels; (4) accelerate implementation of a health surveillance system that completely spans an individual's time in service; (5) implement strategies to address medically unexplained symptoms in deployed populations; and (6) implement a joint computerized patient record and other automated recordkeeping that meets the information needs of those involved with individual care and military public health. DOD guidance established requirements for recording and tracking vaccinations and automating medical records for archiving and recalling medical encounters. While our work indicates that DOD has made some progress in improving its immunization information, the Department faces numerous challenges in implementing an automated medical record. DOD also recently established guidelines and additional policy initiatives for improving military medical surveillance. In October 1999, we reported that DOD's Vaccine Adverse Event Reporting System—which relies on medical staff or service members to provide needed vaccine data—may not have included some information on adverse reactions because these personnel had not received the guidance needed to submit reports to the system. According to DOD officials, medical staff may also report any other reaction they think might be caused by the vaccine, but because this is not stated explicitly in DOD's guidance on vaccinations, some medical staff may be unsure about which reactions to report.
Also, in April 2000, we testified that vaccination data were not consistently recorded in paper records and in a central database, as DOD requires. For example, when comparing records from the database with paper records at four military installations, we found that information on the number of vaccinations given to service members, the dates of the vaccinations, and the vaccine lot numbers was inconsistent at all four installations. At one installation, the database and records did not agree 78 percent to 92 percent of the time. DOD has begun to make progress in implementing our recommendations, including ensuring timely and accurate data in its immunization tracking system. The Gulf War revealed the need to have information technology play a bigger role in medical surveillance to ensure that information is readily accessible to DOD and VA. In August 1997, DOD established requirements that called for the use of innovative technology, such as an automated medical record device that can document inpatient and outpatient encounters in all settings and that can archive the information for local recall and format it for an injury, illness, and exposure surveillance database. Also, in 1997, the President, responding to deficiencies in DOD's and VA's data capabilities for handling service members' health information, called for the two agencies to start developing a comprehensive, lifelong medical record for each service member. As we reported in April 2001, DOD's and VA's numerous databases and electronic systems for capturing mission-critical data, including health information, are not linked, and information cannot be readily shared. DOD has several initiatives under way to link many of its information systems—some with VA. For example, in an effort to create a comprehensive, lifelong medical record for service members and veterans and to allow health care professionals to share clinical information, DOD and VA, along with the Indian Health Service (IHS), initiated the Government Computer-Based Patient Record (GCPR) project in 1998. GCPR is seen as yielding a number of potential benefits, including improved research and quality of care, and clinical and administrative efficiencies. However, our April 2001 report described several factors—including planning weaknesses, competing priorities, and inadequate accountability—that made it unlikely that DOD and VA would accomplish GCPR or realize its benefits in the near future. To strengthen the management and oversight of GCPR, we made several recommendations, including designating a lead entity with a clear line of authority for the project and creating comprehensive and coordinated plans for sharing meaningful, accurate, and secure patient health data. For the near term, DOD and VA have decided to reconsider their approach to GCPR and focus on allowing VA to access selected health data on service members captured by DOD. According to DOD and VA officials, full operation is expected to begin in the third quarter of this fiscal year, once testing of the near-term system has been completed. DOD health information is an especially critical information source given VA's fourth mission to provide medical backup to the military health system in times of national emergency and war. Under the near-term effort, VA will be able to access laboratory and radiology results, outpatient pharmacy data, and patient demographic information.
This approach, however, will not provide VA access to information on the health status of personnel when they enter military service; on medical care provided to Reservists while not on active duty; or on the care military personnel received from providers outside DOD, including TRICARE providers. In addition, because VA will only be able to view this information, physicians will not be able to easily organize or otherwise manipulate the data for quick review or research. DOD has several other initiatives for improving its information technology capabilities, which are in various stages of development. For example, DOD is developing the Theater Medical Information Program (TMIP), which is intended to capture medical information on deployed personnel and link it with medical information captured in the Department's new medical information system. As of October 2001, officials told us that they planned to begin field testing for TMIP in spring 2002, with deployment expected in 2003. A component system of TMIP—Transportation Command Regulating and Command and Control Evacuation System—is also under development and aims to allow casualty tracking and provide in-transit visibility of casualties during wartime and peacetime. Also under development is the Global Expeditionary Medical System (GEMS), which DOD characterizes as a stepping stone to an integrated biohazard surveillance and detection system. In addition to its ongoing information technology initiatives, DOD recently issued two major policies for advancing its military medical surveillance system. Specifically, in December 2001, DOD issued clinical practice guidelines, developed collaboratively with VA, to provide a structure for primary care providers to evaluate and manage patients with deployment-related health concerns. According to DOD, the guidelines were issued in response to congressional concerns and IOM's recommendations. The guidelines are expected to improve the continuity of care and health-risk communication for service members and their families for the wide variety of medical concerns that are related to military deployments. Because the guidelines became effective January 31, 2002, it is too early for us to comment on their implementation. Finally, DOD issued updated procedures on February 1, 2002, for deployment health surveillance and readiness. These procedures supersede those laid out in DOD's December 1998 memorandum. The 2002 memorandum adds important procedures for occupational and environmental health surveillance and updates pre- and postdeployment health assessment requirements. These new procedures take effect on March 1, 2002. According to officials from DOD's Health Affairs office, military medical surveillance is a top priority, as evidenced by the Department's having placed responsibility for implementing medical surveillance policies with one authority—the Deputy Assistant Secretary of Defense for Force Health Protection and Readiness. However, these officials also characterized force health protection as a concept made up of multiple programs across the services. For example, we learned that each service is responsible for implementing DOD's policy initiatives for achieving force health protection goals. This raises concerns about how the services will uniformly collect and share core information on deployments and how they will integrate data on the health status of service members.
These officials also confirmed that the success of DOD's military medical surveillance policies will depend on the priority given to them and the resources dedicated to their implementation. Clearly, the need for comprehensive health information on service members and veterans is compelling, and much more needs to be done. However, it is also a very difficult task because of uncertainties about what conditions may exist in a deployed setting, such as potential military conflicts, environmental hazards, and the frequency of troop movements. Moreover, the outlook for successful surveillance is complicated by scientific uncertainty regarding the health effects of exposures and by changes in technology that affect the feasibility of monitoring and tracking troop movements. While progress is being made, DOD will need to continue to make a concerted effort to resolve the remaining deficiencies in its surveillance system and be vigilant in its oversight. VA's ability to perform its missions to care for veterans and compensate them for their service-connected conditions will depend in part on the adequacy of DOD's medical surveillance system. For further information, please contact Cynthia A. Bascetta at (202) 512-7101. Individuals making key contributions to this testimony included Ann Calvaresi Barr, Diana Shevlin, Karen Sloan, and Keith Steck. | The Department of Defense (DOD) and the Department of Veterans Affairs (VA) recently established a medical surveillance system to respond to the health care needs of both military personnel and veterans. A medical surveillance system involves the ongoing collection and analysis of uniform information on deployments, environmental health threats, disease monitoring, medical assessments, and medical encounters and its timely dissemination to military commanders, medical personnel, and others. GAO and others have reported extensively on weaknesses in DOD's medical surveillance capability and performance during the Gulf War and Operation Joint Endeavor. Investigations into the unexplained illnesses of Gulf War veterans revealed DOD's inability to collect, maintain, and transfer accurate data on the movement of troops, potential exposures to health risks, and medical incidents during deployment. DOD improved its medical surveillance system under Operation Joint Endeavor, which provided useful information to military commanders and medical personnel. However, several problems persist. DOD has several efforts under way to improve the reliability of deployment information and enhance its information technology capabilities. Although its recent policies and reorganization reflect a commitment to establish a comprehensive medical surveillance system, much needs to be done to implement the system. To the extent DOD's medical surveillance capability is realized, VA will be better able to serve veterans and provide backup to DOD in times of war. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
For retirees aged 65 and older, Medicare is typically the primary source of health insurance coverage. Medicare covers nearly 43 million beneficiaries. The program covers hospital care as well as physician office visits and outpatient services and, effective January 1, 2006, prescription drugs. Medicare beneficiaries may rely on private retiree health coverage through former employment or through individually purchased Medicare supplemental insurance (known as Medigap) to cover some or all of the costs Medicare does not cover, such as deductibles, copayments, and coinsurance. For 2004, the most recent data available, the Medicare Current Beneficiary Survey (MCBS) found that about one-third of Medicare-eligible beneficiaries obtained supplemental coverage from a former employer or union. Employment-based retiree health benefits are typically offered as a voluntary benefit to retirees, thereby giving sponsors of these benefits the option of decreasing or eliminating benefits. However, some sponsors may be prevented from making immediate changes to coverage because of union contracts, for example. Benefit surveys have found that the percentage of employers offering retiree health benefits has been decreasing since the early 1990s. For example, according to a series of surveys conducted by Mercer, the percentage of employers with 500 or more employees offering health insurance to Medicare-eligible retirees declined from 44 percent in 1993 to 29 percent in 2006, although this trend had leveled off from 2001 through 2006. (See app. I for more information on employment-based retiree health coverage.) Sponsors typically integrate their retiree health benefits with Medicare once retirees reach age 65, with Medicare as the primary payer and the sponsor as the secondary payer. Several types of integration occur between sponsors and Medicare. For example, some sponsors coordinate through a carve-out approach, in which the sponsor calculates its normal benefit and then subtracts (or carves out) the Medicare benefit, generally leaving the retiree with out-of-pocket costs comparable to having the employment-based plan without Medicare. Another approach used by sponsors is full coordination of benefits, in which the plan pays the difference between the total health care charges and the Medicare reimbursement amount, often providing retirees complete coverage and protection from out-of-pocket costs. (Both approaches are sketched below.) The provision of employment-based retiree health benefits may vary depending on a variety of factors, including whether the sponsor is in the private or public sector and the type of industry. The 2006 Kaiser Family Foundation and Health Research and Educational Trust (HRET) survey, for example, showed that 82 percent of state and local government employers with 200 or more employees offered coverage to retirees, compared with 35 percent of employers with 200 or more employees across all industries. Coverage can also differ between retirees under age 65 and those eligible for Medicare, although sponsors often cover both groups of retirees. For example, some sponsors offer retirees under age 65 a preferred provider organization plan but offer a fee-for-service plan for retirees eligible for Medicare. While the provision of employment-based retiree health benefits varies by employer size, plan type, industry, and whether retirees are Medicare-eligible, these benefits almost always include coverage of prescription drugs.
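To make the two integration approaches above concrete, here is a minimal sketch in Python. The dollar amounts are hypothetical, and real coordination-of-benefits rules involve deductibles, coinsurance, and covered-charge limits that are omitted here:

```python
def carve_out_plan_payment(normal_plan_benefit: float, medicare_payment: float) -> float:
    """Carve out: the sponsor computes its normal benefit, then subtracts Medicare's payment."""
    return max(0.0, normal_plan_benefit - medicare_payment)

def full_cob_plan_payment(total_charges: float, medicare_payment: float) -> float:
    """Full coordination of benefits: the plan pays whatever Medicare does not."""
    return max(0.0, total_charges - medicare_payment)

total_charges = 1000.0       # hypothetical allowed charges for a service
medicare_pays = 800.0        # hypothetical Medicare reimbursement
normal_plan_benefit = 900.0  # what the plan would have paid absent Medicare

for approach, plan_pays in [
    ("carve out", carve_out_plan_payment(normal_plan_benefit, medicare_pays)),
    ("full coordination", full_cob_plan_payment(total_charges, medicare_pays)),
]:
    retiree_pays = total_charges - medicare_pays - plan_pays
    print(f"{approach}: plan pays ${plan_pays:.0f}, retiree pays ${retiree_pays:.0f}")
```

In this example the carve-out retiree owes $100—the same as under the employment-based plan alone—while full coordination leaves the retiree with no out-of-pocket cost, matching the descriptions above.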
The MMA created a prescription drug benefit for beneficiaries, called Medicare Part D, which became effective January 1, 2006. This voluntary benefit is available to all Medicare beneficiaries and is the first comprehensive prescription drug benefit ever offered under the Medicare program. In January 2007 (the most recent data available), CMS reported that approximately 39 million beneficiaries were receiving prescription drug coverage from a combination of Medicare Part D, employment-based coverage, and other sources, such as the Department of Veterans Affairs. The drug benefit is offered primarily through two types of private plans created as a result of the MMA: stand-alone prescription drug plans (PDP) that supplement fee-for-service Medicare, and Medicare Advantage prescription drug (MA-PD) plans, such as coordinated care plans, that cover drugs and other Medicare benefits. To be in operation for 2006, prospective PDPs and MA-PD plans had to apply by March 2005 and were approved in September 2005. At a minimum, these plans were required to offer the standard Medicare Part D benefit or alternative coverage that was at least equal in value. According to the Kaiser Family Foundation, plans approved for 2006 often varied from the standard Part D benefit in benefit design and covered drugs. For example, although the standard Part D benefit had a $250 deductible for 2006, Kaiser reported that 58 percent of PDPs and 79 percent of MA-PD plans approved for 2006 had no deductible requirement. In 2007, a total of 1,875 PDPs are offered nationally across 34 PDP regions. The standard Medicare Part D benefit in 2007 has a $265 deductible (up from $250 in 2006) and 25 percent coinsurance up to an initial coverage limit of $2,400 in total drug costs ($2,250 in 2006), followed by a coverage gap in which enrollees pay 100 percent of their drug costs until they have spent $3,850 out of pocket ($3,600 in 2006). Thereafter, the plan pays approximately 95 percent of any further drug costs. The standard benefit amounts are set to increase annually by the rate of per capita Part D spending growth. Assistance with drug benefit premiums and cost-sharing is available for certain low-income beneficiaries. The MMA resulted in several options for sponsors of employment-based retiree health plans to provide prescription drug coverage to Medicare-eligible retirees. These options are as follows: Retiree Drug Subsidy (RDS). Sponsors with plans ending in 2007 that offer prescription drug coverage that is actuarially equivalent to that under Part D can receive a federal tax-free subsidy equal to 28 percent of the allowable gross retiree prescription drug costs over $265 (up from $250 for plans ending in 2006) through $5,350 (up from $5,000 for plans ending in 2006), with a maximum subsidy of $1,423 for each Part D-eligible individual who is enrolled in the employment-based plan instead of Part D. (The standard benefit phases and this subsidy formula are sketched below.) Actuarial equivalence, which is attested to by a qualified actuary, is intended to certify that a retiree health benefit sponsor's coverage is at least as generous as the standard Part D coverage. Sponsors must demonstrate actuarial equivalence to qualify for the RDS, and sponsors will only receive the RDS for those Medicare beneficiaries who do not enroll in the Part D benefit. Sponsors may opt to receive RDS payments on a monthly, quarterly, or annual basis. In order to receive the RDS, sponsors must apply to and receive approval from CMS.
For 2007 and subsequent years, sponsors are required to apply for the RDS no later than 90 days prior to the beginning of the plan year. For example, sponsors that applied for a calendar year 2007 plan would have had to apply no later than midnight on October 2, 2006. Additional steps involved in applying for and receiving the RDS include submitting a qualified actuary’s attestation that the plan meets the RDS actuarial equivalence standard; certifying that the creditable coverage status of the plan has been or will be disclosed to plan participants and CMS; electronically submitting and periodically updating enrollment information about retirees and dependents; and electronically submitting aggregate data about drug costs incurred and reconciling costs at year-end. Provide Supplemental Coverage. Sponsors can set up their own separate plans that supplement, or wrap around, Part D coverage. Apply to Offer Own PDP or MA-PD Plan. Sponsors can apply to CMS to offer their own PDP or MA-PD plan for retirees. CMS has waived or modified Part D requirements added by the MMA that hinder the design of, the offering of, or the enrollment in a Part D plan offered by a sponsor. For example, CMS has issued guidance that allows sponsors to limit coverage to retirees only, whereas other Part D plans must offer coverage to all eligible individuals residing within a certain location. Contract with a PDP or MA-PD Plan. Sponsors can contract with a PDP or MA-PD plan to offer the standard Part D prescription drug benefits or enhanced benefits to the sponsors’ retirees who are eligible for Medicare. For example, an enhanced benefit could allow retirees to pay a lower deductible or lower copayment than the standard Part D benefit requires. As with the previous MMA option, CMS has waived or modified Part D requirements that hinder the design of, the offering of, or the enrollment in these types of arrangements. Payment of Part D Premiums. Sponsors can pay for some or all of the Part D premiums for their eligible retirees. According to survey data we reviewed, the majority of surveyed retiree health benefit sponsors reported that they continued to offer prescription drug coverage and accepted the RDS for 2006. Survey data also indicated that much smaller percentages of sponsors took other MMA options— such as offering supplemental, or wrap-around, prescription drug coverage or contracting with a PDP or MA-PD plan. According to survey data we reviewed, the majority of surveyed sponsors reported that they continued to offer prescription drug coverage and accepted the RDS for plans ending in 2006. However, the size of the reported majority differed across the surveys. For example, the 2006 Kaiser/Hewitt survey, which surveyed private sector employers that offered retiree health benefits and had 1,000 or more employees, found that 82 percent of these employers accepted the RDS for 2006. In contrast, the 2006 Mercer survey found that 51 percent of surveyed private and public employers that offered retiree health benefits and had 500 or more employees continued to offer prescription drug coverage and accepted the RDS for 2006. Another survey of state and local public sector sponsors that offered retiree health benefits found that 79 percent reported accepting the RDS for 2006. Similarly, a survey of multiemployer plan sponsors that offered retiree health benefits found that 71 percent reported accepting the RDS for 2006. 
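Before turning to how sponsors weighed these options, the dollar figures above can be made concrete with a minimal sketch of the 2007 standard Part D benefit phases and the RDS formula. The function names and sample spending levels are ours, and the logic is simplified (for instance, it glosses over what counts as "allowable" or "out-of-pocket" spending under the actual rules):

```python
# 2007 standard Part D parameters from the text above.
DEDUCTIBLE = 265.00        # enrollee pays all costs up to the deductible
INITIAL_LIMIT = 2400.00    # initial coverage limit, in total drug costs
OOP_THRESHOLD = 3850.00    # out-of-pocket spending that ends the coverage gap
COINSURANCE = 0.25         # enrollee share between deductible and limit
CATASTROPHIC_SHARE = 0.05  # approximate enrollee share past the threshold

# Derived phase boundaries.
OOP_AT_LIMIT = DEDUCTIBLE + COINSURANCE * (INITIAL_LIMIT - DEDUCTIBLE)  # $798.75
COSTS_AT_THRESHOLD = INITIAL_LIMIT + (OOP_THRESHOLD - OOP_AT_LIMIT)     # $5,451.25

def enrollee_out_of_pocket(total_drug_costs: float) -> float:
    """Approximate enrollee out-of-pocket cost under the 2007 standard benefit."""
    if total_drug_costs <= DEDUCTIBLE:
        return total_drug_costs
    if total_drug_costs <= INITIAL_LIMIT:
        return DEDUCTIBLE + COINSURANCE * (total_drug_costs - DEDUCTIBLE)
    if total_drug_costs <= COSTS_AT_THRESHOLD:
        # Coverage gap: the enrollee pays 100 percent of costs above the limit.
        return OOP_AT_LIMIT + (total_drug_costs - INITIAL_LIMIT)
    # Catastrophic phase: the plan pays roughly 95 percent of further costs.
    return OOP_THRESHOLD + CATASTROPHIC_SHARE * (total_drug_costs - COSTS_AT_THRESHOLD)

def rds_payment(allowable_drug_costs: float) -> float:
    """Tax-free RDS: 28 percent of allowable costs over $265 through $5,350."""
    return 0.28 * max(0.0, min(allowable_drug_costs, 5350.00) - 265.00)

print(round(enrollee_out_of_pocket(2400.00), 2))  # 798.75 at the initial coverage limit
print(round(enrollee_out_of_pocket(6000.00), 2))  # 3877.44, past the catastrophic threshold
print(round(rds_payment(6000.00), 2))             # 1423.8, the maximum subsidy cited above
```

Note that the $1,423 maximum subsidy in the text is simply 28 percent of the $265-to-$5,350 band: 0.28 x ($5,350 - $265) = $1,423.80.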
According to representatives from both Kaiser/Hewitt and Mercer, the percentages of surveyed employers that reported accepting the RDS—82 percent and 51 percent, respectively—may differ because the employers surveyed differed in size between the two surveys. According to the 2005 Mercer survey, smaller employers may have such a limited number of Medicare-eligible retirees that they do not believe the RDS would be worth the cost and administrative burden of applying for it. Furthermore, experts we interviewed told us that a minimum of 50 to 100 retirees is needed to make it worthwhile for employers to apply for the RDS. Data from CMS show that more than 3,900 sponsors, representing approximately 7 million retirees, were approved for the RDS for 2006. The number of retirees represented by sponsors that year ranged widely, from 1 to 444,818, with a median of 174 retirees. According to CMS data, commercial and government sponsors made up approximately 70 percent of sponsors approved for the RDS and represented approximately 90 percent of retirees covered by the RDS for 2006. Nonprofit, religious, and union sponsors made up the remaining approximately 30 percent of sponsors and approximately 10 percent of retirees covered by the RDS for 2006. For 2007, the Kaiser/Hewitt survey reported that the majority of surveyed employers planned to take the RDS. Specifically, 78 percent of surveyed private sector employers that offered retiree health benefits and had 1,000 or more employees planned to take the RDS for 2007—compared with 82 percent that took the RDS for 2006. CMS preliminary data for 2007 showed that the number of sponsors approved for the RDS decreased somewhat from 2006, to about 3,600 sponsors. CMS officials indicated that the decrease in the number of sponsors between 2006 and 2007 could be explained by a combination of several factors, including mergers by sponsors offering retiree health benefits, differences in the time of year when data were extracted, and the movement of some sponsors from the RDS to other MMA options. According to CMS data, in 2007 the number of retirees represented by sponsors approved for the RDS continued to show a wide range, as in 2006, from 1 to 196,840, with a median of 169 retirees. The percentage of sponsors approved for the RDS by sponsor type, such as commercial or government, remained relatively consistent from 2006 to 2007. (See table 1.) All of the surveys we reviewed reported much smaller percentages of sponsors taking MMA options other than the RDS for 2006. In these surveys, the percentage of sponsors that reported offering supplemental, or "wrap-around," prescription drug coverage in 2006 ranged from 0 to 13 percent. For example, the Mercer survey of private and public employers that offered retiree health benefits and had 500 or more employees reported that 13 percent offered supplemental coverage in 2006. Similarly, among the surveys we reviewed, the percentage of sponsors that reported contracting with a PDP or MA-PD plan ranged from 3 percent to 7 percent. For example, the Kaiser/Hewitt survey reported that 3 percent of surveyed private sector employers that offered retiree health benefits and had 1,000 or more employees contracted with a PDP or MA-PD plan in 2006. In addition, CMS reported that few sponsors applied to offer their own PDP or MA-PD plan for 2006 and 2007.
Specifically, CMS reported that for the 2006 and 2007 contract years, there were 10 approved sponsors that offered their own PDP and none that offered their own MA-PD plan. Public and private sponsors we interviewed reported considering a variety of factors when selecting MMA prescription drug coverage options. Sponsors cited factors such as whether they could offer the same retiree health benefits they offered prior to the MMA, their ability to save on costs, the ease of explaining the option to retirees, the administrative requirements associated with each option, and the extent of information available on the options. When making decisions about which, if any, MMA option to pursue, public sponsors we interviewed were affected by some factors that private sponsors did not face. Sponsors we interviewed told us that when selecting an MMA prescription drug coverage option, they considered the extent to which they would be able to continue to offer the same retiree health benefits they had offered before implementing the MMA option. In general, in order to implement most MMA options other than the RDS, sponsors would likely have to change the prescription drug benefits they offer. For example, sponsors that offer their own PDP or MA-PD plan must generally meet all CMS requirements for Part D plans, such as including specific categories of prescription drugs on their formularies. One sponsor we interviewed also told us that it did not consider the option of paying Part D premiums because that option alone would result in a reduction in the level of prescription drug coverage offered to retirees, compared with coverage offered through the sponsor. In contrast, sponsors that select the RDS option are able to offer the same retiree health benefits they offered prior to the MMA, as long as a sponsor's coverage remains at least as generous as the standard Part D benefit, thus meeting the actuarial equivalence standard to qualify for the RDS. In addition, the final rule implementing the MMA prescription drug benefit that was published in January 2005 gave sponsors flexibility in terms of how they could meet the actuarial equivalence standard. Some of the sponsors and experts we interviewed credited this flexibility with allowing sponsors to meet actuarial equivalence without having to change the retiree health benefits they offered. For example, one sponsor told us that it was able to combine multiple benefit options to meet actuarial equivalence, which allowed the sponsor to collect the RDS for most retirees—including those paying the full cost of their coverage—without making changes to the benefits offered. Prior to the final rule, this sponsor did not plan on collecting the RDS for the group of retirees paying the full cost of coverage because the coverage would not have met the actuarial equivalence standard on its own. Most sponsors we interviewed told us that the ability to offer the same retiree health benefits they offered prior to the MMA was an advantage of the RDS. In addition, experts we interviewed reported that some sponsors are unable to change the benefits they offer in the short term because union contracts prevent them from doing so, thus making the RDS the only MMA option for which they likely would qualify. Sponsors reported that when selecting an MMA option, they considered how the various options would affect their ability to save on costs.
While all of the MMA prescription drug coverage options may provide sponsors with an opportunity for cost savings, the amount of savings may vary based on a sponsor's tax status. For example, in guidance to employers, CMS estimated that the average cost savings to a sponsor that offers its own PDP or MA-PD plan for 2006 would be close to $900 per participating retiree, and the average tax-free payment for sponsors that took the RDS would be $668 per participating retiree. Because RDS payments are tax-exempt—a dollar of tax-free subsidy is worth more than a dollar of taxable savings to a sponsor that pays taxes—CMS estimates indicate that the relative value of savings from the RDS as compared with savings from offering a PDP or MA-PD plan would be greater for private, tax-paying sponsors than it would be for public, non-tax-paying sponsors. In addition, some sponsors said they considered the trade-off between the cost savings associated with the different MMA options and the effect the options would have on the prescription drug benefits sponsors would be able to offer. For example, depending on their tax status, some sponsors might save more money by taking the RDS, while others might save more by offering or contracting with a PDP or MA-PD plan. However, as previously discussed, while most MMA options likely require a change of benefits, the RDS allows sponsors to continue offering the benefits they offered prior to the implementation of the MMA as long as the benefit is at least actuarially equivalent to the Part D benefit. In one case, a sponsor we interviewed reported that it chose the RDS even though it could have reduced costs by choosing one of the other MMA options. As one expert explained, the RDS allows sponsors to save money without significantly changing their retiree health plans. Sponsors also reported considering how easy it would be to explain an option to retirees. In particular, sponsors we interviewed told us that they considered how benefit changes made as a result of implementing the various MMA options would complicate communications with retirees. For example, one sponsor we interviewed indicated that a disadvantage of some MMA options was that they would require a great effort to communicate changes to retirees, who range in age from 50 to 105 and who might find benefit changes difficult to understand. Conversely, sponsors that take the RDS are able to preserve their benefit structure and may find it easier to communicate this option to retirees, according to CMS. In addition, depending on the option they choose, sponsors have to meet different MMA requirements for communicating information about the options to retirees. For example, sponsors that take the RDS are required to explain how their prescription drug coverage compares with the Medicare Part D benefit. In contrast, sponsors that offer their own PDP or MA-PD plan are required to meet stricter CMS communication requirements for Part D plans—such as developing and sending more detailed information about prescription drug coverage to retirees. According to CMS and the experts and sponsors we interviewed, each option also has different administrative requirements, some of which consume considerable time and resources, so sponsors considered these requirements as well when selecting an option. For example, according to CMS, sponsors that offer their own PDP or MA-PD plan are required to calculate "true out-of-pocket" costs and adjust premiums for low-income retirees, among other administrative requirements.
One sponsor we interviewed that offered its own PDP for 2006 indicated that it took 11 full-time employees and 13 part-time employees over 15,000 hours to implement the PDP. Conversely, according to CMS, sponsors that take the RDS are not required to calculate true out-of-pocket costs, adjust premiums for low-income retirees, or meet many of the other administrative requirements required of other options. Some sponsors we interviewed told us that the RDS would be administratively easy or easier than other MMA options, although many reported some first-year implementation issues, such as issues in submitting a list of eligible retirees to CMS, which made administration of the RDS more difficult than originally anticipated. Sponsors also reported that the extent of available information regarding the MMA options at the time they needed to make decisions was a factor they considered in selecting an option. CMS did not approve PDP and MA-PD plans until September 2005—the same month in which sponsors had to apply for the RDS for plans ending in 2006. Some sponsors we interviewed reported that they did not have enough information about the PDP and MA-PD plan options at the time they had to make their decision for 2006. For example, one sponsor we interviewed that took the RDS told us that there were too many unknowns at the time it had to make its decision for 2006 and that if the sponsor wanted to make changes to its retiree health benefits, it would need to provide a transition period for retirees in order to prepare them for plan changes. In addition, according to the 2005 Mercer survey, the timing of the plan and rate information available from health plans in the Medicare market was a key factor that led many employers to seek the RDS or to delay taking any action for 2006. When selecting an option for 2007, sponsors we interviewed continued to have concerns about the extent of the information available about the PDP and MA-PD plans. For example, one sponsor we interviewed told us that while there was better information available when it had to make its decision for 2007 compared with 2006, the sponsor still did not have a full year’s worth of data on PDPs when it had to make its decision. As the markets for PDPs and MA-PD plans mature and more detailed information becomes available, the availability of information on the various MMA prescription drug coverage options may become less of a factor in future years. According to one expert we interviewed, when employers are making their decisions for 2008, there should be a full year of information on the MMA prescription drug coverage options so that sponsors will be able to make more fully informed decisions. When making decisions about which, if any, MMA option to pursue, public sponsors may have to consider some factors that private sponsors do not. For example, some public sponsors may be influenced at the state level to either offer health insurance or choose a certain MMA option. One public sponsor we interviewed was directed by the budget committee of its state legislature to take the RDS in 2007 even though the sponsor—a state retirement system—had concluded that contracting with a PDP would allow the sponsor to decrease premiums for the state, contracting agencies, and some enrollees; decrease prescription drug copayments for enrollees; or both.
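That public-sponsor calculus reflects the tax-status arithmetic described above, which can be made concrete with a back-of-the-envelope sketch. The dollar figures below are the CMS estimates cited earlier ($668 tax-free RDS payment versus roughly $900 in savings from offering a PDP or MA-PD plan); treating the $900 as ordinary taxable savings and using a 35 percent marginal tax rate are assumptions made purely for illustration.

```python
# Back-of-the-envelope comparison of MMA options by sponsor tax status.
# Dollar figures are the CMS estimates cited in the text; the tax rate is illustrative.
RDS_PAYMENT = 668.0   # average tax-free RDS payment per participating retiree (2006)
PDP_SAVINGS = 900.0   # CMS's approximate savings from offering a PDP or MA-PD plan

def pretax_equivalent(tax_free_amount: float, marginal_tax_rate: float) -> float:
    """Pre-tax savings a sponsor would need to match a tax-free payment."""
    return tax_free_amount / (1.0 - marginal_tax_rate)

for rate, kind in [(0.35, "private, tax-paying sponsor (assumed 35% rate)"),
                   (0.00, "public, non-tax-paying sponsor")]:
    equivalent = pretax_equivalent(RDS_PAYMENT, rate)
    better = "RDS" if equivalent > PDP_SAVINGS else "PDP/MA-PD"
    print(f"{kind}: RDS worth ~${equivalent:,.0f} pre-tax -> {better} saves more")
```

On these assumptions the tax exemption alone can flip which option saves more, which is consistent with the observation that the relative value of the RDS is greater for tax-paying sponsors.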
As we stated earlier in this report, CMS estimates indicate that the relative value of savings from the tax-free RDS, as compared with savings from offering a PDP or MA-PD plan, for example, would be greater for private, tax-paying sponsors than it would be for public, non-tax-paying sponsors. In addition, OPM, which administers FEHBP, opted to continue offering prescription drug coverage to retirees without taking the RDS or any of the other MMA options. We reported previously that OPM officials told us OPM did not apply for the RDS for FEHBP because they said the intent of the RDS was to encourage sponsors to continue offering prescription drug coverage to enrolled Medicare beneficiaries, which all FEHBP plans were already doing. As such, OPM officials told us, the government would be subsidizing itself to provide coverage for prescription drugs to Medicare-eligible federal employees and retirees. Sponsors’ decisions regarding the various MMA options appear to have resulted in the provision of retiree health benefits remaining relatively unchanged in the short term, although the effect over the longer term on the provision of health benefits to retirees is unclear. The short-term effect of sponsors’ decisions appears to have resulted in benefits remaining relatively unchanged, in part because the majority of sponsors continued to offer prescription drug benefits and accepted the RDS during the first 2 years this option was offered. In addition, according to the 2005 Mercer survey, 72 percent of employers with 500 or more employees reported that the MMA options would have no effect on their ability to provide retiree health coverage. Similarly, many sponsors we interviewed told us that they did not make changes to their retiree health benefits—including decreasing coverage—in direct response to the MMA. Only one of the sponsors we interviewed that selected the RDS for 2006 reported making any changes to its benefits to meet the RDS actuarial equivalence standard. This sponsor told us it eliminated one of its plans that did not meet CMS’s actuarial equivalence standard for the RDS, but the sponsor said it moved all affected Medicare-eligible retirees into coverage that did qualify for the RDS. In addition, some sponsors we interviewed told us that they shared part of the subsidy they received from accepting the RDS with retirees by reducing retiree premiums. Furthermore, the 2005 Mercer survey reported that only 3 percent of employers with 500 or more employees indicated they were likely to terminate drug coverage for Medicare-eligible retirees—rather than choosing one of the MMA options—in response to the availability of the Part D prescription drug benefit. While, in the short term, sponsors’ decisions regarding the MMA options resulted in benefits remaining relatively unchanged, the effect over the longer term of sponsors’ decisions on the provision of employment-based retiree health benefits is unclear. Experts we interviewed differed in their assessments of what the effect is likely to be over the longer term. In particular, some experts we interviewed indicated that the MMA may extend the amount of time that sponsors offer benefits without reducing coverage. Furthermore, one sponsor we interviewed indicated that the RDS increased the number of years that its retiree health benefits program would be solvent. 
On the other hand, other experts said that it was possible that the availability of the Medicare Part D benefit may make it more likely that sponsors will stop offering prescription drug benefits for retirees. Nearly all experts we interviewed told us that it was unlikely that an employer or other potential sponsor that did not offer retiree prescription drug coverage prior to the MMA would begin sponsoring these benefits in response to the new options resulting from the MMA. According to experts, employers are not planning to improve or expand retiree health coverage and do not want the additional financial liability of providing these benefits. Furthermore, it is unclear to what extent sponsors will continue to select the same MMA option in the future. For example, the 2006 Kaiser/Hewitt survey reported that of those respondents that accepted the RDS for 2006, only 54 percent said they were very or somewhat likely to accept the RDS for 2010. Furthermore, 25 percent said they did not know whether they would accept the RDS for 2010. Most of the sponsors that we interviewed that took the RDS for 2006 and planned to take the RDS for 2007 said they were unsure which option they would be taking for 2008. The 2006 Kaiser/Hewitt survey also reported that employers that are unlikely to take the RDS in the future are considering a number of other MMA options, including contracting with a PDP to offer enhanced coverage. To the extent that sponsors that have accepted the RDS select other MMA options in subsequent years, sponsors’ provision of retiree health benefits may change. In addition to the MMA options, a host of other long-standing factors may affect a sponsor’s provision of health benefits to retirees. These include the existence of union contracts that may require the provision of certain health benefits, increasing costs for health care, the degree of industry competition, and the strength of sponsors’ financial conditions. For example, in 2005 we reported that sponsors that negotiated retiree health benefits with unions might not have as much flexibility to change these benefits prior to negotiations. Sponsors we interviewed also cited the competitiveness of the industry as another factor that affected retiree coverage, with one sponsor stating that it strove to have benefit packages that were in line with the overall market as well as the specific industry. We provided a draft of this report to CMS and experts on retiree health benefits at the Employee Benefit Research Institute, Hewitt Associates, Mercer Health & Benefits, and the National Opinion Research Center. In its written comments on a draft of this report, CMS stated that the report provided an excellent summary of available information concerning the choices sponsors made among MMA options. (CMS’s comments are included in app. IV.) CMS agreed with the finding that the majority of sponsors reported continuing to offer prescription drug coverage and accepting the RDS for 2006, with smaller percentages of sponsors reporting selecting other MMA options. In commenting on the draft report’s identification of several factors that may have contributed to the differences in the surveys’ reported percentages of employers accepting the RDS for 2006, CMS suggested an additional factor that may have contributed to the differences in the survey finding. 
Specifically, CMS said that some of the surveys reported what sponsors said they intended to do or were considering doing at the time of the survey, and it was possible that a portion ultimately decided not to pursue those options. However, both the 2006 Kaiser/Hewitt survey—which reported that 82 percent of surveyed employers accepted the RDS for 2006—and the 2006 Mercer survey—which reported that 51 percent of surveyed employers accepted the RDS for 2006—were reporting decisions surveyed employers said they had already made, not what they planned to do. Therefore, it is not likely this factor would explain the difference in the survey results. CMS also agreed with the draft report’s related finding regarding the number of sponsors participating in the RDS program. CMS suggested that we identify the 2007 data as preliminary, since they were compiled in February. We have made this clarification to the final report. CMS stated that it agreed with the report’s second finding, that sponsors considered a variety of factors when selecting which MMA prescription drug coverage options to pursue, with one clarification. The draft report stated that, in general, in order to implement most MMA options other than the RDS, sponsors would likely have to change the prescription drug benefits they offer. CMS stated that the report did not fully acknowledge that CMS has used its statutory waiver authority for several MMA options to afford flexibility in benefit design, and as a result, MMA options may require minimal (if any) adjustments to premiums, cost-sharing, and other primary elements of benefit design. The draft report did describe CMS’s authority to waive or modify Part D requirements added by the MMA that hinder the design of, offering of, or enrollment in certain employer- or union-sponsored Part D retiree plans. In response to CMS’s comments, we have included additional information clarifying that CMS has waived or modified Part D requirements for multiple MMA options. However, while CMS has used this waiver authority, our report notes that sponsors may still need to make changes to benefits—such as changing the drugs included on their formularies—and, according to sponsors we interviewed, any changes to benefits can complicate communications with retirees. CMS also agreed with the draft report’s finding that in the short term sponsors’ decisions regarding MMA options resulted in benefits remaining relatively unchanged, but over the longer term the effect is unclear. However, CMS stated that the examples of differing experts’ assessments of the likely effect over the longer term lacked sufficient context to be included in the findings. CMS also stated that there was no indication in the finding of the preponderance of expert opinion in favor of one or the other point of view. Our report states that the effect over the longer term is unclear and that experts we interviewed differed in their assessments of what the effect was likely to be. The report describes both the opinions of experts who said the MMA may extend the amount of time that sponsors offer benefits without reducing coverage and those who said the Medicare Part D benefit may make it more likely that sponsors will stop offering prescription drug benefits for retirees, and there was not a preponderance of opinion for either perspective.
The experts who reviewed the draft report generally indicated that the report provided an accurate portrayal of employment-based retiree health benefits and sponsors’ decisions about the options available under the MMA. CMS and several of these experts also provided technical comments, which we incorporated into the report as appropriate. We will send copies of this report to the Administrator of CMS and interested congressional committees. We will also provide copies to others on request. In addition, the report is available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7119 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. This appendix describes information on employment-based retiree health coverage since the initial mandated GAO study, published in 2005. We reported in 2005 that the long-term decline in employment-based retiree health coverage had leveled off, and retirees were paying an increasing share of the costs. We reported that the percentage of employers offering health benefits to retirees, including those who are Medicare-eligible, had decreased beginning in the early 1990s, but had leveled off by the early 2000s. This leveling off has continued since the initial mandated study. We also reported in 2005 that the percentage of Medicare-eligible retirees aged 65 and older with employment-based coverage remained relatively consistent from 1995 through 2003. Since issuance of our 2005 report, we received data for 2004 and 2005 showing that the overall percentage remained relatively consistent from 2003 through 2005 at about 31 percent, although some modest changes occurred within specific age cohorts. Sponsors continued to respond to increasing costs by implementing strategies that required retirees to pay more for coverage and thus contributed to a gradual erosion of the value and availability of benefits. For example, one employer benefit survey reported that over half of surveyed employers reported increases in retiree contributions to premiums between 2005 and 2006. According to surveys of sponsors of retiree health benefits, the percentage of employers offering health benefits to retirees declined beginning in the early 1990s and then remained relatively stable from the early 2000s through 2005. In our 2005 report, we reported that a series of surveys conducted by Mercer Human Resource Consulting indicated that the percentage of employers with 500 or more employees offering health insurance to retirees who are eligible for Medicare declined from 1993 to 2001, although this decline had leveled off from 2001 through 2004. Data obtained after the publication of our 2005 report showed that this leveling-off trend continued, with approximately 29 percent of employers with 500 or more employees offering the benefits to Medicare-eligible retirees in 2006. (See fig. 1.) We also reported in our 2005 report that a series of surveys conducted by the Kaiser Family Foundation and Health Research and Educational Trust (HRET) estimated that the percentage of employers with 200 or more employees offering retiree health coverage decreased from 46 percent in 1991 to 36 percent in 1993.
This decline leveled off from 1993 through 2004, with approximately 36 percent of employers with 200 or more employees offering such coverage in 2004. Data obtained after our 2005 report showed that this trend continued. According to the Kaiser/HRET survey, approximately 33 percent and 35 percent of employers with 200 or more employees offered retiree health benefits in 2005 and 2006, respectively. For Medicare-eligible retirees specifically, the percentage of employers reporting that they offered health benefits to this group has generally not changed since our 2005 report, in which we reported that 27 percent of employers with 200 or more employees offered coverage, according to Kaiser/HRET. (See fig. 2.) For retirees under age 65, we reported in our 2005 report that coverage showed a steady decline from 1993, when 50 percent of employers with 500 or more employees offered coverage to this group of retirees, to 2001, although this percentage generally leveled off from 2001 through 2004. New data reported by Mercer showed that 39 percent of employers with 500 or more employees offered coverage to these retirees in 2006. The survey data we reviewed for our current report indicated that some types of employers are more likely to provide health benefits to retirees than others. Data on retiree health coverage showed that larger employers, for example, are more likely than smaller employers to offer coverage to retirees, including Medicare-eligible retirees. The 2006 Mercer survey reported that 56 percent of employers with 20,000 or more employees offered coverage to Medicare-eligible retirees, compared with about 22 percent of employers with 500 to 999 employees. The 2006 Kaiser/HRET survey also showed that 54 percent of employers with 5,000 or more employees offered health benefits to retirees, while 35 percent of employers with 200 or more employees offered health benefits to retirees. For smaller employers in the Kaiser/HRET survey—those with 3 to 199 employees—approximately 9 percent offered retiree health benefits. These data are similar to the data reported in our 2005 report, although the percentage of employers with 5,000 or more employees offering health benefits to retirees is slightly lower than in previous surveys. In addition, employers with a union presence continued to be more likely to offer retiree health coverage than employers without a union presence. For example, in the 2006 Kaiser/HRET survey, among employers with 200 or more employees, 50 percent of those employers that had union employees offered health coverage to retirees, compared with 27 percent without union employees. According to federal and employer benefit surveys, certain industries continued to be more likely to offer retiree health coverage than others. For example, the most recent data from the Agency for Healthcare Research and Quality’s Medical Expenditure Panel Survey (MEPS) showed that approximately 88 percent of state entities offered health insurance for retirees aged 65 and older. In addition, new data released from Kaiser/HRET in 2006 showed that 82 percent of state and local government employers with 200 or more employees offered coverage to retirees. Furthermore, these data are similar to the data reported in our 2005 report.
Recent data released by Kaiser/HRET continued to list the transportation/communication/utility industry as the second likeliest industry, after government, to offer health benefits to its retirees, with 52 percent of all employers with 200 or more employees in this industry sector offering health benefits to their retirees. This survey also continued to show, as we reported in 2005, that the industries least likely to offer coverage were health care and retail, with 15 percent and 11 percent, respectively, of employers with 200 or more employees in these industry sectors offering retiree health benefits. In our 2005 report, we stated that the overall percentage of Medicare-eligible retirees and their insured dependents aged 65 and older obtaining employment-based health benefits through a former employer remained relatively consistent from 1995 through 2003, based on data from the U.S. Census Bureau’s Current Population Survey (CPS). Since issuance of that report, we received subsequent data for 2004 and 2005 showing that the overall percentage remained relatively consistent from 2003 through 2005, although some modest changes occurred within specific age cohorts (see fig. 3). According to our analysis of CPS data, for those aged 65 and older, approximately 32 percent had coverage in 1995 and approximately 31 percent had coverage in 2005 (no change from last report). Medicare-eligible retirees and their insured dependents for two groups—those aged 65 through 69 and those aged 80 and older—continued to show approximately the same modest decline and increase, respectively, in the percentage with employment-based health coverage. For those aged 70 through 79, the modest decline reported in our initial report was no longer statistically significant. According to employer benefit surveys and our interviews with sponsors and experts, sponsors have continued to rely on various strategies, as we noted in our 2005 report, for mitigating the increasing costs of providing health benefits to retirees that have contributed to a gradual erosion of the value and availability of health benefits. These strategies included the same strategies identified in our 2005 report: restricting retirees’ eligibility for health benefits; limiting sponsors’ contributions to retirees’ health benefits; and increasing retirees’ copayments, coinsurance, and premiums. Employers participating in the 2006 Kaiser/Hewitt Associates survey reported that between 2005 and 2006 they limited retiree eligibility for health benefits by restricting eligibility to certain groups of retirees and by increasing the age, years of service, or both, required for eligibility. For example, according to 2006 Kaiser/Hewitt survey data, 11 percent of employers that currently offer retiree health benefits reported that they would not provide future employer-subsidized health benefits to a particular group of individuals, such as those hired after January 1, 2006, if they retire under the age of 65. Nine percent of the surveyed employers reported that they would not provide future employer-subsidized health benefits to a particular group of individuals if they retire at age 65 or older. In addition, 4 percent of surveyed employers reported that they raised the age requirements, years of service requirements, or both, for retiree health benefit eligibility for retirees under the age of 65, and 2 percent made such changes for retirees at age 65 or older.
Similarly, one sponsor we interviewed told us about changes the sponsor had made to coverage for future retirees since our 2005 report. This sponsor described coverage beginning January 1, 2007, under which future retirees will have the option to receive a lump sum of money that can then be used to purchase coverage in the individual market at the time of retirement. Data from the 2006 Mercer survey showed that 20 percent of employers with 500 or more employees have implemented limits—often referred to as caps—on contributions to retirees’ health benefits. The survey data also showed that an additional 8 percent of such employers were considering such caps. Caps were most common among the employers in the Mercer study with the largest number of employees; 47 percent of employers with 20,000 or more employees had implemented caps and 4 percent were considering implementing caps. Data from the 2006 Kaiser/Hewitt survey showed that 50 percent of employers with 1,000 or more employees reported having capped contributions to the health benefits for Medicare-eligible retirees. Of these employers, 61 percent reported hitting the cap and another 20 percent expected to hit the cap within the next 1 to 3 years. One sponsor we interviewed with financial caps in place but not yet reached told us that sponsors generally have two options once they reach these spending limits: (1) negotiate plan design changes to bring spending under the limits or (2) pass costs on to retirees through higher premiums. More than one-fourth of employers participating in the 2006 Kaiser/Hewitt survey reported that between 2005 and 2006 they increased required out-of-pocket contributions from retirees and increased the use of other cost-sharing strategies. In addition, some of these strategies were intended to address the costs of providing prescription drug coverage to retirees. For example, according to the 2006 Kaiser/Hewitt survey, 25 percent of employers raised copayments or coinsurance for prescription drugs for retirees aged 65 and older, and 10 percent of employers replaced fixed dollar copayments for prescription drugs with coinsurance, which can increase retirees’ out-of-pocket expenses as the total cost of the benefit rises. More than one-half of employers in the 2006 Kaiser/Hewitt survey also reported that between 2005 and 2006 they increased retiree contributions to health care premiums for retirees aged 65 and older. However, the survey reported a lower rate of increase in the amount that retirees aged 65 and older contributed to premiums as compared to the amount that retirees under age 65 contributed to premiums, which the survey largely attributed to the Medicare Part D program. Sponsors we interviewed also told us that they had increased retiree premiums to compensate for the trend in increasing health care costs. For example, one public sponsor told us that premiums for its coverage designed for active workers and retirees under the age of 65 increased 9 percent for 2005 and 2006. Finally, according to the 2006 Mercer survey, about 41 percent of retiree health plans for employers with 500 or more employees required Medicare-eligible retirees to pay the full cost of their employment-based health benefits plan.
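The cap and premium-trend points above share a simple mechanism worth spelling out: once a sponsor’s contribution is capped, all further cost growth falls on retirees. A minimal sketch follows; every number is invented for illustration, and the 9 percent rate simply borrows the premium-increase example cited above rather than any actual cost projection.

```python
# With a fixed employer cap, retirees absorb all cost growth above the cap.
cap = 4000.0         # employer contribution cap per retiree (invented)
total_cost = 5000.0  # starting annual cost per retiree (invented)
trend = 0.09         # 9% annual growth, borrowed from the premium example above

for year in range(1, 6):
    total_cost *= 1 + trend
    retiree_share = max(total_cost - cap, 0.0)
    print(f"Year {year}: total ${total_cost:,.0f}, retiree pays ${retiree_share:,.0f}")
```

Under these assumptions the retiree’s share grows several-fold over five years while the sponsor’s outlay stays fixed, which is the dynamic the sponsors quoted above described when a cap is hit.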
The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) required GAO to describe both (1) alternative approaches to providing employment-based retiree health coverage suggested by retiree health benefit sponsors and (2) recommendations by sponsors and other experts for improving and expanding such coverage. In this appendix we present a range of alternative approaches to providing employment-based retiree health coverage and options for expanding and improving these alternative approaches, as described by retiree health benefit sponsors and experts we interviewed. To obtain this information, we interviewed officials from 15 private and public sponsors of retiree health benefits and several experts on areas relating to the provision of employment-based retiree health coverage, including five benefit consulting firms; six organizations, including one representing unions, one representing multiemployer plans, two representing large employers, and two representing health plans; one professional organization for actuaries; and other research organizations. The alternative approaches we describe are not intended to be a comprehensive list but rather represent the approaches that were mentioned by the sponsors and experts we spoke with. Many of the alternative approaches to providing employment-based retiree health coverage that were described to us rely on tax advantages that provide an incentive for a sponsor, an employee, or both to set aside funds for future health care needs. Some of these tax-advantaged approaches are made available as part of consumer-directed health plans, which usually consist of a savings account—such as a health savings account (HSA) or health reimbursement arrangement (HRA)—and a health plan with a high deductible. In addition to consumer-directed health plans, there are other tax-advantaged accounts and trusts that do not require enrollment in a high-deductible health plan, such as a voluntary employees’ beneficiary association (VEBA). Some sponsors and experts described a third category of arrangement, generally without tax advantages, that assists sponsors in providing retiree health care coverage, such as establishing savings accounts that provide a sponsor’s match to the employee’s contribution. Although there is no requirement that retiree health benefit sponsors prefund their retiree health benefit plans, many of the approaches sponsors and experts described are prefunded vehicles—wherein the sponsor directly contributes, rather than earmarks, dedicated funds to an account or trust. The alternative approaches these sponsors and experts described are listed in table 2. In addition to describing examples of the alternative approaches to traditional employment-based retiree health coverage, sponsors and experts we interviewed provided a variety of recommendations for improving and expanding these approaches. For example, some sponsors and experts recommended permitting tax-advantaged contributions by Medicare-eligible retirees to HSAs and allowing stand-alone HSAs that do not require an accompanying high-deductible health plan. Another expert also suggested increasing the maximum annual contribution that is currently allowed for an HSA and expanding the ability of retirees to use HSA funds to pay for health insurance premiums.
One sponsor we interviewed highlighted the increased portability of an HSA as a factor in the sponsor’s decision to stop offering an HRA at the end of 2006 and to begin instead to offer an HSA option for early retirees and active workers. In addition, according to one expert we interviewed, because sponsors are not required to make unused HRA balances available to employees when they change jobs, individuals may have an incentive to spend down accumulated funds. Several sponsors and other experts also suggested creating additional tax-advantaged arrangements for retiree health benefit sponsors. For example, one expert suggested allowing the tax-free transfer of funds from individual tax-preferred vehicles—such as 401(k) retirement accounts—and pensions to pay for health care costs, including health care premiums. Overall, a majority of the sponsors we interviewed indicated that sponsors are willing to use or consider alternative approaches, such as the ones described above, to assist retirees with their future health care needs without increasing their costs. Indeed, one sponsor indicated that it would support anything that would expand its ability to offer and fund retiree health coverage, such as additional subsidies or favorable tax treatment. Moreover, one expert indicated that alternative approaches such as HSAs offer a level of predictability that allows sponsors to sustain their retiree benefit packages. One reason for this predictability is that contributions by the sponsor in many of these alternative approaches are limited to a defined contribution. Most alternatives that sponsors and experts described in our interviews were established (or are currently under consideration) for active employees to use for current and future expenses rather than for those who are currently retired. For example, among the alternative approaches described, few of the sponsors we interviewed indicated that they make such approaches available to current retirees. Specifically, only one sponsor we interviewed told us that it makes consumer-directed health plans available to current retirees. Seven sponsors told us that their current use (or consideration) of consumer-directed health plans is targeted to active employees for current and future health care costs. Two experts we interviewed, however, noted flaws with using consumer-directed health plans as adequate savings mechanisms for retiree health care costs because this approach assumes that active employees will not need the account funds for current health care expenses. Similarly, one sponsor noted that because many of the alternative approaches are geared toward active employees, they were less likely to be effective solutions for retiree health care needs. This appendix describes in detail the scope and methodology used to address the three report objectives—(1) which MMA prescription drug coverage options sponsors selected, (2) the factors they considered in selecting these options, and (3) the effect these decisions may have on sponsors’ provision of employment-based health benefits for retirees. It also addresses the mandated update on employment-based retiree health coverage since our 2005 report (reported in app. I) and sponsors’ and others’ views on alternative approaches for the provision of employment-based retiree health coverage that may help maintain, expand, or improve retiree health coverage (reported in app. II).
Because some of the methodologies apply to more than one objective or appendix, we have organized this appendix by data source. Specifically, this appendix briefly describes the methodologies by objective and then discusses (1) surveys of employment-based health benefits, (2) federal surveys, (3) data from the Centers for Medicare & Medicaid Services (CMS), and (4) interviews with sponsors and other experts. To determine which MMA prescription drug coverage options sponsors selected, we reviewed data from four surveys collected by three benefit consulting firms on the options that sponsors reported selecting for 2006 and the options that sponsors reported that they planned to select for 2007. One survey is an annual survey of employer health benefits, including private and public sector employers, conducted from the early 1990s through 2006, and one is a private sector survey on retiree health benefits conducted in 2006. We obtained and analyzed data provided by CMS on the number and characteristics of sponsors that were approved for the retiree drug subsidy (RDS) for plans ending in 2006 and 2007. To describe the factors that sponsors considered in selecting the MMA options and the effect their decisions about the options may have on the provision of benefits for retirees, we relied on two of the employer benefit surveys and reviewed documents from the literature on the factors that sponsors may consider in selecting the MMA options. We also interviewed private and public sponsors and experts on sponsors’ decisions regarding the MMA options and employment-based retiree health benefits, including benefit consultants and officials at health plans, groups representing large employers, and other organizations. To update information on employment-based retiree health coverage since our 2005 report, we reviewed data from employer benefit surveys and data from three large federal surveys that contained information either on Medicare beneficiaries or on the percentage of public sector employers that offer retiree health benefits. We also obtained this information in our interviews with sponsors and experts. We focused on trends particularly affecting Medicare-eligible retirees, but in some cases when information specific to Medicare-eligible beneficiaries was not available, we reported on trends affecting all retirees, including those who were under age 65 and those who were eligible for Medicare. To describe alternative approaches for the provision of employment-based retiree health coverage, we reviewed data from several of the same sources used to address the other report objectives, including employer benefit surveys, reports and analyses from the literature, and interviews with sponsors and experts. We relied on data from annual surveys of employment-based health benefit plans. Kaiser/HRET and Mercer each conduct an annual survey of employment-based health benefits, including a section on retiree health benefits. Each survey has been conducted for at least the past decade, including 2006. We also used data from a survey focused solely on retiree health benefits that Kaiser/Hewitt conducted in 2006. For each of these surveys of employment-based benefits, we reviewed the survey instruments and discussed the data’s reliability with the sponsors’ researchers and determined that the data were sufficiently reliable for our purposes. We also reviewed two 2006 surveys by The Segal Company.
The first represented a nonrandom sample of multiemployer plans from a range of industries and geographic regions; the second collected data from a nonrandom sample of public sponsors that offered prescription drug coverage to Medicare-eligible retirees. Since 1999, Kaiser/HRET has surveyed a sample of employers each year through telephone interviews with human resource and benefits managers and published the results in its annual report—Employer Health Benefits. Kaiser/HRET selects a random sample from a Dun & Bradstreet list of private and public sector employers with three or more employees, stratified by industry and employer size. It attempts to repeat interviews with some of the same employers that responded in prior years. For the most recently completed annual survey—conducted from January to May 2006 and published in September 2006—2,122 employers responded to the full survey, giving the survey a 48 percent response rate. In addition, Kaiser/HRET asked at least one question of all employers it contacted—“Does your company offer or contribute to a health insurance program as a benefit to your employees?”—to which an additional 1,037 employers, or cumulatively about 72 percent of the sample, responded. By using statistical weights, Kaiser/HRET is able to project its results nationwide. Kaiser/HRET uses the following definitions for employer size: (1) small—3 to 199 employees—and (2) large—200 or more employees. In some cases, Kaiser/HRET reported information for additional categories of small and large employer sizes. Since 1993, Mercer has surveyed a stratified random sample of employers each year through mail questionnaires and telephone interviews and published the results in its annual report—National Survey of Employer-Sponsored Health Plans. Mercer selects a random sample of private sector employers from a Dun & Bradstreet database, stratified into eight categories, and randomly selects public sector employers—state, county, and local governments—from the Census of Governments. The random sample of private sector and government employers represents employers with 10 or more employees. For the 2006 survey, which was published in 2007, Mercer mailed questionnaires to employers with 500 or more employees in July 2006 along with instructions for accessing a Web-based version of the survey instrument, another option for participation. Employers with fewer than 500 employees, which, according to Mercer, historically have been less likely to respond using a paper questionnaire, were contacted by phone only. Telephone follow-up was conducted with employers with 500 or more employees in the random sample, and some mail and Web respondents were contacted by phone to clear up inconsistent or incomplete data. A total of 2,136 employers responded to the complete survey, yielding a response rate of 24 percent. By using statistical weights, Mercer projects its results nationwide and for four geographic regions. The Mercer survey report contains information for large employers—500 or more employees—and for categories of large employers with certain numbers of employees as well as information for small employers (fewer than 500 employees). We have excluded from our analysis Mercer’s 2002 data on the percentage of employers that offer retiree health plans because Mercer stated in its 2003 survey report that the 2002 data were not comparable to data collected in other years because of a wording change on the 2002 survey questionnaire.
In 2003, Mercer modified the survey questionnaire again to make the data comparable to prior years (except 2002). The 2006 Kaiser/Hewitt study—Retiree Health Benefits Examined: Findings from the Kaiser/Hewitt 2006 Survey on Retiree Health Benefits—is based on a nonrandom sample of employers because there is no database that identifies all private sector employers offering retiree health benefits from which a random sample could be drawn. Kaiser/Hewitt used previous Hewitt survey respondents and its proprietary client databases, which list private sector employers potentially offering retiree health benefits. Kaiser/Hewitt conducted the survey online from June through October 2006 and obtained data from 302 large (1,000 or more employees) employers. Its results were published in December 2006. According to the survey, these employers represented 36 percent of all Fortune 100 companies and 22 percent of all Fortune 500 companies. They accounted for more than one quarter of the Fortune 100 companies with the largest retiree health liability in 2005. Because the sample is nonrandom and does not include the same sample of companies and plans each year, survey results for 2006 cannot be compared with results from prior years. We reviewed two nonrandom surveys conducted and published by The Segal Company in 2006 that report on responses by non-private-sector sponsors to the availability of prescription drug coverage under Medicare Part D. The first survey, which was published in spring 2006, was based on data collected in January and February 2006 from a nonrandom sample of 273 multiemployer plans that provided prescription drug coverage to Medicare-eligible retirees. The 273 multiemployer plans that participated in the survey are Segal clients and, according to Segal, represented a range of industries and geographic regions. The second survey, which was published in summer 2006, was conducted by Segal in conjunction with the Public Sector HealthCare Roundtable, a national coalition of public sector health care purchasers. This survey was based on data collected in May 2006 from a nonrandom sample of 109 public sponsors, including state and local sponsors, 82 of which offered prescription drug coverage to Medicare-eligible retirees. We analyzed three federal surveys containing information either on Medicare beneficiaries or on the percentage of public sector employers that offer retiree health benefits. We obtained information on retired Medicare beneficiaries’ sources of health benefits coverage—including former employers and unions—from the CPS, conducted by the U.S. Census Bureau for the Bureau of Labor Statistics. We obtained data on the sources of coverage for all health care expenditures and for prescription drug expenditures for retired Medicare beneficiaries from the Medicare Current Beneficiary Survey (MCBS), sponsored by CMS. We obtained data on the percentage of public sector employers that offer retiree health benefits from the Medical Expenditure Panel Survey (MEPS), sponsored by the Agency for Healthcare Research and Quality. Each of these federal surveys is widely used for policy research, and we reviewed documentation on the surveys to determine that they were sufficiently reliable for our purposes. We analyzed the Annual Supplement of the CPS for information on the demographic characteristics of Medicare-eligible retirees and their access to insurance. The survey is based on a sample designed to represent a cross section of the nation’s civilian noninstitutionalized population. 
In the 2006 CPS Annual Supplement, about 83,800 households were included in the sample for the survey, a significant increase in sample size from about 60,000 households prior to 2002. The total response rate for the 2006 CPS Annual Supplement was about 83 percent. We present only those differences that were statistically significant at the 95 percent confidence level. The CPS asked whether a respondent was covered by employer- or union-sponsored, Medicare, Medicaid, private individual, or certain other types of health insurance in the last year. The CPS questions that we used for employment status, such as whether an individual is retired, are similar to the questions on insurance status. Respondents were considered employed if they worked at all in the previous year and not employed only if they did not work at all during the previous year. The CPS asked whether individuals had been provided employment-based insurance “in their own name” or as dependents of other policyholders. We selected Medicare-eligible retirees aged 65 and older who had employment-based health insurance coverage in their own names because this coverage could most directly be considered health coverage from a former employer. For these individuals, we also identified any retired Medicare-eligible dependents aged 65 or older, such as a spouse, who were linked to this policy. We used two criteria to determine that these policies were linked to the primary policyholder: (1) the dependent lived in the same household and had the same family type as the primary policyholder and (2) the dependent had employment-based health insurance coverage that was “not in his or her own name.” MCBS is a nationally representative sample of Medicare beneficiaries that is designed to determine for Medicare beneficiaries (1) expenditures and payment sources for all health care services, including noncovered services, and (2) all types of health insurance coverage. The survey also relates coverage to payment sources. The MCBS Cost and Use file links Medicare claims to survey-reported events and provides expenditure and payment source data on all health care services, including those not covered by Medicare. We used the 2004 MCBS Cost and Use file, the most current data available, to determine the percentage of Medicare-eligible beneficiaries obtaining supplemental coverage from a former employer or union. We also used the MCBS data to determine the percentages of all health care expenditures and of prescription drug expenditures for retired Medicare beneficiaries that were paid by employment-based insurance. MEPS consists of four surveys and is designed to provide nationally representative data on health care use and expenditures for U.S. civilian noninstitutionalized individuals. We used data from one of the surveys, the MEPS Insurance Component, to identify the percentage of state entities that offered retiree health benefits in 2004. Insurance Component data are collected through two samples. The first, known as the “household sample,” is a sample of employers and other insurance providers (such as unions and insurance companies) that were identified by respondents in the MEPS Household Component, another of the four surveys, as their source of health insurance. The second sample, known as the “list sample,” is drawn from separate lists of private and public employers. The combined samples provide a nationally representative sample of employers. The target size of the list sample is approximately 40,000 employers each year.
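Returning to the CPS linkage step above, the two criteria are mechanical enough to express as a simple filter. This is a sketch only: the field names are invented stand-ins rather than actual CPS variable names, and real CPS processing uses coded household, family, and coverage variables that this simplifies considerably.

```python
# Flag retired Medicare-eligible dependents (65+) linked to a primary
# policyholder's employment-based coverage, per the two criteria in the text.
def is_linked_dependent(dependent: dict, policyholder: dict) -> bool:
    """Apply the report's two linkage criteria (field names are illustrative)."""
    same_household = dependent["household_id"] == policyholder["household_id"]
    same_family_type = dependent["family_type"] == policyholder["family_type"]
    covered_not_in_own_name = (dependent["employment_based_coverage"]
                               and not dependent["coverage_in_own_name"])
    return same_household and same_family_type and covered_not_in_own_name

# Example: a spouse in the same household and family unit, covered as a dependent.
holder = {"household_id": 7, "family_type": "married-couple"}
spouse = {"household_id": 7, "family_type": "married-couple",
          "employment_based_coverage": True, "coverage_in_own_name": False}
assert is_linked_dependent(spouse, holder)
```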
We analyzed data provided by CMS on the number and characteristics of sponsors approved for the RDS for plans ending in 2006 and of sponsors approved for the RDS for plans ending in 2007. The data include selected variables from applications that were approved for the RDS. For plans ending in 2006, the CMS data are current as of September 11, 2006; for plans ending in 2007, the CMS data are current as of February 16, 2007. Based on conversations with CMS and data reliability checks that we performed, we have determined that these data were sufficiently reliable for our purposes. To learn more about retiree health benefit trends, the factors that sponsors considered in selecting the MMA options, the effect that sponsors’ decisions about the MMA options may have on the provision of health benefits for retirees, and alternative approaches for the provision of employment-based retiree health coverage, we interviewed 13 of the 15 private and public sector sponsors of employment-based retiree health benefits that we interviewed for the initial mandated study published in 2005. In our 2005 study, we interviewed officials of 12 Fortune 500 employers that provided retiree health benefits; the Office of Personnel Management, which administers the Federal Employees Health Benefits Program; and two state retirement systems. To select the 12 Fortune 500 employers in our 2005 study, we judgmentally selected 10 employers from a stratified random sample of 50 Fortune 500 employers. We interviewed at least 1 employer from each of the five groups of 100 Fortune 500 employers that were stratified on the basis of annual revenues. In addition to considering revenues, where data were available, we considered each employer’s industry, number of employees, postretirement benefit obligations, preliminary MMA option decision as reported on its annual financial statement, and union presence when making our selection. We also interviewed officials at two additional Fortune 500 employers at the recommendation of a benefit consultant. In our 2005 study, we judgmentally selected two large states’ retiree health benefits systems on the basis of a review of selected state data and referrals from a benefit consultant that works with public sector clients. For our current study, we also interviewed 2 sponsors that chose to offer their own Medicare Part D plans instead of implementing the RDS or another MMA option. These sponsors were not interviewed for our 2005 report. To obtain broader-based information about retiree health benefit trends, MMA options, and alternative approaches for the provision of employment-based retiree health coverage, we interviewed benefit consultants and other experts at several other organizations. Specifically, we interviewed representatives of five large employer benefit consulting firms. Benefit consultants help their clients, which include private sector employers, public sector employers, or both, develop and implement human resource programs, including retiree health benefit plans. While most of these benefit consulting firms’ clients were large Fortune 500 or Fortune 1,000 employers, some also had smaller employers as clients. One benefit consulting firm that we interviewed, in particular, provided actuarial, employee benefit, and other services to a range of public sector clients, including state and local governments, statewide retirement systems and health plans, and federal government agencies. It also provided consulting services to multiemployer plans. 
We also interviewed officials from the American Academy of Actuaries, America’s Health Insurance Plans and its members, AARP, the American Benefits Council, the BlueCross BlueShield Association and its members, the Employee Benefit Research Institute, the National Business Group on Health, and the National Coordinating Committee for Multiemployer Plans. Finally, we reviewed other available literature on retiree health benefit trends, factors affecting sponsors’ decisions about the MMA options, and alternative approaches for the provision of employment-based retiree health coverage. In addition to the contact named above, Kristi A. Peterson, Assistant Director; George Bogart; Kevin Dietz; Laura Sutton Elsberg; Krister Friday; Gregory Giusto; Elizabeth T. Morrison; Giao N. Nguyen; and Suzanne Worth made key contributions to this report. | The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) created a prescription drug benefit for beneficiaries, called Medicare Part D, beginning in January 2006. The MMA resulted in options for sponsors of employment-based prescription drug benefits, such as a federal subsidy payment--the retiree drug subsidy (RDS)--when sponsors provide benefits meeting certain MMA requirements to Medicare-eligible retirees. The MMA required GAO to conduct two studies on trends in employment-based retiree health coverage and the MMA options available to sponsors. The first study, Retiree Health Benefits: Options for Employment-Based Prescription Drug Benefits under the Medicare Modernization Act (GAO-05-205), was published February 14, 2005. In this second study, GAO determined which MMA prescription drug coverage options sponsors selected, the factors they considered in selecting these options, and the effect these decisions may have on the provision of employment-based health benefits for retirees. GAO identified options that sponsors selected using data from employer benefit surveys and the Centers for Medicare & Medicaid Services (CMS), the federal agency that administers Medicare. To obtain sponsors' views about the factors they considered and the effects of their decisions, GAO also interviewed private and public sector sponsors and experts. According to survey data GAO reviewed, a majority of retiree health benefit sponsors reported that for 2006 they continued to offer prescription drug coverage and accepted the RDS. However, the size of the reported majority differed across the surveys. For example, one survey of private sector sponsors with 1,000 or more employees found that 82 percent of these sponsors accepted the RDS for 2006. Another survey of private and public sponsors found that 51 percent of surveyed sponsors with 500 or more employees accepted the RDS for 2006. Data from CMS showed that more than 3,900 sponsors, representing about 7 million retirees, were approved for the RDS for 2006. According to the surveys GAO reviewed, much smaller percentages of sponsors reported selecting other MMA options for 2006. For 2007, according to one survey, 78 percent of surveyed employers reported that they planned to apply for the RDS for that year. CMS data showed that about 3,600 sponsors were approved for the RDS for 2007. Public and private sponsors GAO interviewed reported considering a variety of factors when selecting MMA prescription drug coverage options, including whether they could offer the same retiree health benefits they offered prior to the MMA and their ability to save on costs. 
In general, in order to implement most MMA options, sponsors would likely have to change the prescription drug benefits they offer. For example, sponsors that offer their own Medicare Part D plan must generally meet all CMS requirements for Part D plans, such as providing coverage for specific categories of prescription drugs. In contrast, sponsors that select the RDS option can offer the same retiree health benefits they offered prior to the MMA, as long as a sponsor's coverage remains at least actuarially equivalent to the standard Part D benefit. When deciding which, if any, options to pursue, public sponsors were affected by some factors that did not affect private sponsors. In the short term, sponsors' decisions regarding the MMA options appear to have resulted in benefits remaining relatively unchanged, in part because a majority of surveyed sponsors reported that they continued to offer prescription drug benefits and accepted the RDS the first 2 years the RDS was offered. Over the longer term, the effect of sponsors' decisions about the MMA options is unclear. For example, some experts GAO interviewed indicated that the MMA may extend the amount of time that sponsors offer benefits without reducing coverage, while other experts said the availability of the Medicare Part D benefit may make it more likely that sponsors will stop offering prescription drug benefits for retirees. In addition, it is unclear to what extent sponsors will continue to select the same MMA option in the future. To the extent that sponsors that have accepted the RDS select other MMA options, sponsors' provision of retiree health benefits may change. In commenting on a draft of this report, CMS and four experts agreed with the report's findings. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
DHS invests in a wide array of complex acquisitions to achieve its national security mission. DHS components and offices sponsor investments to address mission capability gaps and are the end-users of the developed acquisitions. DHS has stated that the Undersecretary for Management, as the Chief Acquisition Officer, is responsible for acquisition policy. The purpose of DHS’s investment review and budget processes is to provide oversight of these major investments. Specifically, DHS established the investment review process in 2003 to help protect its major investments by providing departmental oversight of major investments throughout their life cycles and to help ensure that funds allocated for investments through the budget process are being spent wisely, efficiently, and effectively. In 2005, we reported that this process adopted many acquisition best practices that, if applied consistently, could help increase the chances for successful outcomes. However, we noted that incorporating additional program reviews and knowledge deliverables into the process could better position DHS to make well-informed decisions. In 2007, we further reported that DHS had not fully defined and documented policies and procedures for investment management or fully implemented key practices needed to control its information technology (IT) investments. To strengthen DHS’s investment management capability, we recommended that the department fully define and document project- and portfolio-level policies and procedures and implement key control processes. In addition to the investment review process, the DHS budget process serves as the framework for decision making for ongoing and future DHS programs. The framework is cyclic, consisting of planning, programming, budgeting, and execution phases that examine existing program funding and link the funding to program performance to ensure that funds are expended appropriately and that they produce the expected results and benefits. The investment review process framework manages investment risk by developing an organized, comprehensive, and iterative approach to identifying; assessing; mitigating; and continuously tracking, controlling, and documenting risk tailored to each project. The investment review process has four main objectives: (1) identify investments that perform poorly, are behind schedule, are over budget, or lack capability, so officials can identify and implement corrective actions; (2) integrate capital planning and investment control with resource allocation and investment management; (3) ensure that investment spending directly supports DHS’s mission and identify duplicative efforts for consolidation; and (4) ensure that DHS conducts required management, oversight, control, reporting, and review for all major investments. The process requires event-driven decision making by high-ranking executives at a number of key points in an investment’s life cycle. The investment review process provides guidance to components for all DHS investments, but it requires formal department-level review and approval only for major investments—those that are categorized as level 1 or 2 (see table 1). The investment review process has two types of reviews: programmatic and portfolio. Programmatic reviews are held at specific milestones and require documentation and discussion commensurate with the investment’s life cycle phase.
These reviews contribute to the investment review goal of identifying investments that perform poorly, are behind schedule, are over budget, or lack capability, so officials can identify and implement corrective actions. Portfolio reviews are designed to identify efforts for consolidation and mission alignment by monitoring and assessing broad categories of investments that are linked by similar missions to ensure effective performance, minimization of overlapping functions, and proper funding. The Investment Review Board (IRB) and Joint Requirements Council (JRC) are responsible for reviewing, respectively, level 1 and level 2 investments at key milestone decision points, but no less than annually, and for providing strategic guidance (see table 2). In addition to requiring department-level review, DHS policy directs component heads to conduct appropriate management and oversight of investments and establish processes to manage approved investments at the component level. The investment review process has three broad life cycle stages, covering five investment phases and four decision points or milestones (see fig. 1). In the preacquisition stage, gaps are to be identified and capabilities to address them defined. In the first phase of the acquisition stage—concept and technology development—requirements are to be established and alternatives explored. In the next phase—capability development and demonstration—prototypes are to be developed. In the final acquisition phase, the assets are produced and deployed. With the high dollar thresholds and inherent risk of level 1 and level 2 investments, IRB or JRC approval at milestone decision points is important to ensure that major investment performance parameters and documentation are satisfactorily demonstrated before the investment transitions to the next acquisition phase. IRB and JRC milestone reviews are not required once an investment reaches the sustainment phase. As designed, knowledge developed during each investment phase is to be captured in key documents and is to build throughout the investment life cycle. Performing the disciplined analysis required at each phase is critical to achieving successful outcomes. The main goals of the first investment phase, program initiation, are to determine gaps in capabilities and then describe the capabilities to fill the gap—this information is then captured in the mission needs statement. If the mission needs statement is approved, the investment then moves to the concept and technology development phase, which focuses on setting both requirements and important baselines for managing the investment throughout its life cycle. A key step in this phase is translating needs into specific operational requirements, which are captured in the operational requirements document. Operational requirements provide a bridge between the functional requirements of the mission needs statement and the detailed technical requirements that form the basis of the performance specifications, which will ultimately govern development of the system. Once the program has developed its operational requirements document, it then uses these requirements to inform the development of its acquisition program baseline, a critical document that addresses the program's cost, schedule, and performance parameters in measurable terms. See figure 2 for a description of the documents.
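The document-driven gating just described can be made concrete with a small sketch. The code below is a hypothetical model, not DHS's actual policy encoding: the phase and document names follow the report's figures, but the mapping of documents to phases and the class and field names are illustrative assumptions.

from dataclasses import dataclass, field

# Documents that, per the narrative above, should be approved before an
# investment exits each phase (illustrative mapping, not DHS policy text).
REQUIRED_DOCS = {
    "program initiation": {"mission needs statement"},
    "concept and technology development": {
        "operational requirements document",
        "acquisition program baseline",
    },
}

@dataclass
class Investment:
    name: str
    phase: str
    approved_docs: set = field(default_factory=set)

def missing_documents(inv: Investment) -> set:
    """Return the documents still unapproved for the investment's current phase."""
    return REQUIRED_DOCS.get(inv.phase, set()) - inv.approved_docs

example = Investment(
    name="Example Program",
    phase="concept and technology development",
    approved_docs={"mission needs statement", "operational requirements document"},
)
print(missing_documents(example))  # {'acquisition program baseline'}

A milestone review board following this logic would withhold approval until missing_documents returns an empty set.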
The department's budget policy has two main objectives: (1) articulate DHS goals and priorities, and (2) develop and implement a program structure and resource planning to accomplish DHS goals. DHS uses the process to determine investment priorities and allocate resources each year. The budget process emphasizes the importance of ensuring that investments expend funds appropriately and that investment performance produces the expected benefits or results. IRB decisions and guidance regarding new investments are to be reflected to the extent possible in any iteration of the budget. The Office of the Chief Financial Officer (CFO) manages the budget process. DHS has not effectively implemented or adhered to its investment review process due to a lack of involvement by senior officials as well as limited resources and monitoring; consequently, DHS has not identified and addressed cost, schedule, and performance problems in many major investments. Poor implementation largely rests on DHS's inability to ensure that the IRB and JRC effectively carried out their oversight responsibilities. Of 48 major investments requiring department-level review, 45 were not reviewed in accordance with the department's investment review policy, and 18 were not reviewed at all. In the absence of IRB and JRC meetings, investment decisions were reached outside of the required review process. Moreover, when IRB meetings were held, DHS did not consistently enforce decisions that were reached because the department did not track whether components and offices took the actions required by the IRB. In addition, 27 major investments have not developed or received DHS approval for basic acquisition documents required to guide and measure the performance of program activities—and the investment review process. Of those, over a third reported cost, schedule, or performance breaches between fiscal year 2007 and the second quarter of fiscal year 2008. According to DHS representatives, acquisition management practices are still new to many components, and we found 24 investments lacked certified program managers needed to develop basic acquisition documents. We found that two out of nine components do not have required component-level review processes to adequately manage their major investments. DHS has recognized these deficiencies and began efforts in 2007 to clarify and better adhere to the investment review process. Of DHS's 48 major investments requiring department-level review between fiscal year 2004 and the second quarter of fiscal year 2008, only three had all milestone and annual reviews. Of the 39 level 1 investments requiring IRB review and approval to proceed to the next acquisition phase, as of March 2008, 18 have never been reviewed by the IRB—4 of which have already reached production and deployment. The remaining 21 level 1 investments received at least one milestone or annual review through the investment review process. None of the 9 level 2 investments had JRC review and approval. DHS policy provides that its major investments be reviewed no less than yearly. However, in fiscal year 2007, the most recent year for which data were available, only 7 of the 48 required annual reviews were conducted. As a result, DHS lacked the information needed to address cost, schedule, and performance deficiencies—a problem we identified with over one-third of DHS's major investments between fiscal year 2007 and the second quarter of fiscal year 2008.
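Because DHS policy calls for each major investment to be reviewed at least annually, the compliance gap described above reduces to a simple date check. A minimal sketch, with invented investment names and review dates:

from datetime import date

# Hypothetical last department-level review dates; None means never reviewed.
last_review = {
    "Investment A": date(2007, 3, 15),
    "Investment B": None,
    "Investment C": date(2006, 11, 2),
}

def overdue_reviews(as_of: date) -> list:
    """Flag investments never reviewed, or not reviewed within the past year."""
    return [
        name
        for name, reviewed in last_review.items()
        if reviewed is None or (as_of - reviewed).days > 365
    ]

print(overdue_reviews(date(2008, 3, 31)))
# ['Investment A', 'Investment B', 'Investment C']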
In our prior work on the Department of Defense (DOD), we found that when such reviews are skipped or not fully implemented, programs build momentum and move toward product development with little, if any, early department-level assessment of the costs and feasibility. Committing to programs before they have this knowledge contributes to poor cost, schedule, and performance outcomes. DHS level 1 investments that were never reviewed through the IRB process include some of the department's largest investments with important national security objectives. For example, the Federal Emergency Management Agency's (FEMA) Consolidated Alert and Warning System, which has estimated life-cycle costs of $1.6 billion, includes programs to update the Emergency Alerting System and other closely related projects. In 2007, we reported that FEMA faces technical, training, and funding challenges to develop an integrated alert and warning system. Customs and Border Protection's (CBP) Secure Freight Initiative, which has estimated life-cycle costs of $1.7 billion, is designed to test the feasibility of scanning 100 percent of U.S.-bound cargo containers with nonintrusive equipment and radiation detection equipment at foreign seaports. Earlier this year, we reported that the Secure Freight Initiative faces a number of challenges, including measuring performance outcomes, logistical feasibility of some aspects of the investment, and technological issues. While these two investments are still in the concept and technology development phase, other major investments that have not been reviewed are even further along in the investment life cycle—when problems become more costly to fix. For example, CBP's Western Hemisphere Travel Initiative, with estimated life-cycle costs of $886 million, is in capability development and demonstration. The investment aims to improve technologies to identify fraudulent documentation at U.S. ports of entry. We recently reported that because key elements of planning for the investment's management and execution remain uncertain, DHS faces challenges deploying technology, and staffing and training officers to use it. Reviews of the 9 level 2 investments—those with acquisition costs between $50 million and $100 million, or $100 million to $200 million for information technology—were similarly lacking. While the JRC met periodically between fiscal years 2004 and 2006, senior officials stated that it did not make approval decisions about any level 2 investments. As a result, investments such as the following—which are all now in the operations and support phase—were not reviewed and approved by the JRC:
FEMA's Total Asset Visibility, which has $91 million in estimated life-cycle costs, aims to improve emergency response logistics in the areas of transportation, warehousing, and distribution.
The Transportation Security Administration's (TSA) Hazardous Threat Assessment Program, which has $181 million in estimated life-cycle costs, was developed to perform a security threat assessment on applicants for licenses to transport hazardous materials.
The National Protection and Programs Directorate's National Security and Emergency Preparedness investment, which has $1.8 billion in estimated life-cycle costs, aims to provide specially designed telecommunications services to the national security and emergency preparedness communities in the event of a disaster if conventional communication services are ineffective.
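The level 2 cost bands quoted above imply a simple classification rule. The sketch below applies those bands; treating anything above a band's ceiling as level 1 is an assumption consistent with the report's description of level 1 investments as the largest, and the component-managed levels 3 and 4 are collapsed into a single return value.

def investment_level(acquisition_cost: float, is_it: bool) -> int:
    """Classify an investment by acquisition cost in dollars.

    Level 2 bands come from the report: $50 million-$100 million generally,
    or $100 million-$200 million for information technology. Treating costs
    above the ceiling as level 1 is an illustrative assumption.
    """
    low, high = (100e6, 200e6) if is_it else (50e6, 100e6)
    if acquisition_cost > high:
        return 1
    if acquisition_cost >= low:
        return 2
    return 3  # levels 3 and 4 (component-managed) are not distinguished here

print(investment_level(1.7e9, is_it=False))  # 1
print(investment_level(91e6, is_it=False))   # 2
print(investment_level(150e6, is_it=True))   # 2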
During 2006, the JRC stopped meeting altogether after the chair was assigned to other duties within the department. DHS representatives recognized that since the JRC stopped meeting in 2006, there has been no direction for requirements or oversight of level 2 investments at the department level and that strengthening the JRC is a top priority. In the meantime, oversight of level 2 investments has devolved to the components. Without the appropriate IRB and JRC milestone reviews, DHS loses the opportunity to identify and address cost, schedule, and performance problems and, thereby, minimize program risk. Fourteen of the investments that lacked appropriate review through IRB and JRC oversight experienced cost growth, schedule delays, and underperformance—some of which was substantial. At least 8 investments reported cost growth between fiscal year 2007 and the second quarter of fiscal year 2008 (see table 3). Other programs experienced schedule delays and underperformance. For example, CBP's Automated Commercial Environment program reported a 20 percent performance shortfall in the first quarter of fiscal year 2008. Moreover, we reported in July 2008 that the Coast Guard's Rescue 21 program changed its acquisition baseline—that is, its cost, schedule, and performance goals—four times, resulting in a total of 182 percent cost growth and a 5-year schedule slip. DHS has acknowledged that the IRB and JRC have not conducted oversight in accordance with DHS policy—largely because the process has depended on the direct involvement and availability of high-level leadership and because staff resources to organize the review meetings have been insufficient. According to DHS representatives, the Deputy Secretary was unavailable to commit to the time required to conduct reviews of all investments, so only some major investments were reviewed. Our prior work shows that this problem existed from the start. For example, in 2004, we reported that DHS was having difficulty bringing all of its information technology programs before the IRB in a timely manner. We reported in 2005 that key stakeholders, such as the Chief Procurement Officer, did not receive materials in time to conduct a thorough review and provide meaningful feedback prior to investment review meetings and recommended that DHS ensure that stakeholders, including CPO officials, have adequate time to review investment submissions and provide formal input to decision-making review boards. Moreover, in 2007, we reported that DHS investment boards did not conduct regular investment reviews and control activities were not performed consistently across projects. DHS Chief Procurement Office and Chief Financial Office representatives added that the process was not adequately staffed to conduct annual reviews of investments as required by the investment review policy. We have previously recommended that DHS provide adequate resources, including people, funding, and tools, for oversight of major investments. A 2007 DHS assessment of 37 major investments found that many investments are awaiting senior management review. For example, FEMA's major investment, the flood map modernization program, requested a key investment review decision meeting in 2004 that was subsequently scheduled and cancelled in 2006. As a result, the program proceeded from development to operations and support without IRB review or approval. Because of these limitations, alternative approaches to obtaining decisions were adopted.
Numerous officials reported that, rather than going through the formal investment review process, in some cases DHS component officials began to seek approval directly from the Deputy Secretary. For example, in November 2006, the DHS Inspector General reported on CBP's Secure Border Initiative program, noting that the investment oversight processes were sidelined in the urgent pursuit of SBInet's aggressive schedule and that the IRB and JRC processes were bypassed, and key decisions about the scope of the program and the acquisition strategy were made without rigorous review and analysis or transparency. DHS officials indicated that some decisions were very informal, based on conversations with the Deputy Secretary and without input from other IRB members. In such cases, the investment review process was bypassed, including consideration of supporting reviews and recommendations. DHS CPO and CFO representatives said they did not always know whether a decision had been made through this informal process. DHS investment review policy requires programs to develop specific documentation that captures key knowledge needed to make informed investment decisions. This approach is similar to DOD's, which requires adequate knowledge at critical milestones to reduce the risk associated with each phase of the investment's life cycle and enable program managers to deliver timely, affordable, quality products. GAO's work on commercial best practices for major acquisitions has demonstrated that this approach, if effectively implemented, can significantly improve program outcomes. Our prior work has found that inadequate attention to developing requirements results in requirements instability, which can ultimately cause cost escalation, schedule delays, and fewer end items. Many major DHS investments do not have basic acquisition information required by investment review policy to guide and measure the performance of program activities—and the investment review process. In particular, mission needs statements, operational requirements documents, and acquisition program baselines establish capability gaps, requirements needed to address gaps, and cost, schedule, and performance parameters, respectively. As of March 2008, of the 57 level 1 and 2 investments, 34 were in a phase that required all three documents, but 27 either lacked or provided only an unapproved draft of one or more of these documents (see appendix III for the investments lacking these approved documents). Of the 27 investments, we found that over a third reported cost, schedule, or performance breaches between fiscal year 2007 and the second quarter of fiscal year 2008. For example, the Infrastructure Transformation program, which did not have an approved operational requirements document or acquisition program baseline, reported being up to 19 percent behind schedule in 2007. In another instance, the Immigration and Customs Enforcement (ICE) Detention and Removal Modernization program, which also lacked an approved operational requirements document and acquisition program baseline, reported schedule slippage of about 20 percent. Without required development and review of key acquisition data, DHS cannot be sure that programs have mitigated risks to better ensure good outcomes. CPO representatives explained that department acquisition management practices are new to many DHS components.
For most investments, CPO representatives said that program managers were not familiar with basic acquisition documents, and investment oversight staff had to work with program managers to help them develop these documents prior to investment reviews. In addition, we found that in fiscal year 2007, 24 major investments did not have program managers certified by DHS as having the required knowledge and skills to oversee complex acquisition programs. Moreover, other factors—such as pressure to get programs up and running, additional external requirements, and technological challenges—also affect the ability to manage acquisitions successfully and achieve good acquisition outcomes. At the same time, some component officials said that they received insufficient and inconsistent guidance regarding what information should be included in key acquisition documents. This issue is long-standing. For example, we reported in 2005 that because of the small number of department oversight staff, only limited support was provided to programs to assist them in completing their submissions for oversight reviews. In addition, component officials told us that key acquisition documents are sometimes approved at the component level but are not reviewed and approved at the department level. For example, TSA officials indicated that documents needed for the Secure Flight and Passenger Screening Programs were approved by TSA and submitted to DHS for approval, but no action was taken to review and approve them. The investment reviews that have been conducted have not always provided the discipline needed to help ensure programs achieve cost, schedule, and performance goals—even when a review identified important deficiencies in an acquisition decision memorandum. DHS has not routinely followed up on whether specific actions required by acquisition decision memorandums to mitigate potential risks have been implemented. The IRB issued a 2004 acquisition decision memorandum approving the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT)—a program that aims to facilitate travel and trade—to move into the capability development and demonstration phase, although the IRB found the investment's cost, schedule, and performance risk to be high. The memorandum stated that more clarity was needed on the program's end-state capability, benefits related to life-cycle costs, and how it planned to transition to the operations and support phase. We reported that in 2006 DHS had yet to develop a comprehensive plan describing what the end-state capability would be, and how, when, and at what cost it would be delivered. In a 2006 decision memorandum, the IRB again instructed US-VISIT to address the end-state capability by requiring a comprehensive, affordable exit plan for airports, seaports, and landports. We subsequently reported that, as of October 2007, US-VISIT had yet to establish critical investment management processes, such as effective project planning, requirements management, and financial management, which are required to ensure that program capabilities and expected mission outcomes are delivered on time and within budget. In addition, DHS had not developed capability for the other half of US-VISIT, even though it had allocated about one-quarter of a billion dollars to this effort.
In a May 2006 decision memorandum, the IRB directed the Cargo Advanced Automated Radiography System investment to develop within 6 months an acquisition program baseline, a concept of operations, and an operational requirements document. It also called for the investment to be reviewed annually. As of the second quarter of fiscal year 2008, a baseline and the concept of operations had been drafted, according to program officials. However, an operational requirements document had not been developed, even though a $1.3 billion contract had been awarded for the investment. In addition, the Cargo Advanced Automated Radiography System investment had not yet received a follow-on review by the IRB. In another example, in a December 2006 decision memorandum, the IRB directed ICE's major investment Automation and Modernization to update its acquisition program baseline, its cost-benefit analysis, and its life-cycle cost analysis. Automation and Modernization has since updated its acquisition program baseline, but its cost analyses were last updated in 2005. Current and former CPO and CFO representatives noted that staffing has not been sufficient to review investments in a timely manner and conduct follow-up to ensure decisions are implemented. They indicated that support was needed to undertake a number of functions, including designing the investment review process; collecting and reviewing investment documentation; preparing analyses to support investment decisions; organizing review meetings; and conducting follow-up for major investments. According to DHS representatives, from 2004 to 2007 there were four full-time equivalent DHS employees plus support from four contractors to fulfill those responsibilities. Many acquisition decision memos provided specific deadlines for components to complete action items, but, according to CPO and CFO representatives, IRB action items were not tracked. Without follow-up, the IRB did not hold components and major investment program offices accountable for addressing oversight concerns. DHS's investment review process requires that component heads establish processes and provide the requisite resources to manage approved investments adequately. Component heads are also responsible for approving all level 3 and level 4 investments and ensuring they comply with DHS investment review submission requirements. In the absence of sufficient review at the department level, well-designed component-level processes are particularly critical to ensuring that investments receive some level of oversight. For example, CBP and TSA officials reported that they relied on their component investment review processes to ensure some level of oversight when the department did not review their investments. Of the nine components we reviewed, however, two did not have a process in place, and others had processes that were either in development or not focused on the entire investment life cycle. For example, the Domestic Nuclear Detection Office and the National Protection and Programs Directorate did not have a formal investment review process, meaning that in the absence of an IRB or JRC review, their eight major investments received no formal review. While FEMA has a process to manage contract-related issues, its review process does not currently address the entire investment life cycle.
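The follow-up gap described above—memorandums that set deadlines no one tracked—is the kind of bookkeeping a very small tool can cover. A hypothetical sketch: the first two deadlines reflect the six-month window in the May 2006 CAARS memorandum, while the third deadline and all completion flags are invented for illustration.

from dataclasses import dataclass
from datetime import date

@dataclass
class ActionItem:
    program: str
    action: str
    due: date
    completed: bool = False

items = [
    ActionItem("CAARS", "operational requirements document", date(2006, 11, 30)),
    ActionItem("CAARS", "acquisition program baseline", date(2006, 11, 30)),
    ActionItem("Automation and Modernization", "update life-cycle cost analysis",
               date(2007, 6, 30)),
]

def overdue_items(as_of: date) -> list:
    """Return incomplete action items whose deadline has passed."""
    return [i for i in items if not i.completed and i.due < as_of]

for item in overdue_items(date(2008, 3, 31)):
    print(f"{item.program}: '{item.action}' due {item.due}")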
According to CPO representatives, the department is working with components to ensure that components have a process in place to manage investments and to have them designate an acquisition officer who is accountable for major investments at the component level. DHS has acknowledged that the investment review process has not been fully implemented. In fact, the process has been under revision since 2005. DHS has begun to make improvements to the planning, execution, and performance of major investments as initial steps to clarify and better adhere to the investment review process. To gain an understanding of DHS's major investments, in 2007, during the course of our review, the Undersecretary for Management undertook an assessment, conducted under the CPO's direction, of 37 major investments. The assessment identified a range of systemic weaknesses in the implementation of the department's investment review process and in the process itself. The DHS assessment found that many level 1 investments awaited leadership decisions; that acquisition decision memos lacked rigor; and that there was a lack of follow-up and enforcement of oversight decisions, inadequate technical support at the investment level, and unclear accountability for acquisitions at the component level. Many of the deficiencies identified are consistent with our findings. For example, the DHS assessment of Citizenship and Immigration Services (CIS) found that investments were either missing key investment management documents or using draft or unsigned versions of them, limiting DHS's ability to measure the investments' performance. In one case, DHS found that the Verification Information Systems investment was poorly defined. In another case, DHS reported that CIS's investment Transformation was using draft and unsigned acquisition documents, including its mission needs statement, acquisition plan, and acquisition program baseline. In 2007, we reported that CIS had not finalized its acquisition strategy for Transformation, so cost estimates remained uncertain; that plans did not sufficiently discuss enterprise architecture alignment and expected project performance; and that these gaps created risks that could undermine Transformation's success as it began to implement its plans. In addition, DHS found that CIS's investment Customer Service Web Portal did not have key investment management documents and that the investment's performance could not be adequately assessed. Similarly, DHS found that CIS's investment Integrated Document Production did not have performance measures or documentation that performance metrics had been implemented to measure program cost, schedule, and performance execution. To address the findings of its 2007 review, DHS is taking steps to reiterate the DHS investment review policy and establish a more disciplined and comprehensive investment review process. Beginning in February 2008, interim policies were issued by the Undersecretary for Management to improve management of major investments pending release of a new investment review management directive. Specifically, the Undersecretary for Management issued a memorandum in February 2008 on initiating efforts to improve the quality of acquisition program baselines for level 1 investments, and another in July 2008 on improving life-cycle cost estimating for major investments. To help address the backlog of investments awaiting review, the CPO has begun to review and issue acquisition decision memorandums for each level 1 program.
As of August 2008, acquisition decision memorandums had been completed for three programs. The memorandums indicate documentation that must be completed, issues that must be addressed, and related completion dates before investment approval is given. The memorandums also identify any limits or restrictions on the program until those actions are completed. Further, the Undersecretary for Management signed an interim acquisition management directive in November 2008 to improve acquisition management and oversight pending results from a formal DHS executive review. DHS's annual budget process for funding major investments has not been appropriately informed by the investment review process—largely because the IRB seldom conducted oversight reviews and, when it did, the two processes were not aligned to better ensure funding decisions fulfill mission needs. While DHS's investment review framework integrates the two processes—an approach similarly prescribed by GAO and OMB capital planning principles—many major investments received funding without determining that mission needs and requirements were justified. In addition, two-thirds of DHS major investments did not have required life-cycle cost estimates, which are essential to making informed budget and capital planning decisions. At the same time, DHS has not conducted regular reviews of its investment portfolios—broad categories of investments—to ensure effective performance and minimize unintended duplication of effort for proposed and ongoing investments. In July 2008, more than one-quarter of DHS's major investments were designated by OMB as poorly planned and by DHS as poorly performing. The DHS Undersecretary for Management has said that strengthening the links between investment review and budget decisions is a top priority. OMB and GAO capital planning principles underscore the importance of a disciplined decision-making and requirements process as the basis to ensure that investments succeed with minimal risk and lowest life-cycle cost. This process should provide agency management with accurate information on acquisition and life-cycle costs, schedules, and performance of current and proposed capital assets. The OMB Capital Programming Guide also stresses the need for agencies to develop processes for making investment decisions that deliver the right amount of funds to the right projects. In addition, OMB and GAO guidance provide that an investment review policy should seek to use long-range planning and a disciplined, integrated budget process for portfolio management to achieve performance goals at the lowest life-cycle cost and least risk to the taxpayer and the government. Investment portfolios are integrated, agencywide collections of investments that are assessed and managed collectively based on common criteria. Managing investments as portfolios is a conscious, continuous, and proactive approach to allocating limited resources among an organization's competing initiatives in light of the relative benefits expected from these investments. Our prior work at DOD has shown that fragmented decision-making processes do not allow for a portfolio management approach to make investment decisions that benefit the organization as a whole. The absence of an integrated approach can contribute to duplication in programs and equipment that does not operate effectively together.
GAO's best practices work also emphasizes that (1) a comprehensive assessment of agency needs should be conducted, (2) current capabilities and assets should be identified to determine if and where a gap may lie between current and needed capabilities, and (3) options for how best to meet the identified gap should be evaluated. The approved mission needs statement must support the need for a project before the project can proceed to the acquisition phase. OMB guidance states that in creating capital plans, agencies should identify a performance gap between the existing portfolio of agency assets and the mission need that is not filled by the agency's asset portfolio. Moreover, best practices indicate that investment resources should match valid requirements before approval of investments. The DHS investment review process calls for IRB decisions and program guidance regarding new investments to be reflected to the extent possible in the budget. The DHS budget process consists of overlapping planning, programming, budgeting, and execution phases that examine existing program funding and link funding to program performance to ensure funds are expended appropriately and produce the expected results and benefits (see fig. 3). Annually, components submit resource allocation proposals for major investments to the CFO for review in March and, in turn, resource allocation decisions are provided to components in July. According to CFO representatives, information from investment oversight reviews would be useful to inform annual investment resource allocation decisions. CFO representatives explained that the CFO sought to align resource allocation decisions with the IRB approvals in 2006, but this was not possible because of the erratic investment review meeting schedule. As a result, a number of CFO and CPO representatives confirmed that funding decisions for major investments have not been contingent upon the outcomes of the investment review process. One of the primary functions of the IRB is to review and approve level 1 investments for formal entry into the annual budget process. However, we found that 18 of DHS's 57 major investments did not have an approved mission needs statement—a document that formally acknowledges that the need is justified and supported. Specifically, the statement summarizes the investment requirement, the mission or missions that the investment is intended to support, the authority under which the investment was begun, and the funding source for the investment. As such, approval of the mission needs statement is required at the earliest stages of an investment. Lacking information on which major investments have validated mission needs, the CFO has allocated funds for major investments for which a capability gap has not been established. We reported in 2007 that DHS risked selecting investments that would not meet mission needs in the most cost-effective manner. The 18 investments that lacked an approved mission needs statement accounted for more than half a billion dollars in estimated fiscal year 2008 appropriations (see table 4). In addition, two-thirds of major investment budget decisions were reached without a life-cycle cost estimate. A life-cycle cost estimate provides an exhaustive and structured accounting of all resources and associated cost elements required to develop, produce, deploy, and sustain a particular program.
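As a back-of-the-envelope illustration of that accounting, a life-cycle cost estimate rolls up the costs of every phase of an asset's life. The categories and figures below are invented and stand in for the far more detailed cost-element structures real estimates use.

# Illustrative life-cycle cost roll-up; all figures hypothetical, in millions.
lifecycle_costs = {
    "development": 120.0,      # design, prototyping, testing
    "production": 340.0,       # manufacturing and procurement
    "deployment": 60.0,        # installation, training, initial spares
    "sustainment": 25.0 * 20,  # operations and maintenance, 20-year service life
    "disposal": 15.0,          # retirement at end of life
}

total = sum(lifecycle_costs.values())
print(f"Estimated life-cycle cost: ${total:,.0f} million")  # $1,035 million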
Life-cycle costing enhances decision making, especially in early planning and concept formulation of an acquisition, and can support budgetary decisions, key decision points, milestone reviews, and investment decisions. GAO and OMB guidance emphasize that reliable cost estimates are important for program approval and continued receipt of annual funding. DHS policy similarly provides that life-cycle cost estimates are essential to an effective budget process and form the basis for annual budget decisions. However, 39 of the 57 level 1 and level 2 major DHS investments we reviewed did not have a life-cycle cost estimate. Moreover, DHS's 2007 assessment of 37 major investments found investments without life-cycle cost estimates and noted poor cost estimating as a systemic issue. Without such estimates, DHS major investments are at risk of experiencing cost overruns, missed deadlines, and performance shortfalls. Cost increases often mean that the government cannot fund as many programs as intended. To begin to address this issue, the DHS Undersecretary for Management issued a memo in July 2008 initiating an effort to review and improve the credibility of life-cycle cost estimates for all level 1 investments prior to formal milestone approval. The JRC is responsible for managing the department's level 1 and level 2 major investment portfolios and making portfolio-related recommendations to the IRB. Managing investments as portfolios is a continuous and proactive approach to allocating finite resources among an organization's competing initiatives in light of the relative benefits expected from these investments. Taking a portfolio perspective allows an agency to determine how its collective investments can optimally address its strategic goals and objectives. As part of this responsibility, the JRC is expected to identify crosscutting opportunities and overlapping or common requirements and determine how best to ensure that DHS uses its finite resources wisely in those areas. Specifically, the JRC reviews investments to identify duplicative mission capabilities and to assess redundancies. While a certain amount of redundancy can be beneficial, our prior work has found that unintended duplication indicates the potential for inefficiency and waste. The Enterprise Architecture Board supports the JRC by overseeing the department's enterprise architecture and performing technical reviews of level 1 and level 2 IT investments. In 2007, we reported that DHS did not have an explicit methodology and criteria for determining program alignment to the architecture. We further reported that DHS policies and procedures for portfolio management had yet to be defined, and as a result, control of the department's investment portfolios was ad hoc. When it met regularly, the JRC played a key role in identifying several examples of overlapping investments, including passenger screening programs. Specifically, in March 2006, the JRC identified programs that had potential overlaps, including TSA's Secure Flight, TSA's Registered Traveler, and CBP's Consolidated Registered Traveler programs, yet the programs lacked coordination and were struggling with interoperability and information sharing. Because the JRC stopped meeting soon thereafter, DHS may have missed opportunities to follow up on these cases or identify further cases of potential overlap.
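The crosscutting screen the JRC performed can be sketched as a comparison of the capabilities each investment claims. In the hypothetical example below, the program names are the ones the JRC flagged in March 2006, but the capability tags are invented for illustration.

from itertools import combinations

# Invented capability tags attached to program names from the report.
capabilities = {
    "Secure Flight (TSA)": {"passenger prescreening", "watch-list matching"},
    "Registered Traveler (TSA)": {"passenger prescreening", "traveler enrollment"},
    "Consolidated Registered Traveler (CBP)": {"passenger prescreening",
                                               "traveler enrollment"},
}

# Flag every pair of investments claiming a capability in common.
for (a, caps_a), (b, caps_b) in combinations(capabilities.items(), 2):
    shared = caps_a & caps_b
    if shared:
        print(f"Potential overlap: {a} and {b} -> {sorted(shared)}")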
In 2007, we reported that while TSA and CBP had begun coordinating efforts, they had yet to align their passenger prescreening programs to identify potential overlaps and minimize duplication. We recommended that DHS take additional steps and make key policy and technical decisions that were necessary to more fully coordinate these programs. TSA and CBP have since worked with DHS to develop a strategy to align regulatory policies and coordinate efforts to facilitate consistency across their programs. In another case, we reported that CIS's Transformation investment has been conducted in an ad hoc and decentralized manner and, in certain instances, is duplicative of other IT investments. DHS's 2007 assessment of 37 major investments also identified potential overlap and duplication of effort between investments. Overall, the review found that limited communication and coordination across components led to overlapping DHS programs. For example, DHS found that the CIS Verification Information System had potential duplication of requirements implementation with the National Protection and Programs Directorate's U.S. Computer Emergency Readiness Team investment. In another instance, DHS found the CIS Integrated Document Production investment had an unclear relationship to other DHS credentialing investments. OMB requires all agencies, including DHS, to submit program justification documents for major investments to inform both quantitative decisions about budgetary resources consistent with the administration's program priorities, and qualitative assessments about whether the agency's programming processes are consistent with OMB policy and guidance. To help ensure that investments of public resources are justified and that public resources are wisely invested, OMB began using a Management Watch List in the President's fiscal year 2004 budget request as a means to oversee the justification for and planning of agencies' information technology investments. This list was derived based on a detailed review of each investment's Capital Asset Plan and Business Case. In addition, OMB has established criteria for agencies to use in designating high-risk projects that require special attention from oversight authorities and the highest levels of agency management. These projects are not necessarily at risk of failure, but may be on the list for one or more of the following four reasons:
The agency has not consistently demonstrated the ability to manage complex projects.
The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency's total portfolio.
The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization.
Delay or failure of the project would introduce for the first time unacceptable or inadequate performance or failure of an essential mission function of the agency, a component of the agency, or another organization.
According to DHS officials, without input from investment oversight reviews, a limited budget review of program justification documents prior to OMB submittal can be the only oversight provided for some DHS major investments. CFO representatives told us that in the absence of investment review decisions, they rely on the best available information provided by program managers in order to determine if funding requests are reasonable.
As a result, major investment programs can proceed regardless of whether the investment has received the appropriate IRB review or has the required acquisition documents. We reported that, as of July 2008, 15 DHS major investments were on both the OMB Management Watch List and the list of high-risk projects with shortfalls, meaning that they are both poorly planned and poorly performing. According to DHS officials, the funding, programming, and budget execution process is not integrated into the requirements and acquisition oversight process, and the DHS Undersecretary for Management has said that strengthening these processes is a top priority. The challenges DHS faces in implementing its investment review process are long-standing and have generally resulted in investment decisions that are inconsistent with established policy and oversight. Concurrent with this lack of oversight are acquisition programs worth billions of dollars with cost, schedule, and performance deficiencies. Weaknesses in some component management practices compound the problem, leaving investments with little or no scrutiny or review. While the department's process has been under revision since 2005, DHS has begun new efforts to clarify and better adhere to the investment review process. Without validating mission needs, requirements, and program baselines (including costs); identifying duplicative efforts; and monitoring progress, DHS cannot appropriately manage investments and inform the budget process. Until DHS aligns oversight of major investments with annual budget decisions, the department is at risk of failing to invest in programs that maximize resources to address capability gaps and ultimately help meet critical mission needs. We recommend that the Secretary of Homeland Security direct the Undersecretary for Management to take the following five actions to better ensure the investment review process is fully implemented and adhered to:
Establish a mechanism to identify and track on a regular basis new and ongoing major investments and ensure compliance with actions called for by investment oversight boards.
Reinstate the JRC or establish another departmental joint requirements oversight board to review and approve acquisition requirements and assess potential duplication of effort.
Ensure investment decisions are transparent and documented as required.
Ensure that budget decisions are informed by the results of investment reviews, including IRB-approved acquisition information and life-cycle cost estimates.
Identify and align sufficient management resources to implement oversight reviews in a timely manner throughout the investment life cycle.
To improve investment management, we recommend that the Secretary of Homeland Security direct component heads to take the following two actions:
Ensure that components have established processes to manage major investments consistent with departmental policies.
Establish a mechanism to ensure major investments comply with established component and departmental investment review policy standards.
We provided a draft of this report to DHS for review and comment. In written comments, the department generally concurred with our findings and recommendations, citing actions taken and efforts under way to improve the investment review process. The department's comments are reprinted in appendix II. DHS components also provided technical comments, which we incorporated as appropriate where supporting documentation was provided.
In addition, several DHS components and offices reported additional progress since the time of our review to ensure their major investments comply with departmental policies. DHS is taking important steps to strengthen investment management and oversight. After the process had been under revision since 2005, DHS issued a new interim management directive on November 7, 2008, that outlines a revised acquisition and investment review process. DHS also cited two new offices within the Chief Procurement Office that were established to provide better acquisition management and oversight; recently completed program reviews; and plans to revise training, standards, and certification processes for program managers. While many of these efforts are noted in our report, investment management and oversight have been an ongoing challenge since the department was established, and continued progress and successful implementation of these recent efforts will require sustained leadership and management attention. DHS stated that the new interim acquisition management directive will address many of our recommendations; however, our work has found that DHS has not fully implemented similar steps in the past. For example, in response to our first recommendation, to establish a mechanism to identify and track on a regular basis new and ongoing major investments and ensure compliance with actions called for by investment review board decisions, DHS's new interim directive requires major programs to participate in an acquisition reporting process. While DHS is in the process of implementing a Next Generation Periodic Reporting System, it is too soon to tell whether this system will be successfully implemented. DHS's first-generation periodic reporting system was never fully implemented, making it difficult for the department to track and enforce investment decisions. In response to our second recommendation, to reinstate the JRC or establish another departmental joint requirements oversight board to review and approve acquisition requirements and assess potential duplication of effort, DHS stated it has already developed a new Strategic Requirements Review process to assess capability needs and gaps; completed pilots; and briefed senior leadership. According to DHS's new interim acquisition management directive, the results of this process are to be validated by the JRC, which is still in the process of being established and for which no timeline was provided. Further, as we found in this report, when the JRC was previously established in 2004, it was never fully implemented due to a lack of senior management officials' involvement. In response to our third recommendation, that DHS ensure investment decisions are transparent and documented as required, DHS stated that its new interim acquisition management directive already implements this by requiring acquisition documentation for each acquisition decision event and capturing decisions and actions in acquisition decision memorandums. DHS also reported that it has conducted eight Acquisition Review Board meetings with documented Acquisition Decision Memorandums. While this progress is notable, our work has found that since 2004, DHS's investment review board has not been able to effectively carry out its oversight responsibilities and keep pace with investments requiring review due to a lack of senior officials' involvement as well as limited monitoring and resources.
It is too soon to tell whether DHS's latest efforts will be sustained to ensure investments are consistently reviewed as needed. Regarding our fourth recommendation, that the department ensure budget decisions are informed by the results of investment reviews, the new interim management directive creates a link between the budget and requirements processes and describes interfaces with other investment processes. While this process is more clearly established in the new directive, its implementation will be evidenced in the documents produced during upcoming budget cycles. We found in this report that the previous investment review process also highlighted links to the budget and other investment processes, yet the results of oversight reviews did not consistently inform budget decisions. In response to our fifth recommendation, to identify and align sufficient management resources to implement oversight reviews in a timely manner throughout the investment life cycle, DHS stated that it has partially implemented the recommendation by establishing a senior executive–led Acquisition Program Management Division within the Office of the CPO and plans to increase staffing from its current level of 12 experienced acquisition and program management specialists to 58 by the end of fiscal year 2010. Creating a new division to manage oversight reviews is a positive step; however, we have found that DHS has been challenged to provide sufficient resources to support its acquisition oversight function, and the CPO's office has had difficulty filling vacancies in the past. Regarding our two recommendations to improve investment management at the component level, DHS noted that the new interim management directive requires components to align their internal policies and procedures by the end of the third quarter of fiscal year 2009 (June 2009). In addition, DHS plans to issue another management directive that will instruct component heads to create component acquisition executives in their organizations to be responsible for the implementation of management and oversight of component acquisition processes. If fully implemented, these steps should help to ensure that components have established processes to manage major investments. DHS further noted that establishment of the Acquisition Program Management Division, the new interim acquisition management directive, reestablishment of the acquisition review process, and other steps work together to ensure major investments comply with established component and departmental investment review policy standards. To implement this recommendation, the new component acquisition executives will need to be in place and successfully implement and ensure compliance with the new processes. DHS will continue to face the ongoing challenges to implementing an effective investment review process that are identified in this report and highlighted in the department's Integrated Strategy for High Risk Management. For example, consistent with our findings, the strategy cites challenges to ensuring availability of leadership to conduct investment reviews; timely collection and assessment of program data; and sufficient staff to support the investment review process. Sustained leadership focus will be even more critical to implement changes and maintain progress on acquisition management challenges as the department undergoes its first executive branch transition in 2009.
As agreed with your offices, unless you publicly announce the contents of this report, we plan no further distribution for 30 days from the report date. At that time, we will send copies of this report to interested congressional committees and the Secretary of Homeland Security. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have questions regarding this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were Amelia Shachoy, Assistant Director; William Russell; Laura Holliday; Nicole Harkin; Patrick Peterson; Karen Sloan; Marie Ahearn; and Kenneth Patton. Our objectives were to (1) evaluate DHS's implementation of the investment review process, and (2) assess DHS's integration of the investment review and budget processes to ensure major investments fulfill mission needs. To assess how the investment review process has been implemented, we reviewed the DHS Investment Review Process management directive and corresponding handbook to determine which major investments required DHS review. In doing so, we focused on determining such key factors as how frequently major investments required oversight reviews and what documents, such as mission needs statements and acquisition program baselines, are required to be approved by DHS executive review boards. We included in our analyses 57 level 1 and level 2 investments that DHS identified for fiscal year 2008. We determined the level of oversight provided to 48 of these major investments—those that required department-level review from fiscal year 2004 through the second quarter of fiscal year 2008. We also interviewed representatives of the Chief Procurement Office (CPO), Chief Financial Office (CFO), and Chief Information Office as well as nine DHS components and offices that manage major investments. We then collected investment review and program documents for each major investment and compared the information to investment review policy requirements. We also reviewed acquisition decision memorandums from fiscal year 2004 through the second quarter of fiscal year 2008. Based on the decision memos and investment information, we determined how many investments had been reviewed in accordance with DHS policy during that period. We also reviewed prior GAO reports on DHS programs as well as commercial best practices for acquisition. We reviewed DHS documents such as interim policy memos and guidance and interviewed CPO staff regarding planned revisions to the investment review process. We also compared our findings with a 2007 DHS internal assessment of 37 major investments. In addition, we reviewed available DHS periodic reports on major investments as well as component operational status reports to identify instances of cost growth, schedule slips, and performance shortfalls for major investments and to determine the status of program manager certification from fiscal year 2007 through the second quarter of fiscal year 2008. This information is self-reported by DHS major program offices; not all programs provided complete information, and we did not independently verify the information in these reports.
To assess the integration of investment review and the budget process, we reviewed DHS management directives for the investment review process and the planning, programming, budgeting, and execution process, as well as corresponding guidance. We also interviewed representatives from the Chief Procurement Office and Chief Financial Office to discuss how the processes have been integrated since 2004. We used investment data and acquisition documents from each major investment program to determine which had required life-cycle cost estimates and other documents, such as validated mission needs statements. We also reviewed fiscal year 2009 DHS budget justification submissions to OMB. We compared DHS budget practices with GAO and Office of Management and Budget (OMB) guidance on capital programming principles. In addition, we reviewed relevant GAO reports. We conducted this performance audit from September 2007 until November 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Mission needs statements, operational requirements documents, and acquisition program baselines establish capability gaps, requirements needed to address those gaps, and cost, schedule, and performance parameters, respectively. Of the 57 DHS level 1 and 2 investments, 34 were in an acquisition phase that required all three documents; 27 either did not develop or provided only an unapproved draft of one or more of these documents (see table 5). Some major investment programs provided acquisition program baselines approved at the component level that were submitted but did not receive department review and approval.
Appendix IV: Department of Homeland Security Investments Reviewed by GAO
Provides fusion and visualization of information to create timely and accurate situational awareness reports for the Secretary of Homeland Security, the White House, and other users to detect, deter, and prevent terrorist activities.
Homeland Security Information System: Facilitates information sharing and collaboration across DHS and its partners; enables real-time sharing of threat information for tactical first-responder support; and supports decision making in a real-time secure environment.
Designed to support processing of applications and petitions, capture fees and provide funds control, provide case status and support, and record the results of the adjudication of each application and petition.
Established to provide naturalization processing, interface with associated databases, improved accuracy, and more timely and accurate information to the public.
Provides resources for all web development and maintenance activities. Includes web content management, development of web-based transactions with Citizenship and Immigration Services customers and staff, web site maintenance, and web site hosting.
Provides integrated card production system printers' hardware and software, operational contract support, and facilities required to print secure cards granting immigration privileges or benefits to applicants.
A system to allow all new immigration benefits applications and petitions to be filed electronically through a Citizenship and Immigration Services Internet web-based portal.
- Citizenship and Immigration Services will have a more comprehensive view of the customer and any potentially fraudulent transactions; improved audit functionality and record management; better resource management; and increased sharing of information within DHS and with other agency partners such as Justice and State.
- Supports the Systematic Alien Verification for Entitlements Program by providing automated status-verification information to federal, state, and local benefit-granting and entitlement agencies, and the E-Verify program by allowing participating employers to verify that their new employees are authorized to work in the United States.
- Aims to replace and modernize most of the Coast Guard's fleet of offshore cutters, boats, aircraft, and command and control systems over 25 years.
- Supports incident response, contingency planning, violation reporting and processing, vessel casualty investigation and analysis, vessel documentation, user fee collection, analysis of mission performance, and monitoring of program effectiveness.
- Will implement a nationwide system for tracking and exchanging information with identification-system-equipped vessels operating in or approaching U.S. waters to improve homeland security and enhance Coast Guard and DHS operational mission performance.
- Command, control, and communication system that improves mission execution in coastal zones; essential to meeting Search and Rescue program goals. Results in improved response to distress calls and better coordination and interoperability with other government agencies and first responders.
- Intended to replace the aging 41-foot utility boats and other large non-standard boats with assets more capable of meeting all of the Coast Guard's multi-mission operational requirements.
- A collection of systems or applications used to provide vessel logistics information management capacity to the Coast Guard.
- Customs and Border Protection (CBP) Automated Commercial Environment: Web-based import and export system that consolidates seven systems into one portal. It will provide advanced technology and information to decide, before a shipment reaches U.S. borders, what cargo should be targeted and what cargo should be expedited.
- Intranet-based enforcement and decision support tool that is the cornerstone for all CBP targeting efforts. CBP uses the system to improve the collection, use, analysis, and dissemination of information to target, identify, and prevent potential terrorists and terrorist weapons from entering the United States and to identify other violations and violators of U.S. law.
- Will build additional facilities to meet the needs of CBP's expansion of its Border Patrol agent staffing; the recent addition of more agents and technology into enforcement activities has exceeded existing facility capacity.
- Framework used by trusted traveler programs for registering enrollees and performing identification and validation using automated systems.
- Technologies that support the interdiction of weapons of mass destruction and effect, contraband, and illegal aliens being smuggled across the United States border, while having a minimal impact on the flow of legitimate commerce.
- Aims to integrate technology and tactical infrastructure into a comprehensive border security suite. This system will improve agents' ability to respond to illegal activity and help DHS manage, control, and secure the border.
- Phase I will deploy next-generation technology and integrated systems to scan maritime containers for radiation or other special nuclear material.
- Will help develop an integrated and coordinated air and marine force to detect, interdict, and prevent acts of terrorism arising from unlawful movement of people, illegal drugs, and other contraband toward or across the borders of the United States. The goal is to modernize and standardize the existing CBP air and marine fleets, which will require a specific number of primary and secondary air and marine locations and additional personnel to meet growing needs.
- Consolidated business case between CBP and ICE that will modernize subject record "watch list" processing, inspection support at ports of entry, and case management.
- Western Hemisphere Travel Initiative: Will fulfill the regulatory requirement to develop and implement a system to verify that U.S. and non-U.S. citizens present an authorized travel document denoting identity and citizenship when entering the United States.
- Provides a state-of-the-art, flexible, secure (through security certification and accreditation), classified, collateral, integrated, and centrally managed enterprise wide-area network.
- Includes the consolidated DHS IT infrastructure environments that support the cross-organizational missions of protecting the homeland from a myriad of threats. These IT infrastructure investments are critical to providing a foundation through which information can be disseminated and shared across all DHS components, including external customers and intelligence partners, in a secure, cost-effective, and efficient manner.
- Aims to achieve compliant financial management services and optimize financial management operations across the diverse systems cobbled together in 2003, when DHS was created from 22 agencies and over 200,000 people.
- Aims to improve and consolidate DHS's vast array of payroll and personnel systems. It will provide DHS with a common, flexible suite of human resource business systems.
- Will develop, procure, and deploy current and next-generation passive cargo portal units at the nation's borders.
- Will deliver an advanced imaging system that automatically detects high-density material and shielding that could be used to hide special nuclear material, highly enriched uranium, or weapons-grade plutonium. The system aims to improve throughput rates, providing more effective scanning of a higher portion of cargo at the nation's ports of entry.
- An integrated system to collect, analyze, and distribute status, alarm, alert, and spectral data from all radiation portal monitors and equipment deployed at the federal, state, local, tribal, and international levels.
- Federal Emergency Management Agency (FEMA) Consolidated Alert & Warning System: Provides the president, governors, mayors, and tribal leadership with the ability to speak to the American people in the event of a national emergency through an integrated, survivable, all-hazards public alert and warning system that leverages all available technologies and transmission paths. It will also provide situation awareness to the public and leadership at multiple levels of government in an emergency.
- Provides information exchange delivery mechanisms through a portal for disaster information, an information exchange backbone, and data interoperability standards.
- Established a technology-based, cost-effective process for updating, validating, and distributing flood risk data and digitized flood maps throughout the nation.
- Provides inspection staff and logistics at a moment's notice to any Presidentially declared disaster, with a state of readiness of 24 hours a day, 7 days a week, 365 days a year.
- Provides FEMA, emergency support function partners, and state decision makers with visibility of disaster relief assets and shipments to help ensure that the right assets are delivered in the right quantities to the right locations at the right time.
- Immigration and Customs Enforcement (ICE): Aims to satisfy three fundamental requirements: (1) house a growing population of illegal aliens, (2) provide appropriate conditions of confinement, and (3) maintain its facility infrastructure. These requirements must be met through a series of design and build actions that begin with establishing facility infrastructure, continue with establishing detention capacity, and culminate in building secure housing facilities.
- IT modernization and automation initiative that serves as the principal ICE program to enhance ICE's technology foundation, maximize workforce productivity, secure the IT environment, and improve information sharing across ICE and DHS.
- Detention and Removal Modernization: Will provide operations management and field personnel the technical tools necessary to apprehend, detain, and remove illegal aliens in a cost-effective manner.
- Web-based system that manages data on schools, program sponsors, foreign students, exchange visitors, and their dependents during their approved participation in the U.S. education system, so that only legitimate visitors enter the United States.
- Survivable network connecting DHS with the sectors that restore infrastructure (electricity, IT, and communications); states' homeland security advisors; and sector-specific agencies and resources for each critical infrastructure sector.
- Collects, catalogs, and maintains standardized and quantifiable risk-related infrastructure information to enable the execution of national risk management and to prioritize the data for use by DHS partners.
- Aims to provide specially designed telecommunications services to the national security and emergency preparedness user community during natural or man-made disasters, when conventional communications services are ineffective. These telecommunications services are used to coordinate response and recovery efforts and, if needed, to assist with facilitating the reconstitution of the government.
- Combines the capabilities of four existing investments to form a fully integrated IT system that will help fulfill the organization's mission to collect, analyze, and respond to cyber security threats and vulnerabilities pursuant to its mission and authorities.
- Program to collect, maintain, and share information, including biometric identifiers, on foreign nationals to determine whether an individual (1) should be prohibited from entering the United States; (2) can receive, extend, change, or adjust immigration status; (3) has overstayed or otherwise violated the terms of admission; (4) should be apprehended or detained for law enforcement action; or (5) needs special protection/attention (e.g., refugees). The vision of the US-VISIT Program is to deploy end-to-end management of data on foreign nationals covering their interactions with U.S. immigration and border management officials before they enter, when they enter, while they are in the U.S., and when they exit.
- Information technology investment with a mission of providing early detection and characterization of a biological attack on the United States.
- National Bio and Agro-Defense Facility: Infrastructure investment to support the Science and Technology Chemical and Biological Division program, which provides the technologies and systems needed to anticipate, deter, detect, mitigate, and recover from possible biological attacks on this nation's population, agriculture, or infrastructure. The program operates laboratories and biological detection systems and conducts research.
- Infrastructure investment to support the Science and Technology Chemical and Biological Division program, a key component in implementing the President's National Strategy for Homeland Security, by addressing the need for substantial research into relevant biological and medical sciences to better detect and mitigate the consequences of biological attacks and to conduct risk assessments. The program operates laboratories and biological detection systems and conducts research.
- Transportation Security Administration (TSA): Implements a national checked-baggage screening system to protect against criminal and terrorist threats while minimizing burdens on the transportation industry and the traveling public.
- An airborne communication system of systems (air-to-ground, ground-to-air, air-to-air, and intra-cabin) that will give air marshals and other law enforcement officers access to wireless communications and the ability to share information while in flight.
- System to manage the schedules of federal air marshals, given the flights available (~25,000 per day) and the complexities of last-minute changes due to flight cancellations.
- Hazmat Threat Assessment Program: Leverages existing intelligence data to perform threat assessments on commercial truck drivers who transport hazardous materials to determine their threat status to transportation security.
- Provides the resources required to deploy and maintain passenger screening and carry-on baggage screening equipment and processes at approximately 451 airports nationwide in order to minimize the risk of injury or death of people or damage to property due to hostile acts of terrorism.
- Will strengthen the security of the nation's transportation systems by creating, implementing, and operating a threat-based watch list matching capability for approximately 250 million domestic air carrier passengers per year.
- Will improve security by establishing a system-wide common secure biometric credential, used by all transportation modes, for personnel requiring unescorted physical and/or logical access to secure areas of the transportation system.
- Provides a common environment for hosting applications; integrated data infrastructure; content; and a collection of shared services.

| In fiscal year 2007, the Department of Homeland Security (DHS) obligated about $12 billion for acquisitions to support homeland security missions. DHS's major investments include Coast Guard ships and aircraft; border surveillance and screening equipment; nuclear detection equipment; and systems to track finances and human resources. In part to provide insight into the cost, schedule, and performance of these acquisitions, DHS established an investment review process in 2003. However, concerns have been raised about how well the process has been implemented--particularly for large investments. GAO was asked to (1) evaluate DHS's implementation of the investment review process, and (2) assess DHS's integration of the investment review and budget processes to ensure major investments fulfill mission needs.
GAO reviewed relevant documents, including those for 57 DHS major investments (investments with a value of at least $50 million)--48 of which required department-level review through the second quarter of fiscal year 2008--and interviewed DHS headquarters and component officials. While DHS's investment review process calls for executive decision making at key points in an investment's life cycle--including program authorization--the process has not provided the oversight needed to identify and address cost, schedule, and performance problems in its major investments. Poor implementation of the process is evidenced by the number of investments that did not adhere to the department's investment review policy--of DHS's 48 major investments requiring milestone and annual reviews, 45 were not assessed in accordance with this policy. At least 14 of these investments have reported cost growth, schedule slips, or performance shortfalls. Poor implementation is largely the result of DHS's failure to ensure that its Investment Review Board (IRB) and Joint Requirements Council (JRC)--the department's major acquisition decision-making bodies--effectively carried out their oversight responsibilities and had the resources to do so. Moreover, even when oversight boards met, DHS could not enforce IRB and JRC decisions because it did not track whether components took the actions called for in those decisions. In addition, many major investments lacked basic acquisition documents necessary to inform the investment review process, such as program baselines, and two of the nine components--which manage a total of eight major investments--do not have required component-level processes in place. DHS has begun several efforts to address these shortcomings and improve the investment review process, including issuing an interim directive. The investment review framework also integrates the budget process; however, budget decisions have been made in the absence of required oversight reviews and, as a result, DHS cannot ensure that annual funding decisions for its major investments make the best use of resources and address mission needs. GAO found that almost a third of DHS's major investments received funding without having validated mission needs and requirements--which confirm that a need is justified--and that two-thirds did not have required life-cycle cost estimates. At the same time, DHS has not conducted regular reviews of its investment portfolios--broad categories of investments that are linked by similar missions--to ensure effective performance and minimize unintended duplication of effort across investments. Without validated requirements, life-cycle cost estimates, and regular portfolio reviews, DHS cannot ensure that its investment decisions are appropriate and will ultimately address capability gaps. In July 2008, 15 of the 57 DHS major investments reviewed by GAO were designated by the Office of Management and Budget as poorly planned and by DHS as poorly performing. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Although the UN has undergone various cycles of reform since its creation in 1945, UN member states continue to have concerns about inefficient UN management operations. In September 2005, world leaders gathered at the UN World Summit in New York to discuss a variety of issues of importance to the UN, including management reforms. The outcome document from the summit called for the Secretary-General to submit proposals for implementing management reforms of the Secretariat. In October 2006, we reported that progress had been slow in five key UN management reform areas, with numerous reform proposals awaiting General Assembly review, and that many of the proposed or approved reforms lacked an implementation plan with time frames and cost estimates. Oversight is a key activity in governance that addresses whether organizations are carrying out their responsibilities and serves to detect and deter public corruption. Oversight functions include monitoring, evaluating, and reporting on the organization's performance; auditing of the organization's financial results and effectiveness of its internal controls; and holding senior management accountable for results. Oversight also includes investigation of allegations of fraud. The principal bodies responsible for conducting oversight in the three UN funds and programs we reviewed—the United Nations Development Program (UNDP), the United Nations Children's Fund (UNICEF), and the World Food Program (WFP)—and the three specialized agencies we reviewed—the Food and Agriculture Organization (FAO), the International Labor Organization (ILO), and the World Health Organization (WHO)—include member states in their capacity as members of the governing bodies, internal auditors, investigators, and evaluation offices.

The UN and other international organizations have become important sources of assistance to Burma's impoverished people, as the country—one of the world's poorest—has become increasingly isolated. This assistance includes programs aimed at mitigating the effects of prison conditions, forced labor, and conflicts in Burma's ethnic areas. The UN is also attempting to provide food to vulnerable populations, promote local economic development, improve health conditions, and strengthen the Burmese educational system. In recent years, UN entities have increased their funding for activities aimed at addressing Burma's problems. UN entities informed us that they spent about $218 million in Burma from 2002 through 2005. Nevertheless, Burma's military regime distanced itself from the international organizations and began adopting increasingly restrictive policies after the regime underwent an internal purge in 2004, according to UN officials. The regime published guidelines in 2006 to restrict the activities of the international organizations. UN officials consider provisions in these guidelines, which have yet to be fully implemented, to be unacceptable.

Since our October 2006 report, the progress of UN management reform efforts has varied in the five areas we reviewed—ethics, oversight, procurement, management operations of the Secretariat, and review of programs and activities (known as mandates). Various factors, such as member state disagreements on the priorities and importance of the remaining reform efforts, have slowed the pace of the UN's efforts to improve the management of the Secretariat, and a number of reforms cannot move forward until these factors are addressed. Since our October 2006 report, the UN has taken steps to improve ethics.
The ethics office has made substantial progress in increasing staffing and in enforcing a whistleblower protection policy. In addition, the UN has made some progress in developing ethics standards and in enforcing financial disclosure requirements. However, concerns have been raised that the success of the whistleblower protection policy is, in part, dependent on reforms in the UN internal justice system that are not projected to be completed until 2009. In addition, the policy is potentially limited by the ethics office's lack of jurisdiction over UN funds and programs. After we issued our November 2007 report, the Secretary-General issued a bulletin calling for system-wide ethics standards for the Secretariat, programs, and funds. The bulletin outlined the guidelines and responsibilities for UN ethics offices of programs and funds and also stated that, if a program or fund does not have a policy in place for protection against retaliation, staff members of that program or fund may request protection from retaliation under the Secretariat's policy.

Although the UN has improved its oversight capability, the Office of Internal Oversight Services (OIOS) has not yet achieved financial and operational independence. In June 2007, member states created an Independent Audit Advisory Committee (IAAC) and, since then, the UN has made some progress in making it operational. The committee's five members were elected in November 2007, and the committee is expected to be operational by January 2008. Since October 2006, some progress has been made in strengthening OIOS. Although OIOS has improved the capacity of individual divisions, including internal audit and investigations, UN funding arrangements continue to constrain its ability to audit high-risk areas, and member states have not yet agreed on whether to grant OIOS financial and operational independence.

The UN has taken steps to improve its procurement process, but some reform issues have not moved forward since October 2006. Some progress has been made in strengthening procedures for UN procurement staff and suppliers, developing a comprehensive training program for procurement staff, and developing a risk management framework. However, the UN has made little or no progress in establishing an independent bid protest system and creating a lead agency concept, whereby specialist UN organizations would handle certain procurements in order to enhance division of labor, reduce duplication, and reduce costs. In addition, since our October 2006 report, the reorganization of the Department of Peacekeeping Operations, along with its related procurement activities, may affect the UN's overall procurement reform efforts, such as establishing lines of accountability and delegation of authority for the Departments of Management and Peacekeeping Operations.

Since our October 2006 report, the UN has improved some of the management operations of the Secretariat, but many reform proposals have not moved forward. Some progress has been made on selected issues involving human resources and information technology. In contrast, little or no progress has been made in reforming the UN's internal justice system, budgetary and financial management functions, and the alternative delivery of certain services, such as internal printing and publishing. Despite some limited initial actions, the UN's review of all UN mandates has not advanced, due in part to a lack of support by many member states.
Although some progress was made in Phase I of the review, which ended in December 2006, little or no progress has been made in Phase II because member states continue to disagree on the nature and scope of the review and lack the capacity to carry it out. As a result, the prioritization of this particular UN management reform effort has decreased, according to UN and State officials. In September 2007, member states decided to continue reviewing mandates in the 62nd session of the General Assembly, but they did not determine how the review would proceed.

Various factors have slowed the pace of UN management reform efforts, and some reforms cannot move forward until these factors are addressed. Key factors include the following:

- Member states disagree on UN management reform efforts. Delegates from 15 of 17 member states that we met with, representing Africa, Asia, Europe, Latin America, the Middle East, and North America, told us that the number one challenge to continued progress on management reform efforts is member state disagreements on the priorities and importance of the remaining reform efforts.
- Some management reform proposals lack comprehensive implementation plans, including time frames, completion dates, and cost and savings estimates for completing specific management reforms. In addition, the Secretariat has not submitted most of the approximately 20 cost-benefit analyses and other assessments to the General Assembly, as had been planned by March 2007.
- Administrative policies and procedures continue to complicate the process of implementing certain complex human resource initiatives. These policies and procedures include proposals to outsource certain administrative services, such as payroll processes, staff benefit administration, and information technology support.
- Competing UN priorities limit the capacity of General Assembly members to address management reform issues. For example, the reorganization of the Department of Peacekeeping Operations absorbed much of the General Assembly's attention throughout the spring 2007 session; as a result, progress on some issues was delayed, while others were not taken into consideration by the General Assembly.

To encourage UN member states to continue to pursue the reform agenda of the 2005 World Summit, we recommended in the report we issued on November 14, 2007, that, as management reforms are implemented over time, the Secretary of State and the U.S. Permanent Representative to the UN include in State's annual U.S. Participation in the United Nations report an assessment of the effectiveness of the reforms. State generally endorsed our main findings and conclusions and noted that our assessment of UN progress on management reform efforts was accurate and balanced. State also agreed fully with the need to keep Congress informed of the effectiveness of management reforms, adding that the department will continue to monitor and inform Congress, as we recommended. State did not agree with our statement that successful whistleblower protections are dependent, in part, on the reform of the UN's internal justice system. During our review, we found that UN and nongovernmental organization staff had concerns about weaknesses in the UN internal justice system and the potential impact of these weaknesses on the implementation of a successful whistleblower protection policy. We agree with these concerns.
Although the six UN internal audit offices we reviewed have made progress in implementing international auditing standards, they have not fully implemented key components of the standards. In addition, while the six UN evaluation offices we reviewed are working toward implementing UN evaluation standards, they have not fully implemented them. Moreover, the governing bodies responsible for oversight of the six UN organizations we reviewed lack full access to internal audit reports, and most lack direct information from the audit offices about the sufficiency of their resources and capacity to conduct their work. In addition, most UN organizations do not have an independent audit committee, as suggested by international best practices. Most of the six UN organizations we examined are in various stages of adopting ethics policies, such as requiring conflict of interest and financial disclosure statements and adopting whistleblower policies to protect those who reveal wrongdoing. Ethics policies could strengthen oversight by helping to ensure more accountability and transparency within the organizations. Some internal oversight units rely on their staff to comply with a general declaration that all UN employees sign when they are employed by the organization. We earlier reported that UNDP and WFP rely on their oversight staff to self-report any conflicts of interest, though WFP's investigative unit was developing a conflict of interest policy to cover investigations staff in fall 2006. None of the six organizations we examined require their internal oversight staff to disclose their financial interests, a practice that could help to ensure that employees are free from conflicts of interest. Five of the six organizations we studied have established whistleblower protection policies to protect those who reveal wrongdoing within their respective organizations: UNICEF, FAO, WFP, WHO, and ILO have whistleblower protection policies in place, and UNDP was developing such a policy. We reported that all six audit offices are developing and implementing risk-based work plans and that five of the six internal audit offices have contributed to their respective organizations' development of a risk management framework. However, the organizations' senior management has not completed an organizationwide risk management framework that would assist in guiding the audit offices' work plans. Moreover, only three of the six audit offices told us that they had sufficient resources to achieve their audit work plans, which could include high-risk areas. For example, WFP's audit chief informed us that the audit office did not have sufficient resources to conduct its planned work for 2007 and, as a result, has had to defer audits to future years. We also reported that a number of internal oversight units do not have professional investigators and rely on other parties who may not be qualified, such as auditors, to determine whether wrongdoing has occurred. As a result of the limited capacity of organizations to conduct investigations, many internal oversight units have backlogs of investigative cases and are unable to complete their planned audits. A number of the organizations we examined indicated that they were working on increasing their investigative capacity in order to meet new organizationwide initiatives. For example, UNDP senior officials reported that they needed additional investigative staff because the number of cases had increased due to the establishment of a fraud hotline.
We reported that five of the six evaluation offices we reviewed stated that they lack sufficient resources and staff with expertise to manage and conduct evaluations—conditions that have impacted their ability to conduct high-quality and strategically important evaluations. For example, FAO's evaluation officials informed us that, because FAO does not have sufficient resources to manage and conduct evaluations to reasonably address management's concerns, it relies heavily on the use of outside consultants for expertise. The governing bodies of the six organizations we examined lack full access to internal audit reports, which would increase transparency and their awareness of the adequacy and effectiveness of the organizations' system of internal controls. Currently, member states are not provided with the internal audit office's reports; however, member states, including the United States, have stated that access to audit reports would help them exercise their oversight responsibilities as members of the governing body. International best practices suggest that oversight could be strengthened by establishing an independent audit committee composed of members external to the management of the organization and reporting to the governing body on the effectiveness of the audit office and on the adequacy of its resources. However, the audit committees at four of the six UN organizations we examined are not in line with international best practices, and one of the entities does not have an audit committee. To improve oversight in UN organizations, we recommended that the Secretary of State direct the U.S. missions to work with member states to make internal audit reports available to the governing bodies, to provide further insight into the operations of the UN organizations and identify critical systemic weaknesses, and to establish independent audit committees that are accountable to their governing bodies, where such committees do not currently exist. While State, FAO, UNDP, WFP, and WHO generally agreed with our recommendations, ILO and UNICEF expressed concerns about implementing them. Specifically, ILO expressed reservations about making internal audit reports available to governing bodies, while UNICEF expressed concerns about establishing independent audit committees.

We found that the military regime that rules Burma has blocked or significantly impeded UN and other international organizations' efforts to address human rights concerns and to help people living in areas affected by ethnic conflict. The regime has also, to a lesser degree, impeded UN food, development, and health programs. Nonetheless, several UN and other international organization officials told us they are still able to achieve meaningful results in their efforts to mitigate some of Burma's humanitarian, health, and development problems. Burma's military regime has blocked international efforts to monitor prison conditions and, until recently, forced labor. The regime halted ICRC's prison visit program by insisting that pro-regime staff observe ICRC meetings with prisoners. Similarly, for 4 years the regime frustrated ILO efforts to conclude an agreement establishing an independent complaints process for forced labor victims; it eventually signed an agreement with ILO in February 2007 to establish a complaints mechanism for victims of forced labor. The regime has also impeded international efforts to address the needs of populations in conflict areas by restricting international access to those areas.
For example, it has limited UNHCR efforts along the Thai border, while halting or impeding efforts in conflict areas by ICRC and other organizations. The regime has also impeded UN food, development, and health programs, although programs that address health and development issues in Burma have generally been less constrained by the regime’s restrictions than the ILO and ICRC human rights efforts. Delays in obtaining transport permits for food commodities from the current regime have hindered WFP efforts to deliver food to vulnerable populations. The regime’s time-consuming travel procedures have also impeded the ability of international staff to move freely within the country to ensure the timely provision of assistance. Officials of eight of the nine UN entities that provide humanitarian, health, and development assistance in Burma told us that the regime requires at least 3 to 4 weeks’ advance notice to authorize travel, which impedes the planning and monitoring of projects through field visits and reduces the scope of their activities. UN officials told us that the regime has also impeded their ability to address the needs of the Burmese population, conduct strategic planning, and implement programs in Burma by restricting their ability to conduct their own surveys and freely share the data they gather. Despite these restrictions, many of the international officials we spoke with told us that they are still able to achieve meaningful results in their efforts to mitigate some of Burma’s many problems. For example, UN officials working in the health sector told us that the Burmese regime had been increasingly cooperative in efforts to address HIV/AIDS prevalence and recently worked with several UN entities to develop a multisectoral plan that targets all victims of the disease in Burma. Several officials also emphasized that restrictions have had the least effect on organizations that tend to work closely with the regime. For example, an FAO official told us that FAO generally has good relations with the technical ministries it cooperates with due to its close work with these ministries in providing technical assistance and supporting knowledge transfer. Our report on Burma included no recommendations. We obtained comments on a draft of this report from the Secretary of State and cognizant UN and ICRC officials. State commented that the draft report was thorough, accurate, and balanced. While the UN Country Team commented that the UN and its partners had in the past decade achieved “a significant opening of humanitarian space on the ground,” it did not dispute our specific findings about the regime’s restrictions over the past 3 years. In response to recent protests in Burma, the UN Country Team noted the urgent necessity to address Burma’s deteriorating humanitarian situation and appealed for an improved operating environment for humanitarian organizations working there. The UN is increasingly called upon to undertake important and complex activities worldwide, including responding to conflict and humanitarian crises. As the UN’s role and budget expand, so do attendant concerns about weaknesses in accountability, transparency, and oversight. The UN Secretariat and UN-affiliated organizations face internal and external challenges in undertaking, administering, and overseeing their respective mission-related activities. UN organizations have worked to implement needed internal reforms to improve ethics, oversight, procurement, and management operations with varied degrees of progress. 
For example, the UN has worked to improve oversight by establishing an IAAC, but funding arrangements within the Secretariat's internal audit office continue to constrain the office's operational independence and its ability to audit high-risk areas. In addition, UN organizations face external challenges in operating environments such as Burma, where the military regime has blocked or impeded some UN activities aimed at improving human rights. Addressing these challenges will require concerted and sustained actions by member states and UN organizations' management, staff, and oversight mechanisms.

Mr. Chairman, this concludes my statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. Should you have any questions about this testimony, please contact Thomas Melito at (202) 512-9601 or [email protected]. Major contributors to this testimony were Cheryl Goodman, Zina Merritt, and Phillip Thomas (Assistant Directors); Debbie J. Chung; Lyric Clark; Andrea Miller; George Taylor; and Pierre Toureille.

This testimony is based on three reports we issued during 2007—United Nations: Progress on Management Reform Efforts Has Varied (Nov. 2007); United Nations: Oversight and Accountability Could Be Strengthened by Further Instituting International Best Practices (June 2007); and International Organizations: Assistance Programs Constrained in Burma (Apr. 2007). The objectives, scope, and methodology of each of these reports follow. For the management reform report, we assessed progress in ethics, oversight, procurement, management operations of the UN Secretariat, and mandate review. To assess the progress of specific UN management reform efforts within each of these five areas, we developed the following three categories: little or no progress, some progress, and substantial progress. However, we did not assign an overall level of progress to each of the five reform areas because the various initiatives within each area are highly diverse. During our review, we determined which category of progress to assign to each reform effort based on documents we collected and reviewed and discussions we had with State Department, UN, and other officials. After we had made our initial assessments of progress, three other GAO staff members not involved in this review used the evidence and the categories to make their own assessments independently of each other. These staff members then met with each other to reconcile any differences in their initial assessments. Finally, they met with us and confirmed that we were all in agreement on our assessments. To address our objectives, we reviewed documents proposing UN management reforms and interviewed officials from several UN departments in New York. We reviewed reports and bulletins published by the UN General Assembly and Secretariat, relevant UN resolutions, and related budget documents. The majority of the cost estimates for the proposed reform initiatives are preliminary, and detailed cost estimates are being developed; therefore, we did not analyze the assumptions underlying these estimates to determine whether they are reasonable and reliable. We met with officials from the General Assembly Office of the President, the Office of the Deputy Secretary-General, the Department of Management, and the Office of Internal Oversight Services (OIOS).
We also met with representatives from 17 of 192 member states from various geographic regions to obtain a balance of views on the most critical challenges to reforming UN management. We discussed the status of UN management reforms with officials from the Department of State in Washington, D.C., and the United Nations in New York. We performed our work on UN management reforms from March to November 2007 in accordance with generally accepted government auditing standards. For this report, we selected 6 UN organizations from among the 10 funds and programs and 15 specialized agencies that comprise the universe of all UN funds and programs and specialized agencies, including the Food and Agriculture Organization, International Labor Organization, United Nations Children’s Fund, United Nations Development Program, World Food Program, and the World Health Organization. On the basis of their budgets for biennium 2004-2005, we selected the three largest funds and programs and three of the largest specialized agencies. Therefore, our results cannot be generalized to the full universe of all funds and programs and specialized agencies and may not represent the practices of the smaller UN organizations. To examine the extent to which the six organizations’ internal audit offices have implemented professional standards for performing audits, we reviewed relevant standards issued by the Institute of Internal Auditors. To conduct our review, we selected key audit standards that were based on previous GAO work. In addition, we examined documents and conducted interviews with various officials, including officials of the internal audit offices, finance division, human resources, audit committees, legal offices, and external auditors. Regarding investigations, the six UN organizations we examined have adopted the UN Uniform Guidelines for Investigations, which are intended to be used as guidance in the conduct of investigations in conjunction with each organization’s rules and regulations. To examine the extent to which the six organizations’ evaluation offices have implemented UN evaluation norms and standards, we reviewed the relevant standards and norms issued by the UN Evaluation Group. We examined documents from the six organizations, including reports prepared by the organizations’ evaluation offices and external peer reviewers, and annual reports of the evaluation offices. In addition, we conducted interviews with various officials of the evaluation offices. To examine the extent to which governing bodies are provided information about the results of UN oversight practices, we reviewed documents from the six organizations, including reports prepared by the organizations’ external auditors, the oversight unit chiefs, the governing bodies, and the audit committees, where applicable. We also examined the charters of the audit offices and the audit committees, where applicable. In addition, we interviewed selected representatives from UN member states, including representatives from the U.S. missions to the UN in Geneva, Rome, and New York and U.S. representatives to the governing bodies of the UN organizations we examined. In Geneva, we spoke with members of the Geneva Group, including representatives from the United Kingdom, Canada, the Netherlands, Australia, and Germany. 
In Rome, we spoke with additional members of the Geneva Group, including representatives from the United Kingdom, Spain, Canada, Sweden, South Korea, Germany, Switzerland, Finland, Italy, France, Russia, New Zealand, Japan, and the Netherlands. In addition, we met with representatives of the Group of 77 from Zimbabwe, Madagascar, Iraq, Dominican Republic, Bangladesh, Brazil, Cameroon, China, Egypt, Kuwait, Nicaragua, Peru, the Philippines, Sri Lanka, and Thailand. In New York, we spoke with mission representatives to the UN from Belgium, Australia, the United Kingdom, Canada, Japan, and Pakistan. Furthermore, to address our objectives, we spoke with senior officials from the Departments of State and Labor in Washington, D.C., and senior officials from State, Labor, Health and Human Services, and the U.S. Agency for International Development at the U.S. missions to the UN in Geneva, Rome, and New York. At these locations, we met with management and staff responsible for governance and oversight at FAO, ILO, UNDP, UNICEF, WFP, and WHO. We conducted our work on oversight and accountability of UN organizations from June 2006 through March 2007 in accordance with generally accepted government auditing standards. For this report, we examined documents relating to programs conducted in Burma by the UN Country Team (which includes ten UN entities located in that country) and the restrictions imposed on them by the Burmese regime. In New York and Washington, D.C., we met with officials of the U.S. Departments of State and the Treasury, the UN, the World Bank, and the International Monetary Fund. We also met with the Burmese UN mission in New York. In Rangoon, Burma, we met with officials of UN entities, the International Committee of the Red Cross, and several international nongovernmental organizations who asked that we not identify their organizations; and officials of the U.S. embassy and of the leading democratic organization in Burma. In and near Rangoon and Bassein, Burma, we met with recipients of UN assistance. We also traveled to Nay Pyi Taw (Burma's newly built capital) to meet with officials from the Burmese Ministry of National Planning and Economic Development and the Ministry of Health. In Bangkok, Thailand, we met with officials from three additional UN entities that operate programs in Burma from Thailand, as well as with representatives of other donor nations. We conducted our work on Burma from May 2006 to February 2007 in accordance with generally accepted government auditing standards.

| Longstanding problems in United Nations (UN) management underscore the pressing need to reform and modernize the United Nations in areas ranging from management, oversight, and accountability to operational activities in specific countries. The United States has strongly advocated the reform of UN management practices and has also been critical of the restrictions Burma's military regime has imposed on many international organizations in Burma over the past 3 years.
This testimony, based on recent GAO reports, discusses (1) management reform efforts at the UN Secretariat since 2006; (2) oversight and accountability in selected UN organizations; and (3) UN and other international organizations' activities in Burma. GAO's report on UN management reform efforts notes that (1) progress has varied in the five areas GAO examined--ethics, oversight, procurement, management operations of the Secretariat, and review of programs and activities (mandates)--and (2) various factors, such as disagreements among member states, have slowed the pace of progress. The UN ethics office has taken steps to improve organizational ethics, including implementing a whistleblower protection policy, but GAO identified issues that may limit the impact of the policy. The UN has taken steps to improve oversight, including establishing an Independent Audit Advisory Committee. However, UN funding arrangements continue to constrain the independence of the Secretariat's internal audit office and its ability to audit high-risk areas. The UN has taken steps to improve certain procurement practices, but has not implemented an independent bid protest system or approved a lead agency concept, which could improve procurement services. The UN has taken steps to improve certain management operations of the Secretariat, but has made little or no progress in others. Despite some limited initial actions, the UN's review of mandates has not advanced, due in part to a lack of support by many member states. Finally, the pace of UN management reforms has been slowed by member states' disagreements on reform efforts, lack of comprehensive implementation plans, administrative issues that complicate certain internal processes, and competing UN priorities. GAO's report on oversight and accountability of selected UN organizations notes that, although the six UN internal audit offices GAO reviewed have made progress in implementing international auditing standards, they have not fully implemented key components of the standards. None of these six organizations require their internal oversight staff to disclose their financial interests. However, GAO found that five of the six organizations have made efforts to increase accountability by establishing whistleblower protection policies and one was developing such a policy. GAO also reported that while the six UN evaluation offices GAO reviewed are working toward implementation of UN evaluation standards, they have not fully implemented them. Finally, GAO reported that the governing bodies responsible for oversight of the six organizations lack full access to internal audit reports. GAO's report on Burma notes that Burma's military regime has blocked or significantly impeded UN and other international organizations' efforts to address human rights concerns and to help people living in areas affected by ethnic conflict. The regime frustrated international organizations' efforts to monitor forced labor for years before signing an agreement in early 2007; restricted their efforts to assist populations living in conflict areas; and blocked their efforts to monitor prison conditions and conflict situations. The regime has, to a lesser degree, impeded UN food, development, and health programs. However, several UN and other international organization officials told GAO they are still able to achieve meaningful results in their efforts to mitigate some of Burma's humanitarian, health, and development problems. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
NASA and its international partners—Japan, Canada, the European Space Agency (ESA), and Russia—are building the ISS as a permanently orbiting laboratory to conduct materials and life sciences research under nearly weightless conditions. Each partner is providing station hardware and crew members and is expected to share operating costs and use of the station. The NASA Space Station Program Manager is responsible for the cost, schedule, and technical performance of the total program. The Boeing Corporation, the station’s prime contractor, is responsible for ISS integration and assembly. As of June 30, 1997, the prime contractor reported that over 200,000 pounds of its station hardware was being built or had been completed. According to NASA, by the end of fiscal year 1998, hardware for the first six flights will be at Kennedy Space Center for launch processing. In our July 1996 report and subsequent testimony, we noted that the cost and schedule performance of the space station’s prime contractor had deteriorated and that the station’s near-term funding included only limited financial reserves. We also identified an emerging risk to the program: the indications of problems in the Russian government’s ability to meet its commitment to furnish a Service Module providing ISS power, control, and habitation capability. For several years, the space station program has been subject to a $2.1 billion annual funding limitation and a $17.4 billion overall funding limitation through the completion of assembly, which until recently had been scheduled for June 2002. According to NASA, these funding limitations, or caps, came out of the 1993 station redesign. Previous redesigns had been largely financially driven and the caps were intended to stabilize the design and ensure that it could be pursued. However, the caps are not legislatively mandated, although references to them in congressional proceedings and reports indicate that NASA was expected to build the space station within these limits. When the caps were first imposed, the program had about $3 billion in financial reserves. In our July 1996 report, we concluded that, if program costs continued to increase, threats to financial reserves worsened, and the Russian government failed to meet its commitment in a timely manner, NASA would either have to exceed its funding limitation or defer or rephase activities, which could delay the space station’s schedule and would likely increase its overall cost. In June 1997 testimony, we said that, if further cost and schedule problems materialized, a congressional review of the program would be needed to determine the future scope and cost level for a station program that merits continued U.S. government support. Over the past several months, NASA has acknowledged that the potential for cost growth in the program has increased. As a partner, Russia committed to making a variety of contributions to the ISS. These contributions include (1) the Service Module to provide crew habitation during assembly; (2) the Science Power Platform to help maintain the station’s orientation; (3) launch services to reboost and resupply the station, including the provision of propellant; and (4) Soyuz spacecraft to provide crew return capability during station assembly. In late 1995, NASA became concerned about Russia’s ability to provide steady and adequate funding for its commitments. 
According to the NASA Administrator and station program officials, the Russian government said repeatedly that the problem would be resolved, despite mounting evidence to the contrary. Finally, in the fall of 1996, Russia formally notified NASA that funding difficulties would delay the completion of the Service Module, which is a critical component for early assembly. Subsequently, NASA designed a three-step recovery plan. Step 1 focuses on adjusting the station schedule for an 8-month delay in the availability of the Service Module and developing temporary essential capabilities for the station in case the Service Module is further delayed by up to 1 year. Major activities in this phase include delaying the launch of station components that are to precede the Service Module into orbit and building an Interim Control Module to temporarily replace the Service Module's propulsion capability. Step 1 is underway; the new or modified hardware being developed will be completed even if Russia maintains the Service Module's revised schedule and delivers it on time. NASA officials told us that Russia has resumed its financial commitment, the Service Module assembly has restarted, and significant progress is being made. Step 2 is NASA's contingency plan for dealing with any additional delays or the Russian government's failure to eventually deliver the Service Module. This phase could result in permanently replacing the Service Module's power, control, and habitation capabilities. NASA will decide later this fall on whether to begin step 2. Under step 3 of NASA's plan, the United States and other international partners would have to pick up the remaining responsibilities the Russian government would have had, such as station resupply and reboost missions and crew rescue during assembly. A decision on step 3 is planned for sometime next year, at the earliest. In addition to their effects on space station development activities, these recovery plan steps place additional requirements on the space shuttle program. Under the plan, the space shuttle may be needed to launch and deliver the Interim Control Module and perform station resupply missions now expected to be done by Russia. Although the full impact of the recovery plan on the space shuttle program is not yet known, the plan has already resulted in the addition of two shuttle flights during the station's assembly.

The prime contractor's cost and schedule performance on the space station, which showed signs of deterioration last year, has continued to decline virtually unabated. Since April 1996, the cost overrun has quadrupled, and the schedule slippage has increased by more than 50 percent. Figure 1 shows the cost and schedule variances from January 1995 to July 1997. Cost variances are the differences between actual costs to complete specific work and the amounts budgeted for that work. Schedule variances are the dollar values of the differences between the budgeted cost of work planned and work completed. Cost and schedule variances are not additive, but negative schedule variances can become cost variances, since additional work, in the form of overtime, is often required to get back on schedule.

Figure 1: Prime contract cost and schedule variances (dollars in millions)

Date        1/95   4/95   7/95   10/95  1/96   4/96   7/96   10/96  1/97   4/97   7/97
Cost         27    -62    -16    -19    -48    -89    -123   -163   -223   -291   -355
Schedule    -43    -77    -45    -46    -55    -88    -105   -107   -118   -129   -135

Between January 1995 and July 1997, the prime contract moved from a cost underrun of $27 million to a cost overrun of $355 million.
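The variance definitions above correspond to the standard earned-value quantities: budgeted cost of work scheduled (BCWS), budgeted cost of work performed (BCWP), and actual cost of work performed (ACWP). A minimal sketch follows, using the sign convention from figure 1 (negative means overrun or behind schedule); the earned-value framing and the toy figures are our assumptions, not NASA or Boeing data.

```python
# Illustrative earned-value variance calculations matching the variance
# definitions in the text. The toy figures below are assumptions for
# demonstration only, not NASA or Boeing data.

def cost_variance(bcwp: float, acwp: float) -> float:
    """Budgeted cost of work performed minus its actual cost;
    negative values indicate a cost overrun."""
    return bcwp - acwp

def schedule_variance(bcwp: float, bcws: float) -> float:
    """Budgeted cost of work performed minus budgeted cost of work
    scheduled; negative values indicate work behind schedule."""
    return bcwp - bcws

# Toy quarter, dollars in millions.
bcws, bcwp, acwp = 500.0, 450.0, 480.0
cv = cost_variance(bcwp, acwp)      # -30.0: completed work cost $30M more than budgeted
sv = schedule_variance(bcwp, bcws)  # -50.0: $50M of planned work not yet performed

# As the text notes, the two variances are not additive, but a negative
# schedule variance often becomes a cost variance later, because overtime
# is needed to catch up.
print(f"Cost variance: {cv} million; schedule variance: {sv} million")
```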
During that same period, the schedule slippage increased from a value of $43 million to $135 million. So far, the prime contractor has not been able to stop or significantly reverse the continuing decline. In July 1996, independent estimates of the space station’s prime contract cost overrun at completion ranged from $240 million to $372 million. Since then, these estimates have steadily increased, and by July 1997 they ranged from $514 million to $610 million. According to program officials, some financial reserves will be used to help cover the currently projected overrun. Delays in releasing engineering drawings, late delivery of parts, rework, subcontractor problems, and mistakes have contributed to cost overruns. NASA’s concern about performance problems under the prime contract is evidenced by its recent incentive and award fee actions. In March 1997, NASA directed Boeing to begin adjusting its biweekly incentive fee accruals and billings based on a higher cost estimate at completion than Boeing was officially reporting. On the basis of an internal review, Boeing subsequently increased its estimate of cost overrun at completion from $278 million to $600 million. The increase in Boeing’s estimate potentially reduces its incentive award by about $48 million over the remainder of the contract period. Boeing was also eligible for an award fee of nearly $34 million for the 6-month period ending in March 1997. However, citing significant problems in program planning, cost estimating, and hardware manufacturing, NASA concluded that Boeing’s performance did not warrant an award fee. NASA also directed Boeing to deduct almost $10 million from its next bill to refund the provisional award fee already paid during the period. Boeing is implementing a corrective action plan for each identified weakness and has outlined a number of actions to improve the performance of the entire contractor team, including changing personnel, recruiting additional software engineers and managers, and committing funds to construct a software integration test facility. Boeing also presented a cost control strategy to NASA in July 1997. According to NASA officials, the strategy includes organizational streamlining and transferring some roles to NASA. Station officials assessed Boeing’s efforts to improve its performance as part of the midpoint review for the current evaluation period. They concluded that, while there was some improvement, it was insufficient to permit resumption of provisional award fee payments. When NASA redesigned the space station in 1993 and brought Russia into the program as a partner, the program had approximately $3 billion in financial reserves to cover development contingencies. Since then, the program reserves have been significantly depleted. In June 1997, the financial reserves available to the program were down to about $2.2 billion. NASA estimated that, by the end of fiscal year 1997, the remaining uncommitted reserves could be less than $1 billion. Financial reserves have been used to fund additional requirements, overruns, and other authorized changes. By June 1997, a station program analysis indicated that fiscal year 1997 reserves might not be sufficient to cover all known threats. More recently, station officials have estimated that a small reserve surplus is possible in fiscal year 1997, but concerns are growing regarding the adequacy of fiscal year 1998 reserves. 
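A minimal sketch of the reserve drawdown just described, using only the figures the report states; the end-of-fiscal-year 1997 value is NASA's "could be less than $1 billion" estimate, so it is treated here as an upper bound.

```python
# Reserve levels (in $ billions) at the three points the report gives.
reserve_history = [
    ("1993 redesign", 3.0),
    ("June 1997", 2.2),
    ("end of FY 1997, uncommitted (upper bound)", 1.0),
]
for label, level in reserve_history:
    print(f"{label:>45}: ${level:.1f} billion")

original = reserve_history[0][1]
remaining_bound = reserve_history[-1][1]
share_consumed = 1 - remaining_bound / original
print(f"At least {share_consumed:.0%} of the original reserves committed or spent, "
      "with years of assembly still ahead")   # at least 67%
```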
NASA has already identified threats to financial reserves in future years that, if realized, would outstrip the remaining reserves. For example, program reserves have been identified to cover additional cost overruns; crew rescue vehicle acquisition; hardware costs, in the event that ongoing negotiations with partners are unsuccessful; and additional authorized technical changes. Thus, with up to 6 years remaining until on-orbit assembly of the station is completed, NASA has already identified actual and potential resource demands that exceed the station’s remaining financial reserves. Unless these demands lessen and are not replaced by other demands of equal or greater value, or NASA is able to find offsets and efficiencies of sufficient value to replenish the program’s reserves, the space station will require additional funding. NASA has been able to consistently report compliance with funding limitations and avoid exceeding its financial reserves, despite significant programmatic changes and impacts that have increased station costs. To enable it to do so, NASA has implemented or initiated a variety of actions, including those summarized below: The space station program is negotiating with ESA, Canada, and Brazil to provide station hardware. Under proposed offset arrangements, the ISS partners—ESA and Canada—would build hardware associated with the U.S. commitment in return for launch services or other considerations. Under a cooperative arrangement, Brazil would receive a small allocation of the station’s research capacity in return for any U.S. equipment it would agree to build. NASA estimates that $116 million in U.S. station development costs could be saved through these arrangements. Space station officials have scheduled a threat of $100 million against the program’s financial reserves in case the negotiations are unsuccessful. However, according to program officials, most of the negotiations are nearly completed. NASA dropped the centrifuge from the station budget and opened negotiations with the Japanese government to provide it. Also, the space station’s content at the assembly completion milestone was revised to exclude the centrifuge. This change enabled NASA to maintain the then-current June 2002 assembly completion milestone, even though the centrifuge and related equipment would not be put on the station until after that date. NASA transferred $462 million from its science funding to the space station development funding in fiscal years 1996 through 1998. NASA has scheduled the payback of $350 million—$112 million less than the amount borrowed—through fiscal year 2002. NASA is also planning to transfer another $70 million in fiscal year 1999. All of these funding transfers are within the $17.4 billion funding limitation through assembly completion. NASA transferred $200 million in fiscal year 1997 funding to the station program from other NASA programs to cover costs incurred due to Russian manufacturing delays. Congressional action is pending on the transfer of another $100 million in fiscal year 1998. These funds will be accounted for outside the portion of the program subject to the funding limitations. NASA uses actual and planned reductions in its fiscal year funding requirements to help restore and preserve its actual and prospective financial reserves. Typically, these actions involve rephasing or deferring activities to future fiscal years. 
For example, the agency’s current reserve posture includes actions such as moving $20 million in spares procurement from fiscal years 1997 to 1999 and $26 million in nonprime efforts from fiscal year 1997 to various future fiscal years. The cost impact of the schedule delay associated with step 1 of the Russian recovery plan is not yet fully understood. During congressional testimony in June 1997, the NASA Administrator stated that NASA was assessing the cost effects of a later assembly completion date. Any delay in completing the space station assembly would increase the program’s costs through the completion of assembly because some costs would continue to accumulate over a longer period. When NASA redesigned the station in 1993, it estimated that Russia’s inclusion as a partner would reduce program costs by $1.6 billion because the station’s assembly would be completed by June 2002—15 months earlier than previously scheduled. NASA has recently acknowledged that the completion of the station’s assembly will slip into 2003, but it has not yet scheduled the revised assembly completion milestone. If the scope and capability of the program under the June 2002 assembly completion milestone remain the same, the new milestone date will be set for the latter part of 2003. Consequently, most, if not all, of the reduced costs claimed by accelerating the schedule would be lost. NASA estimated the additional hardware costs associated with step 1 of the Russian recovery plan at $250 million. When the estimate was made, the specific costs of many of the components of the plan were not known. For example, NASA’s initial estimate includes $100 million for the Interim Control Module, but NASA now estimates that the module will cost $113 million. The total of $300 million in additional funding for the space station program in fiscal years 1997 and 1998 includes financial reserves. The most recent cost estimate for the Interim Control Module already indicates threats to those reserves. NASA plans to use the extra time created by the schedule slip to perform integration testing of early assembly flight hardware at the Kennedy Space Center. As of June 1997, the cost of this testing had not been fully estimated. However, NASA is currently budgeting $15 million in reserves for the effort. If NASA initiates further steps in the recovery plan, new or refined cost estimates would be required. Step 2 provides for the development of a permanent propulsion/reboost capability and modifications to the U.S. Laboratory to provide habitation. According to the NASA Administrator, the effort under this step could be funded incrementally, thus limiting the up-front commitment. NASA’s initial cost estimate for step 2 is $750 million. Step 3 of the plan would result in the greatest overall cost impact on NASA because it assumes that Russia would no longer be a partner and that NASA, along with its remaining partners, would have to provide the services now expected from Russia. For its share of the mission resupply role, NASA would have to use the space shuttle or purchase those services from Russia or others. In addition, the United States would have to purchase Soyuz vehicles from Russia or accelerate the development of the six-person permanent crew return vehicle. NASA has not officially estimated the cost of step 3, but it clearly would be very expensive: the potential cost of shuttle launches or purchased launch services alone over the station’s 10-year operational life would be in the billions of dollars.
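A short tally of the recovery plan figures cited above follows. The $50 million step 1 reserve is an inference (the $300 million in added funding less the $250 million hardware estimate), not a number the report states directly.

```python
# Recovery plan cost figures from the report (in $ millions).
step1_added_funding = 300        # FY 1997-98 additions, includes reserves
step1_hardware_estimate = 250    # NASA's initial step 1 hardware estimate
icm_initial, icm_current = 100, 113   # Interim Control Module estimates
ksc_testing_reserve = 15         # budgeted for added integration testing
step2_initial_estimate = 750     # permanent propulsion/reboost plus habitation

implied_step1_reserve = step1_added_funding - step1_hardware_estimate  # 50 (inferred)
icm_growth = icm_current - icm_initial                                 # 13
print(f"Implied step 1 reserve: ${implied_step1_reserve}M; "
      f"ICM estimate growth already consuming ${icm_growth}M of it")
print(f"Step 2 initial estimate: ${step2_initial_estimate}M; "
      "step 3 unestimated, but in the billions over a 10-year operational life")
```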
NASA expects to have more refined cost estimates for the contingency plan later this year. Some of NASA’s actions to reinforce its financial reserves and keep the program within its funding limitations have involved redefining the portion of the program subject to the limitations. Such actions make the value of the current limitations as a funding control mechanism questionable. Therefore, we recommend that the NASA Administrator, with the concurrence of the Office of Management and Budget, direct the space station program to discontinue the use of the current funding limitations. More complete estimates of the cost and schedule impacts of ongoing and planned changes to the program will be available later this year. This information will help provide a more complete and current picture of the cost and schedule status of the program and clarify some of the major future cost risks it faces. After this information is available, the Congress may wish to consider reviewing the program. This review could focus on reaching agreement with the executive branch on the future scope and cost level for a station program that merits continued U.S. government support. In view of the expected availability of revised cost estimates, the first opportunity for such a review would be in conjunction with NASA’s fiscal year 1999 budget request. At the end of the review, if the Congress decides to continue the space station program, it may wish to consider, after consultation with NASA, reestablishing funding limitations that include firm criteria for measuring compliance. In commenting on a draft of this report, NASA said that the report was a good representation of the program’s performance and remaining major challenges, but NASA was concerned that the report did not provide sufficient detail for the reader to appreciate the progress the space station program has made or understand the factors that have influenced the decisions already made and those that will be made in the future. NASA agreed with our recommendation. NASA said that it had consistently taken the position that the flat funding cap, while a fiscal necessity, was inconsistent with a normal funding curve for a developmental program. NASA added that the flat funding profile resulted in the deferral of substantial reserves to later years, instead of those reserves being available in the program’s middle years. NASA said that the station’s financial reserves were not intended to cover the unanticipated costs of the Russian contingency activities, but rather were largely intended to protect against U.S. development uncertainty. In response to NASA’s comments, we added more information to the report, including information on the status of the program and the origin of the funding caps. However, the question of what the station’s financial reserves were largely intended to cover is not relevant to our assessment, which focused on whether the funding cap was an effective cost control mechanism. Moreover, the central theme of our report is that funding requirements have been rising and additional funds may be needed. We do not suggest what the source of those funds should be. To obtain information for this report, we interviewed officials in the ISS and space shuttle program offices at the Johnson Space Center, Houston, Texas, and NASA Headquarters, Washington, D.C. We also interviewed contractor and Defense Contract Management Command (DCMC) personnel in Huntsville, Alabama, and Houston.
We reviewed pertinent documents, including the prime contract between NASA and Boeing, contractor performance measurement system reports, DCMC surveillance reports, program reviews, international partner agreements, independent assessment reports, and reports by NASA’s Office of Safety and Mission Assurance. We performed our work from January to July 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the NASA Administrator; the Director, Office of Management and Budget; and appropriate congressional committees. We will also make copies available to other interested parties on request. Please contact me at (202) 512-4841 if you or your staff have any questions concerning this report. Major contributors to this report are Thomas Schulz, Frank Degnan, John Gilchrist, and Fred Felder. The following are GAO’s comments on the National Aeronautics and Space Administration’s (NASA) letter dated September 8, 1997. 1. We have modified the report based on NASA’s comments. 2. The purpose and use of financial reserves are not the relevant issue. Our focus was on whether or not funding caps could be effective cost control mechanisms under circumstances where program content subject to the controls can be flexibly defined. In the past, NASA claimed the benefits of Russian participation on the program’s cost and schedule, but now that Russian participation is having negative cost and schedule effects, NASA argues that the additional funding needed should be accounted for outside the portion of the program subject to the funding limitation. Doing so dilutes the cost control ability of a funding limitation. 3. NASA’s claimed cost savings from including Russia as a partner were based mainly on a 15-month acceleration of the station’s assembly completion milestone. Our purpose was to point out that the delay in the assembly completion date means that NASA will incur additional costs during the station’s developmental period. Only the amount remains to be determined. In this report, we do not evaluate any of the claimed benefits, including cost reductions, of Russian participation in the program as a partner. 4. NASA correctly points out that the negative schedule variance under the prime contract is growing at a much slower rate than the negative cost variance, as shown by the slope of the lines in figure 1. 5. Figure 1 in the report accurately reflects cost and schedule variance changes and directly supports our point that, if the deteriorating trend was not reversed or at least slowed, the final actual cost growth could exceed expected cost growth. After we completed our fieldwork on this assignment, the prime contractor reported that its estimate of the cost overrun at completion had more than doubled, from $278 million to $600 million. 6. NASA correctly notes that the centrifuge was not included in the development program when it was initially capped at $17.4 billion. However, NASA subsequently budgeted the centrifuge within the program and scheduled it for launch before the June 2002 assembly completion milestone. The centrifuge was later removed from the budget and NASA began negotiations with the Japanese to provide it. At that time, it was rescheduled for launch after the June 2002 assembly completion date. The centrifuge example helps to illustrate the leeway NASA has to change the content of the station program within the current cap.
Such leeway undermines the cap’s value as a cost control mechanism. 7. We were asked to identify those methods NASA had used to stay within its funding limitations, not to evaluate NASA’s use of “no-exchange-of-funds” or “negotiated offset” arrangements. | Pursuant to a congressional request, GAO provided information on the International Space Station (ISS), which is being developed by the United States and others, focusing on: (1) Russia's performance problems and the National Aeronautics and Space Administration's (NASA) reaction to them, including the additional cost and cost risk assumed by NASA; (2) cost and schedule experience under the prime contract; and (3) the status of and outlook for the program's financial reserves. GAO also identified actions taken by NASA to keep the space station program's funding within certain limits through the completion of the station's assembly. 
GAO noted that: (1) in May 1997, NASA revised the space station assembly sequence and schedule to accommodate delays in the production and delivery of the Service Module; (2) this revision occurred after more than a year of speculation regarding Russia's ability to fund its space station manufacturing commitments; (3) to help mitigate the adverse effects of the Russians' performance problems and address the possibility that such problems would continue, NASA developed and began implementing step 1 of a three-step contingency plan; (4) NASA has budgeted an additional $300 million from other NASA activities for the space station program to cover the hardware cost under step 1; (5) NASA will also incur other costs under step 1 that have not yet been estimated; (6) significant additional cost growth could occur in the station program if NASA has to implement steps 2 and 3 of its contingency plan; (7) the cost and schedule performance of the station's prime contractor has continued to steadily worsen; (8) from April 1996 to July 1997, the contract's cost overrun quadrupled to $355 million, and the estimated cost to get the contract back on schedule increased by more than 50 percent to $135 million; (9) so far, NASA and prime contractor efforts have not stopped or significantly reversed the continuing deterioration; (10) the station program's financial reserves have also significantly deteriorated, principally because of program uncertainties and cost overruns; (11) the near-term reserve posture is in particular jeopardy, and the program may require additional funding over and above the remaining reserves before the completion of station assembly; (12) to date, NASA has taken a series of actions to keep the program from exceeding its funding limitations and financial reserves; (13) NASA is accounting for these actions in ways that enable it to report its continuing compliance with the funding limitations; (14) however, to show continuing compliance in some cases, NASA has had to redefine the portion of the program subject to the funding limitations; (15) thus, the value of the current limitations as a funding control mechanism is questionable; (16) since GAO's June 1997 testimony, further cost and schedule problems have materialized and NASA has acknowledged that the potential for cost growth in the program has increased; and (17) GAO believes the program has reached the point where the Congress may wish to review the entire program. |
Both federally and state-chartered credit unions are exempt from federal income taxes. However, their exempt status arises from different provisions of federal law. Federal credit unions are specifically exempt from federal and state income taxes under a provision of the Federal Credit Union Act. State-chartered credit unions are exempt under a provision of the Internal Revenue Code that describes as exempt, “Credit unions without capital stock organized and operated for mutual purposes and without profit.” The code also imposes the unrelated business income tax (UBIT) on state-chartered credit unions, but not on their federally chartered counterparts. The tax-exempt status of credit unions originally was predicated on the similarity of credit unions and mutual financial institutions. Section 11(a)(4) of the Revenue Act of 1916, the statutory forerunner of section 501(c)(14)(A), exempted from federal income tax “cooperative banks without capital stock organized and operated for mutual purposes and without profit.” The exemption of credit unions stems from an opinion of the Attorney General, 31 O.A.G. 176 (1917), holding that credit unions organized under the laws of Massachusetts were so similar to cooperative banks as to come within the scope of section 11(a)(4). IRS regulations subsequently applied this ruling to credit unions generally. While other institutions lost their exemption in the Revenue Act of 1951, Congress specifically retained the exemption for credit unions by removing cooperative banks, savings and loan societies, and building and loan associations from exemption and inserting credit unions in their place. The Senate Finance Committee report accompanying the Revenue Act of 1951 stated that the exemption of mutual savings banks was repealed to establish parity with other banking institutions because the savings banks had become functionally similar to those other institutions. According to the Senate report, tax-exempt status gave mutual savings banks the advantage of being able to finance growth out of untaxed retained earnings, while competing corporations (commercial banks) paid tax on income retained by the corporation. The report stated that the exemptions for savings and loan associations had been repealed on the same ground. The report did not state why the tax-exempt status of credit unions was preserved. Credit unions are an important, but relatively small, segment of the financial industry. According to National Credit Union Administration (NCUA) and Federal Deposit Insurance Corporation data, federally and state-chartered credit unions represented 7.5 percent of all deposits and shares insured by the federal government as of December 31, 2005. Additionally, credit unions typically are much smaller than banks and thrifts in terms of total assets. For example, NCUA data indicated that approximately 88 percent of federally chartered credit unions had $100 million or less in assets, with 83 percent having assets of less than $50 million, as of September 30, 2005. According to NCUA, the average size of a federally chartered credit union was $73.2 million in total assets and the median asset size was $11 million. Since the passage of the Credit Union Membership Access Act (CUMAA) in 1998 and subsequent NCUA rule changes, NCUA has approved community charters with increasingly larger geographic fields of membership—for example, covering entire cities or multiple counties. Since 2000, community-chartered credit unions have nearly tripled their membership and nearly quadrupled their assets. 
Most of the new community charters approved between 2000 and 2005 were charter conversions by multiple-bond credit unions rather than new credit unions. According to NCUA, community charters offer credit unions greater opportunity than single- and multiple-bond credit unions to diversify their membership base, thereby contributing to the institution’s economic viability and ability to serve all segments of the community, including those of modest means. CUMAA is the most recent statute affecting field of membership requirements of federally chartered credit unions. In 1998, the Supreme Court determined that NCUA had erroneously interpreted the Federal Credit Union Act to permit federally chartered credit unions to have multiple common bonds. In response, Congress passed a provision in CUMAA to specifically permit multiple-bond credit unions subject to a general limitation on the number of members sharing a particular common bond. Also in CUMAA, Congress amended the provision of the act permitting the federal community charter by changing the description of its field of membership from “groups within a well-defined neighborhood, community, or rural district” to “persons or organizations within a well-defined local community, neighborhood, or rural district.” Subsequent to the passage of CUMAA, NCUA revised its regulations to approve community charters consisting of larger geographic areas of coverage and potential members. For example, NCUA recently approved one credit union for a community charter covering the entirety of Los Angeles County. Thus, an estimated 9.6 million persons who live, worship, and go to school or work in the county, as well as businesses and other legal entities within county boundaries, qualify for membership in this credit union. We reported in 2003 that previous NCUA regulations required credit unions to document that residents of a proposed community area interacted or had common interests. Credit unions seeking to serve a single political jurisdiction (e.g., a city or a county) with more than 300,000 residents were required to submit more extensive paperwork. However, NCUA revised its regulations in 2003, defining a local community as any city, county, or political equivalent in a single political jurisdiction, regardless of population size, and eliminated the documentation requirements. As shown in table 1, the number of community-chartered federal credit unions doubled from 2000 through 2005, while the number of multiple-bond credit unions declined by about 22 percent. In spite of the recent decline, multiple-bond credit unions remain the largest group of federally chartered credit unions in number and in total membership and assets. However, community-chartered credit unions overtook multiple-bond credit unions as the largest of the three federal charter types in terms of average membership and average size in terms of assets beginning in 2003. To a large degree, the increase in number, membership, and assets of community charter credit unions can be attributed to charter conversions rather than to new credit union charter approvals. Between 2000 and 2005, NCUA approved 616 applications for federal community charters. Of these 616 approved federal community charters, 600 were conversions from single- or multiple-bond credit unions while only 16 were for new credit union charters. As shown in table 2, the vast majority of the conversions to community charters—549 or about 92 percent—involved multiple-bond credit unions. 
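The shares in this paragraph follow from simple arithmetic on the approval counts, as the short check below shows.

```python
# Community charter approvals, 2000-2005 (figures from the report).
approvals = 616
conversions = 600
new_charters = approvals - conversions          # 16
multiple_bond_conversions = 549

print(f"New charters: {new_charters}")
print(f"Share of conversions from multiple-bond credit unions: "
      f"{multiple_bond_conversions / conversions:.1%}")   # 91.5%, the report's "about 92 percent"
```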
NCUA officials indicated that changes in chartering policy have been triggered by factors such as the continued viability of federal credit unions in a changing economic environment and financial industry developments. NCUA believes that community charter expansion allows federal credit unions to attract a more diverse membership base. According to the officials, this in turn can enhance a credit union’s economic viability, safety and soundness, as well as provide greater opportunities to serve members of modest means. For example, officials explained that single- and multiple-bond credit unions often tend to be organized around employer or occupationally based associations, which in turn creates greater economic risk exposure since the membership base is intertwined with the economic cycles of a particular employer or occupation. Additionally, NCUA officials noted that employer or occupational bonds result in a greater concentration of members with middle rather than lower incomes. Since community charters are organized around geographically based associations, credit unions would be able to provide individuals from a broad range of occupations and income levels in these communities with access to their products and services. However, community-based credit unions would be vulnerable to regional downturns in the economy. NCUA has established and increased participation in two programs and policies that are specifically designed to make credit union services more available to low- and moderate-income individuals. NCUA’s Low Income Credit Union (LICU) program is designed to assist credit unions that can demonstrate that a majority of their members have household incomes below 80 percent of the national median household income or make less than 80 percent of the average for all wage earners. NCUA also has made it easier for federal credit unions, regardless of location, to expand their fields of membership into underserved areas (areas experiencing economic difficulty). Although federal credit unions increasingly have participated in these efforts in recent years, lack of data on the income levels of credit union members has made it difficult to determine how effective these programs have been in providing services to individuals of modest means. But the limited existing data on income levels of credit union customers suggest that credit unions continue to lag behind banks in the proportion of customers with low and moderate incomes. NCUA has undertaken a pilot effort to capture information on the income characteristics of credit union members, but the data will not allow NCUA to reach statistically valid conclusions by charter type. As we reported in 2003, it has been generally accepted that credit unions have a historical emphasis on serving people with “small” or “modest” means. Congressional findings contained in CUMAA linked the tax-exempt status of credit unions, in part, to their “specified mission of meeting the credit and savings needs of consumers, especially persons of modest means.” NCUA incorporated this emphasis into its current strategic plan, which gives its mission as “facilitating the availability of credit union services to all eligible consumers, especially those of modest means through a regulatory environment that fosters a safe and sound credit union system.” According to NCUA officials, the changes in chartering requirements should allow credit unions to serve a more diverse membership, including those of modest means. 
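The low-income designation test described above can be sketched as a small check. The function, its thresholds, and the example figures below are illustrative assumptions; NCUA's actual rule contains further definitions and alternatives.

```python
def qualifies_for_licu_designation(members, national_median_hh_income,
                                   average_wage_all_earners):
    """Illustrative check: a majority of members must fall below either
    80-percent threshold described in the report.

    `members` is a list of (household_income, annual_earnings) tuples.
    """
    below_threshold = sum(
        1 for household_income, earnings in members
        if household_income < 0.80 * national_median_hh_income
        or earnings < 0.80 * average_wage_all_earners
    )
    return below_threshold > len(members) / 2

# Example with three hypothetical members (two of three qualify):
members = [(30_000, 28_000), (33_000, 31_000), (90_000, 85_000)]
print(qualifies_for_licu_designation(members, 44_000, 37_000))   # True
```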
In addition to approving more community charters, NCUA has encouraged credit union activity in other areas in an attempt to make credit union services more available to low-income individuals and underserved areas. According to NCUA, its LICU program is designed to assist credit unions serving predominantly low-income members in obtaining technical and financial services. Credit unions that receive a low-income designation receive certain opportunities, such as the following: greater authority to accept deposits from nonmembers such as voluntary health and welfare organizations; access to low-interest loans, deposits, and technical assistance through participation in NCUA’s Community Development Revolving Loan Fund; ability to offer uninsured secondary capital accounts and include these accounts in the credit union’s net worth for the purposes of meeting its regulatory capital requirements; and a waiver of the aggregate loan limit for member business loans. From 2000 through 2005, the number of LICUs grew from 632 to 1,032, an increase of more than 63 percent (see fig. 1). Credit union expansion into underserved areas also has increased in recent years. From 1994 through 1998, NCUA rules permitted federal credit unions, regardless of charter type, to include residents in low-income communities and associations in their fields of membership. In 1998, CUMAA expressly recognized that multiple-bond credit unions would be authorized to serve persons or organizations within an area that was underserved. The Federal Credit Union Act defines an underserved area as a local community, neighborhood, or rural district that is an “investment area” as defined by the Community Development Banking and Financial Institutions Act of 1994—that is, experiencing poverty, low income, or unemployment. NCUA’s Chartering and Field of Membership Manual (Interpretive Ruling and Policy Statement 03-1 or IRPS 03-1) allowed credit unions to include underserved areas in their fields of membership, without regard to location or changes to their charter type. For example, NCUA recently approved a credit union in the state of Maryland to serve residents within an area of Washington, D.C., determined to be “underserved.” Between 2000 and 2005, the number of credit unions receiving NCUA approval to adopt underserved areas grew from 40 to 641. As shown in table 3, the largest proportion of the 641 credit unions approved through year-end 2005 were multiple-bond credit unions (410 or 64 percent), followed by community-chartered credit unions (196 or 31 percent). However, recent changes in NCUA policies may limit the growth of the underserved areas program. In connection with a lawsuit instituted in November 2005, NCUA stopped permitting single-bond and community federal credit unions to include underserved areas in their fields of membership. This had the effect of allowing access only for multiple-bond credit unions, which is permitted specifically in a provision of the Federal Credit Union Act. In the lawsuit, the American Bankers Association (ABA) challenged NCUA’s approval of community-chartered credit unions adding underserved areas to their field of membership. ABA argued that NCUA misinterpreted the Federal Credit Union Act by allowing a community federal credit union to expand into several communities in Utah. ABA contended that the Federal Credit Union Act allows multiple-bond credit unions, but not community-chartered credit unions, to add underserved areas to their fields of membership. 
In response, NCUA subsequently amended its chartering regulations to limit the adoption of underserved areas to multiple-bond credit unions. NCUA’s final rule, incorporating these amendments, took effect on July 28, 2006. On July 20, 2006, ABA announced that it had agreed to dismiss its lawsuit. Despite the expansion into underserved areas and the LICU program, NCUA cannot specifically quantify the extent to which these programs have increased use of credit union services by individuals of modest means. As we reported in 2003 and will discuss in the following sections, limited data are available that specifically measure the income levels of credit union members and the services used by individuals of modest means. As a result, although NCUA data indicate increased adoption of underserved areas and increased participation in the LICU program, data do not exist to specifically show the extent to which these programs have increased services provided to individuals of modest means. Despite the shift toward community charters and the increase in the number of credit unions participating in NCUA’s low-income and underserved programs, our analysis of data from the Federal Reserve’s 2004 Survey of Consumer Finances (SCF) indicated that credit unions had a lower proportion of customers with low and moderate incomes than did banks. These results were similar to the results of our analysis of the Federal Reserve’s 2001 SCF data, which we discussed in our 2003 report. We combined the 2004 SCF data into two main groups—households that only or primarily used credit unions (credit union customers) and households that only or primarily used banks (bank customers). We then computed the proportions of credit union customers and bank customers in each of these four income categories—low, moderate, middle, and upper. We based our income categories on criteria that financial regulators use to assess compliance with the Community Reinvestment Act, which is intended to encourage depository institutions to help meet the credit needs of the communities that they serve. Specifically, (1) a low-income household had less than 50 percent of the national median household income; (2) a moderate-income household had an income of at least 50 percent, but less than 80 percent, of the national median household income; (3) a middle-income household had an income of at least 80 percent, but less than 120 percent, of the national median household income; and (4) an upper-income household had an income of at least 120 percent of the national median household income. We estimated that 14 percent of credit union customers were low-income and 17 percent were moderate-income, compared with 24 percent and 16 percent for banks. We found the difference between the proportion of low-income customers at banks and credit unions to be statistically significant (that is, the evidence suggested that the difference between the two was not simply the result of chance). Moreover, we estimated that 20 percent of credit union customers were middle-income and 49 percent were upper-income, compared with 18 percent and 41 percent for banks. We found the difference between the proportion of upper-income customers at banks and credit unions to be statistically significant as well. 
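These CRA-based thresholds translate directly into a small classifier. A minimal sketch follows; the $44,000 median used in the example is only a rough stand-in for the 2004 national figure.

```python
def income_category(household_income, national_median):
    """Classify a household by its ratio to the national median household income."""
    ratio = household_income / national_median
    if ratio < 0.50:
        return "low"
    if ratio < 0.80:
        return "moderate"
    if ratio < 1.20:
        return "middle"
    return "upper"

national_median = 44_000   # illustrative approximation of the 2004 U.S. median
for income in (20_000, 30_000, 50_000, 80_000):
    print(f"${income:,}: {income_category(income, national_median)}")
# $20,000: low; $30,000: moderate; $50,000: middle; $80,000: upper
```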
In an effort to assess the extent to which credit unions served people of “modest means,” we combined households with low or moderate incomes into one group (as a proxy for modest means) and combined households with middle or upper incomes into another group. We found that 31 percent of credit union customers were of “modest means,” compared with 41 percent of bank customers, suggesting that banks served a higher proportion of people of “modest means.” The difference between banks and credit unions was statistically significant. As shown in figure 2, the proportion of credit union customers that were in the upper-income category grew from 2001 to 2004. This increase, from 43 percent to 49 percent, was statistically significant. Thus, the statistically significant difference between banks and credit unions in serving people of “modest means” that we documented in our 2003 report using 2001 data appears to have persisted in the 2004 data. Moreover, we found the decline from 2001 to 2004 in the proportion of credit union customers in the “modest means” category to be statistically significant. Additionally, the relatively high percentage of households in the moderate- and middle-income categories that used credit unions (37 percent) in the 2004 SCF may be reflective of credit union membership traditionally being based on occupational- or employer-based fields of membership. However, NCUA officials told us that since growth in the agency’s programs to expand services to lower-income persons and underserved areas is relatively recent, it was probably too soon to expect any changes in the SCF data, with respect to customer income. Further, NCUA felt that it would take time for any results to appear in the data, as credit unions seeking to expand into new areas and reaching new types of customers would face a learning curve in their efforts. Additionally, NCUA officials stated that since most of the conversions to the community charter occurred within the last 5 years, within a reasonable period they expected to see a change in the customers these credit unions were serving. It should also be kept in mind that the latest available data from SCF are 2 years old, so any more recent changes would not be reflected in our analysis. As we noted in our 2003 report, limitations in SCF data preclude its use in drawing definitive conclusions about the income characteristics of credit union members. Additional information—especially about the income levels of credit union members receiving consumer loans and other credit union services—would be required to assess more completely whom credit unions serve. As further noted in our 2003 report, NCUA has noted that credit union members were likely to have higher incomes than nonmembers because credit unions are occupationally based. As NCUA and others have noted—because of the statutory limitations on who can join federal credit unions—credit union membership is largely based on employment, and credit unions are restricted to the income composition of the individuals within fields of membership containing employed individuals. However, as we noted earlier, SCF provides the best data currently available regarding the income characteristics of credit union members. To determine how sensitive our results were to our income categorization, we used median family income in addition to median household income to analyze the 2001 and 2004 SCF data. We found similar results using both median family and household income. 
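The "statistically significant" findings above rest on tests of differences between proportions. GAO's actual computations account for the SCF's complex survey design; the plain two-proportion z-test below, with invented sample sizes, only illustrates the basic calculation.

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Share of "modest means" customers: 31% at credit unions vs. 41% at banks.
# The sample sizes here are hypothetical, not the SCF's.
z = two_proportion_z(0.31, 800, 0.41, 2500)
print(f"z = {z:.2f}")   # about -5.1; |z| > 1.96 suggests the gap is unlikely to be chance
```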
Recognizing the limitations of the SCF and other available data, our 2003 report suggested that Congress consider requiring NCUA to obtain data on the extent to which credit unions provided loans and other services to low- and moderate-income households within each federally insured credit union’s field of membership. In response to your Committee’s concerns regarding the lack of available information to evaluate credit union member income and services, NCUA undertook a data collection effort to profile federal credit union member income information, identify the credit union services offered to credit union members, and provide information on the compensation of credit union executives. (We discuss executive compensation in more detail later in this report.) As of August 31, 2006, NCUA had completed its data collection phase, as agreed with the Office of Management and Budget under the Paperwork Reduction Act, which is intended to minimize the paperwork burden for nonprofit institutions. NCUA took a random sample of 481 federal credit unions and relied on two different data collection methods to determine member incomes. NCUA officials told us they intended to compare the results of the two methods to determine the extent of any income differences and identify which of the two approaches might be relied upon in the future. One method involved obtaining information such as a credit union member’s zip code from NCUA’s Automated Integrated Regulatory Examination System to make projections of median household income. The other method involved using the street address and zip codes of credit union members and applying a software package that uses geo-coding to determine median family income averages. The officials told us that the software package is widely used in the banking industry to help make income determinations for fair lending examinations. NCUA also gathered information from the credit unions on the type of services the institutions offer to their members, including services that may be of value to members with lower incomes or little financial experience. Using the same sample of credit unions, NCUA collected information on whether or not certain services are provided by the credit union. For example, NCUA gathered information on the extent to which the sampled credit unions offer low-balance checking accounts and whether they offer some type of financial literacy training. NCUA officials told us that the data collection effort has limitations. First, although the information collected represents a statistically valid random sample of the federal credit union population and will provide information on the income levels of overall federal credit union members, the data will not enable NCUA to make statistically valid conclusions by charter type or make conclusions about the extent of credit union services being provided to various income levels. The officials explained that to do so would require a larger and more time-consuming data collection effort, requiring an increase in sample from the current effort of 481 to a sample of almost 1,200 credit unions. According to the officials, a larger sample would not allow them to meet their goal to report results by year-end 2006. NCUA indicated that despite these limitations, the data collected will add to the agency’s knowledge and should be valuable in deciding what actions, if any, might be appropriate over the longer term. 
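The two income-estimation methods described above could be compared with a harness along the following lines. The function names, the geocoder, and the lookup tables are placeholders, not NCUA's actual examination system or the commercial geo-coding package.

```python
def zip_based_estimate(member_zip, median_hh_income_by_zip):
    """Method 1: project median household income from the member's zip code."""
    return median_hh_income_by_zip.get(member_zip)

def geocoded_estimate(address, zip_code, geocode, median_family_income_by_tract):
    """Method 2: geocode the street address to a census area, then look up
    the median family income for that area."""
    tract = geocode(address, zip_code)
    return median_family_income_by_tract.get(tract)

def mean_absolute_difference(estimates_a, estimates_b):
    """One way to quantify how far the two methods diverge per member."""
    pairs = [(a, b) for a, b in zip(estimates_a, estimates_b)
             if a is not None and b is not None]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

# Toy comparison over three members:
method1 = [41_000, 58_000, 35_000]
method2 = [44_500, 55_000, 36_000]
print(f"Mean absolute difference: ${mean_absolute_difference(method1, method2):,.0f}")
```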
At the time of our discussions, NCUA had not developed benchmarks to use as a measure for a “modest means” category related to member income data. NCUA indicated that its data collection effort will help the agency better understand the concept of “modest means” in relation to geographically dispersed, limited, and diverse fields of membership. NCUA’s data collection effort covers 61 percent of all credit unions because the regulator has oversight authority for federally chartered credit unions, while state governments have responsibility for overseeing the remaining 39 percent of the credit unions (state-chartered credit unions). Finally, while NCUA’s data collection effort will be useful for establishing a baseline, NCUA officials stated that there are no plans to gather the information on a continual or routine basis. In response to a March 2006 congressional request, officials of the National Association of State Credit Union Supervisors (NASCUS) told us that they and state regulatory agencies have initiated a data collection effort for state-chartered credit unions. NASCUS will collect some information similar to that collected in the NCUA pilot, such as membership income and executive compensation. However, NASCUS also will collect data in two additional areas: credit union service organizations (CUSOs) and UBIT. NASCUS is using a methodology similar to NCUA’s to determine member income levels. According to NASCUS, it has developed a representative sample by applying different weights to unique state credit union characteristics, including size, field of membership, and charter type. Credit unions selected in the representative sample will respond to a questionnaire developed by state regulatory agencies. The questionnaire addresses membership, CUSOs, UBIT, and executive compensation. As of September 2006, the officials indicated that the data collection effort had started, and that they expected the results to be available in the first quarter of 2007. Our analysis showed that credit unions tended to offer better interest rates than similarly sized banks for a variety of products and loans, but rate data alone cannot be used to determine the extent to which the benefits of tax exemption have been passed to members. We obtained and analyzed rate data for various savings products offered by credit unions and banks from 2000 through 2005 and found that credit unions on average offered higher rates than comparably sized banks. Similarly, the rate data that we obtained for various loan products indicated that on average credit unions tended to offer lower interest rates than comparably sized banks, particularly for consumer loans such as automobile loans. However, it is important to note that interest rates during the period covered by our analysis were at historic lows. As seen in figure 3, rates offered by credit unions from 2000 through 2005 on regular savings accounts on average were higher than those offered by similarly sized banks. However, the differences among the rates of comparably sized credit unions and banks tended to get larger as the size of the institutions increased. For example, for institutions with assets of less than $100 million, the difference between credit unions and banks averaged about 0.15 of a percentage point in this period, while the difference for institutions with assets greater than $1 billion averaged almost 0.7 of a percentage point. 
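A sketch of the kind of bucketed comparison behind these figures follows. The record layout and the four sample institutions are invented for illustration; the buckets mirror the asset ranges quoted above.

```python
from statistics import mean

def asset_bucket(total_assets):
    """Place an institution into one of the asset-size buckets used above."""
    if total_assets < 100e6:
        return "<$100M"
    if total_assets <= 1e9:
        return "$100M-$1B"
    return ">$1B"

def average_rate_gap(institutions, product):
    """Mean credit union rate minus mean bank rate, by asset-size bucket."""
    rates = {}
    for inst in institutions:
        key = (asset_bucket(inst["assets"]), inst["type"])
        rates.setdefault(key, []).append(inst["rates"][product])
    gaps = {}
    for bucket in ("<$100M", "$100M-$1B", ">$1B"):
        cu, bank = rates.get((bucket, "cu")), rates.get((bucket, "bank"))
        if cu and bank:
            gaps[bucket] = round(mean(cu) - mean(bank), 2)
    return gaps

sample = [
    {"type": "cu",   "assets": 50e6, "rates": {"savings": 1.10}},
    {"type": "bank", "assets": 60e6, "rates": {"savings": 0.95}},
    {"type": "cu",   "assets": 2e9,  "rates": {"savings": 1.40}},
    {"type": "bank", "assets": 3e9,  "rates": {"savings": 0.70}},
]
print(average_rate_gap(sample, "savings"))
# {'<$100M': 0.15, '>$1B': 0.7} -- gaps of the size the report quotes
```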
More recently, the gap in rates between larger credit unions and banks closed considerably; in the greater than $1 billion asset range, the gap was more than 1 percentage point in 2000, but about one-half of a percentage point in 2005. We observed similar trends throughout the period for other savings products such as money market accounts and certificates of deposit (see app. III). The difference between credit unions and banks was more pronounced for consumer loans. For example, over the 6-year period, the rates that credit unions charged for 60-month new car loans tended to be lower than the rates charged by similarly sized banks, by 1 or 2 percentage points. As shown in figure 4, the trend was consistent for the larger asset category as well. However, unlike savings products, the rate differences between credit unions and banks for car loans widened in 2005. These trends held true for rates for other consumer products such as credit cards. Although credit unions charged lower interest rates for consumer loans, similarly sized credit unions and banks charged virtually identical rates on larger loans such as mortgages, from 2000 through 2005 (see fig. 5). In some limited instances, banks offered lower rates than similarly sized credit unions. Also, larger institutions in general offered lower rates than smaller institutions. Our analysis of deposit and loan rate data for credit unions and banks does not fully identify how the tax exemption of credit unions might benefit credit union members. First, there may be other reasons for differences in rates besides tax differences. For example, loan rates may differ because of differences in borrower characteristics, such as creditworthiness, or because of geographic market differences. In addition, tax exemption may benefit members in ways other than through loan and deposit rates. Credit unions might also charge lower fees than they otherwise would for services and products provided to members. We did not identify any comprehensive studies or data sources that addressed differences in fees charged by credit unions and banks on a national basis. Unlike banks, credit unions can finance additional services and add to desired or required reserves through untaxed retained income. As a result of the tax exemption, credit unions may retain more income to add to reserves or to finance additional services than they would if they were taxed. As stated earlier, state-chartered credit unions are subject to tax on unrelated business income while federal credit unions specifically are exempt. IRS has several ongoing examinations of state-chartered credit unions to determine which of their activities are subject to UBIT. Credit union trade groups have stated the need for guidance regarding the activities that IRS has determined to be subject to UBIT; IRS has stated that it plans to issue technical advice in 2007 after completing its reviews. Furthermore, the practice of allowing group statewide filings has made it more difficult for the IRS to scrutinize the activities of individual institutions to ensure compliance with the UBIT statute. IRS officials asserted that the agency plans to issue technical advice on the application of UBIT to state credit union activities, which they stated should improve credit union compliance with the statute. 
UBIT is a tax on income derived by a tax-exempt entity from a trade or business that is regularly carried on and not substantially related to the exercise or performance of the purpose or function constituting the basis for the entity’s exemption. Under the Internal Revenue Code, state-chartered credit unions are subject to UBIT, but federal credit unions are not subject to the tax because they are exempt federal instrumentalities under a provision of the code. As shown in table 4, the amount of income subject to UBIT reported by state-chartered credit unions and the related taxes paid nearly doubled from 2000 through 2004 and totaled more than $5 million over this period. As credit unions have increased the types of products and services that they offer, certain product offerings by state-chartered credit unions have resulted in IRS examining whether state-chartered credit unions are using their tax-exempt status to conduct business unrelated to their exempt purposes. In November 2005, an IRS commissioner informed the Congress of an IRS review of certain activities of state-chartered credit unions for purposes of UBIT. The IRS has been reviewing activities that included the following: the sale of optional credit life insurance and credit disability insurance to members, which would pay off loan balances with the organization if the borrower died or became disabled; the sale of “guaranteed auto protection” insurance, which pays the automobile loan balance in the event of loss or destruction of a vehicle to the extent it exceeds the value of the vehicle; the sale of automobile warranties; the sale of cancer insurance; the sale of accidental death and dismemberment insurance; ATM fees charged to nonmembers; the sale of health or dental insurance; the marketing of mutual funds to members; and the marketing of other insurance and financial products. According to IRS officials, the agency had 50 ongoing examinations of state-chartered credit unions for UBIT purposes as of September 2006. Determining the applicability of UBIT to state credit union activities is a complicated proposition because it depends on the relationship of the activities to credit unions’ tax-exempt purposes or functions. However, as IRS stated in a Technical Advice Memorandum, neither the Internal Revenue Code nor IRS regulations enumerate the functions of a credit union exempt under section 501(c)(14) of the code. The tax exemption is based on what can be described as structural features, specifically the institution’s mutuality and nonprofit operations, whether it is organized and operated in accordance with state law, and whether its members share a common bond. Groups representing state-chartered credit unions and the Credit Union National Association have stated that IRS has not provided sufficient guidance on which credit union activities are or are not subject to UBIT. According to IRS officials, IRS is planning to issue specific information in the form of Technical Advice Memoranda as a result of its examination of credit union UBIT activities in the first quarter of 2007. IRS believes that the Technical Advice Memoranda will more clearly articulate specific activities of state-chartered credit unions that can be subject to the tax and improve compliance with the statute. 
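The statutory test has a three-part structure that can be sketched as follows. Each prong is in practice a legal judgment rather than a boolean field, and the guaranteed-auto-protection example merely flags the open question IRS is examining, not a conclusion.

```python
def subject_to_ubit(activity, federally_chartered):
    """Illustrative encoding of the three-prong UBIT test described above."""
    if federally_chartered:
        return False   # federal credit unions are exempt federal instrumentalities
    return (activity["is_trade_or_business"]
            and activity["regularly_carried_on"]
            and not activity["substantially_related_to_exempt_purpose"])

gap_insurance = {
    "is_trade_or_business": True,
    "regularly_carried_on": True,
    "substantially_related_to_exempt_purpose": False,  # the question IRS is examining
}
print(subject_to_ubit(gap_insurance, federally_chartered=False))   # True under these inputs
```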
Not-for-profit entities that generate more than $1,000 in unrelated business income are required to disclose on Form 990 that they have generated such income and whether they have filed an Exempt Organization Business Income Tax Return (Form 990-T) with IRS declaring the income and related UBIT liability. State-chartered credit unions are required to file Form 990, and Form 990-T if they generate unrelated business income in excess of $1,000. However, according to IRS officials, an IRS ruling allows the state regulatory authority in some states to file forms 990 on a groupwide basis; that is, one form can be filed on behalf of all state-chartered credit unions in a particular state. Thus, a state-chartered credit union located in one of those states could generate taxable unrelated business income without having to file a Form 990 individually. According to IRS officials, the group ruling applies to 34 states, and group returns in about 21 of those states have been filed in recent years. Of the 21 states, only 1 has asserted on the Form 990 that UBIT was generated in that state. IRS stated it would be able to positively verify if an individual credit union declaring unrelated business income in excess of $1,000 on its Form 990 also filed a Form 990-T. However, the agency currently does not have a process in place to review group returns to ensure that the credit unions filed the Form 990-T. As a result, IRS cannot systematically determine if credit unions that were included in group returns and generated unrelated business income properly filed a Form 990-T declaring such income and paid UBIT. According to IRS officials, the group exemption process was instituted to relieve IRS of the burden of individually processing a large number of applications from organizations that share a common affiliation and are operated for the same general purpose. For example, IRS noted that some organizations, including churches and veterans organizations, have a great number of subordinate organizations that are similarly situated. IRS officials agreed that requiring all state-chartered credit unions to file an individual Form 990 could enhance the agency’s ability to scrutinize the activities of individual credit unions to determine whether they were subject to UBIT. However, officials also noted that it was not clear if the benefits of eliminating the group filing exemption would exceed the costs—both to IRS as well as to the individual credit unions. Specifically, officials noted that credit unions that are currently included in group returns would each need to file for recognition as a tax-exempt organization and incur annual costs to prepare and file an individual Form 990. Moreover, IRS officials noted that they expect that the Technical Advice Memoranda that the agency is planning to issue in early 2007 would improve credit union compliance with UBIT filing requirements. Federal credit union executive compensation is not transparent. Federal credit unions are not required to file reports that would provide information on executive and director compensation, such as the IRS Form 990 required of most other tax-exempt organizations. 
Federal credit union executive compensation is not transparent. Federal credit unions are not required to file reports, including the IRS Form 990 required for most other tax-exempt organizations, that would provide information on executive and director compensation. NCUA legal opinions have stated that member access to credit union records is generally a matter of state law but that federal credit union members "have inspection rights similar to those enjoyed by a shareholder in a corporation" and that "the general rule in most jurisdictions is that a shareholder is entitled to inspect corporate minutes and other records as long as he has a proper, nonvexatious purpose." However, we could not determine to what extent credit unions and credit union members were aware of this information. We identified a number of credit union and bank executive compensation surveys, but data and methodological limitations precluded us from making direct comparisons of executive compensation. NCUA has collected executive compensation information for federal credit unions as part of its efforts to assess whom credit unions serve. The issue of transparency and disclosure of executive compensation has become an important topic for both tax-exempt entities and publicly held companies. Credit union members bear some similarity to public company shareholders in that they are "owners" and vote for boards of directors that are entrusted to oversee executive compensation. The importance of disclosing executive and director compensation was illustrated by changes SEC adopted in July 2006 to increase transparency and disclosure by public companies, reflecting the increasing focus on corporate governance and director independence. According to SEC, the objective was to provide investors with a clearer and more complete picture of the compensation earned by a company's principal executive officer, principal financial officer, highest paid executive officers, and members of its board of directors. In contrast, credit union executive compensation is not transparent because credit unions are not required to file publicly available reports, such as the IRS Form 990, that disclose executive compensation data. For tax-exempt organizations, IRS has noted that some members of the public rely on Form 990 as the primary or sole source of information about a particular organization. Most tax-exempt organizations with gross receipts that are normally more than $25,000 are required to file the Form 990 annually. IRS also uses these forms to select organizations for examination. Figure 6 shows the compensation information collected on the Form 990. On August 23, 1988, IRS issued a determination that federal credit unions are not required to file a Form 990 annually because of their status as tax-exempt instrumentalities under section 501(c)(1) of the Internal Revenue Code. Also, as noted previously, some state-chartered credit unions file through a group filing process (21 states in 2004). For these states, IRS receives only the names and addresses of individual credit unions. As a result, scrutiny of the compensation of credit union executives and other key personnel is limited. Additionally, credit union boards of directors receive little or no compensation because directors serve in unpaid positions. According to the Federal Credit Union Act, no member of a federal credit union board may be compensated; however, a federal credit union may compensate one individual who serves as an officer of the board. Although the credit union may not pay its board of directors a salary, it may provide or reimburse directors for such things as mileage; insurance, including life, health, and accident insurance; and travel expenses.
In contrast, bank boards of directors may receive fees such as an annual retainer for serving on the board, profit sharing, professional fees, and other bonuses. Also, according to one bank survey, about half of the banks that responded indicated that their director fees were based strictly upon attendance. According to NCUA, executive compensation is assessed during a credit union's examination to determine its reasonableness as it relates to safety and soundness concerns. As stated in our 2003 report, NCUA recently moved from an examination and supervision approach that primarily focused on reviewing transactions to an approach that focuses resources on high-risk areas within a credit union. To complement the risk-focused approach and allow NCUA to better allocate its resources, the agency adopted a risk-based examination program in July 2001. NCUA officials explained that supervisory examinations, including reviews of credit union executive compensation, follow a risk-focused approach. The officials told us that examiners would review executive compensation in instances of safety or soundness concerns, such as compensation arrangements that significantly exceeded compensation paid to persons with similar responsibilities in credit unions of similar size and in similar locations. NCUA also stated that because it has not found a systemwide issue with executive compensation, it has not considered it necessary to collect or aggregate executive compensation data. Additionally, we found NCUA's guidance on compensation similar in content to the Federal Financial Institutions Examination Council's (FFIEC) Interagency Guidelines Establishing Standards for Safety and Soundness. At various times, credit unions and others have questioned whether members have the right to obtain and inspect credit union information, including salary data. NCUA legal opinions have stated that member access to credit union records is generally a matter of state law and that federal credit unions should look to the appropriate state corporate law. In a letter dated June 14, 1996, NCUA's Associate General Counsel said that federal credit union members "have inspection rights similar to those enjoyed by a shareholder in a corporation" and that "state law determines the types of information and documents, and the degree of access, available to shareholders/members." The letter stated that "the general rule in most jurisdictions is that a shareholder is entitled to inspect corporate minutes and other records as long as he has a proper, nonvexatious purpose." However, it is unclear to what extent federal credit union members and credit union personnel are aware of a member's right to inspect records, or how difficult or easy it would be for credit union members to obtain information such as salaries. We could not identify any surveys or studies that directly compared credit union executive compensation with compensation provided by similarly sized banks. However, we did identify a few credit union and bank trade group surveys that address executive compensation for their respective industries. Credit union and bank trade group officials told us that these surveys generally are used to help their industries gauge comparable pay by job title and an institution's asset size. Several limitations preclude us from directly comparing credit union and bank executive compensation.
While these surveys provided information on cash compensation by job title, institution type, and asset size, they did not provide detailed information on the other forms of compensation received that would allow a direct comparison of credit union and bank executive compensation. These other forms include items such as retirement plans, stock options (for bank executives), employment contracts, severance pay, and perks such as vehicle allowances. Due to the lack of consistency and availability of data beyond cash compensation in these surveys, it is difficult to make any overall comparisons between credit union and bank executive compensation. Other limitations of these surveys include, in some instances, low response rates for the three executive positions (chief executive officer, chief financial officer, and chief operating officer). Further, the data collected in these surveys were based on self-reported information from the survey participants. Appendix IV provides more detail on the survey limitations and the executive compensation results for the credit unions and banks that responded to their respective surveys. As mentioned previously, NCUA has collected credit union executive compensation data and reported compensation information on the top three executive positions—chief executive officer, chief financial officer, and chief operating officer. NCUA collected 2005 compensation information from the IRS Form 1099 and Form W-2 (wages and salary data). NCUA officials told us that the sample size will enable them to project industry averages for the federal credit union population and to stratify the results into two statistically valid subsets based on the asset size of the credit unions surveyed. However, the NCUA effort provides a snapshot of federal credit union compensation for a single year, 2005, and it is unclear whether NCUA will conduct future reviews of credit union executive compensation. NCUA also suggested alternative methods of collecting compensation information and increasing its transparency. During our review, NCUA indicated that it was considering amending the quarterly "call reports" that all federally insured credit unions are required to submit to NCUA to include compensation and benefit data for senior executive officers. Call reports are available for public inspection, and NCUA routinely reviews them. Currently, the call report collects only aggregate data on employee compensation and benefits. Additionally, NCUA officials indicated that requiring credit unions to disclose salary information to members during public meetings would be another alternative for increasing the transparency of executive and director compensation. Since the passage of CUMAA and subsequent changes to NCUA regulations that permitted credit unions to serve larger geographic areas and enlarged fields of membership, community-chartered federal credit unions have grown in number and asset size. As a result, the common bonds of occupation or employment that traditionally existed between credit union members have become attenuated, blurring one of the historical distinctions between credit unions and other depository institutions. But credit unions do retain distinctions in terms of structure and governance, and they retain their tax-exempt status. One perceived rationale for the credit union tax exemption, expressed by Congress, is the notion that credit unions serve individuals of small or modest means.
Yet, it is difficult to determine to what extent credit unions actually serve individuals of modest means. Although NCUA has established programs to expand services to this group, the relative newness of the programs, combined with the absence of long-term, continuing, and systematic collection of data on the income of credit union members, currently precludes an assessment of both the programs' effectiveness and overall industry performance. However, limited data from the Federal Reserve's Survey of Consumer Finances (SCF) suggest that in both 2001 and 2004, credit unions had a smaller proportion of low- and moderate-income customers than banks. NCUA officials have noted that it may be too soon for data to fully reflect NCUA initiatives and industry activities and that growth in the community charter will allow credit unions to draw members from larger and more diverse populations, including people of modest means. While NCUA has taken steps to identify the income levels of credit union members, several limitations in NCUA's data collection effort will make it difficult to fully assess the extent to which credit unions have been serving low- and moderate-income populations. Notably, the data will not stratify information about member incomes by specific charter types or identify the specific financial services that credit union members have been using. Obtaining more detailed information on credit union member income and the financial services members use could help NCUA track the performance of credit unions and monitor progress over time. Furthermore, this information would provide Congress and the public with clear evidence that, as CUMAA notes, credit unions were accomplishing their "specified mission" of "meeting the credit and savings needs of consumers, especially persons of modest means." However, the NCUA effort, while laudable, currently is confined to a pilot project. The value and utility of the information collected would be greatly enhanced if NCUA were to move beyond a pilot, continue the data collection effort, and address some of the limitations of the pilot. Although state-chartered credit unions have increased the amount of UBIT paid in recent years, determining which credit union activities are subject to the UBIT is difficult. IRS is currently conducting examinations of state-chartered credit unions and plans to release technical advice early in 2007 that the agency believes will more clearly explain which credit union activities are subject to the UBIT and thereby help ensure state credit union payment of the tax. While state-chartered credit unions are required to file information returns (Form 990), the group returns that many of them file constrain IRS's ability to scrutinize credit union activities related to UBIT because the group returns convey little information about individual credit unions. Finally, the transparency of executive compensation is an important issue for private and public companies alike. In the private sector, SEC's recent efforts to increase the transparency of publicly held companies underscore the importance of enhancing accountability and greater disclosure of information. In contrast, credit union executive compensation is not transparent due to the lack of information available to the public. Increased public opportunities to review executive salaries would promote greater credit union accountability, similar to requirements for publicly held companies.
While the Form 990 is an avenue for increasing both the quantity and transparency of publicly available information about executive compensation at credit unions, federal credit unions are not required to file the form. However, the public could be given other opportunities to review credit union activities. For example, NCUA could require all federally insured credit unions to include compensation and benefit data for senior executive officers in the call reports that are submitted on a quarterly basis—an option that NCUA officials indicated was under consideration. Or, NCUA could require federal credit unions to disclose or make available credit union records, such as senior executive salary information, to members during annual meetings. To help ensure that credit unions are fulfilling their tax-exempt mission of providing financial services to their members, especially those of low or moderate incomes, we recommend that the Chairman of NCUA systematically obtain information on the income levels of federal credit union members to allow NCUA to track and monitor the progress of credit unions in serving low- and moderate-income populations. NCUA's recent pilot survey to measure the income of credit union members could serve as a starting point for obtaining more detailed information on credit union member income. Ideally, NCUA should expand its survey to allow the agency to monitor member income characteristics by credit union charter type, obtain information on the financial services that low- and moderate-income members actually use, and monitor progress over time. To increase the transparency of executive compensation and enhance the accountability of credit unions, we recommend that the Chairman of NCUA take action to ensure that information on federal credit union executive compensation is available to credit union members and the public for review and inspection. To achieve this, NCUA may want to consider options such as requiring federal credit unions to include specific information on executive compensation in call reports or issuing regulations that would require all federal credit unions to make executive compensation information available to members at annual meetings. We provided a draft of this report to the Chairman of NCUA and the Commissioner of IRS for their review and comment. We received written comments from NCUA that are summarized below and reprinted in appendix V. In addition, we received technical comments from IRS that have been incorporated into this report as appropriate. In its comment letter, NCUA indicated that the agency's staff have recommended that the NCUA board consider taking actions consistent with the recommendations made in our report. NCUA, however, expressed concerns with certain important aspects of the draft report. In particular, NCUA stated in its letter that a meaningful comparison between federally chartered credit unions and other financial institutions should include an in-depth assessment of their structural and governance differences. NCUA also noted that the substantive differences among federal credit unions in charter types and fields of membership significantly affect, among other things, whom credit unions serve and how they operate and provide services. We agree that there are important structural and governance differences between credit unions and other depository institutions, which are highlighted in the report.
For example, page one of the draft and current report notes that credit unions, unlike banks, are (1) not-for-profit entities that build capital by retaining earnings (they do not issue capital stock); (2) member-owned cooperatives run by boards elected by the membership; (3) subject to field of membership requirements that limit membership to persons sharing certain circumstances, such as a common bond of occupation or association; and (4) exempt from federal income tax. Additionally, we agree that differences in charter types and fields of membership are important factors that should be considered in assessing whom credit unions serve. However, as we note in the report, statistically reliable data on credit union members by charter type and field of membership were not available at the time of our review. The lack of this type of data was the primary basis for the report's recommendation that NCUA systematically obtain information on the income levels of federal credit union members. We are encouraged by NCUA's pilot effort to obtain information on the income levels of federal credit union members and continue to believe the value of the information collected would be greatly enhanced if NCUA were to continue its data collection efforts and address some of the limitations of the pilot. Specifically, NCUA's data collection efforts could be strengthened by (1) providing benchmark data, such as general population income statistics or other appropriate measures, to allow comparisons with the data collected on the income levels of credit union members; (2) obtaining data on the extent to which services offered by credit unions (e.g., free checking accounts, no-charge ATMs, and low-cost wire transfers) are being used, by income category; (3) expanding the data collection effort to allow the results to be projectable by charter type; and (4) conducting the study on a systematic or periodic basis to assess progress over time. NCUA's letter also stated that it was inaccurate and inappropriate to measure the success of federally chartered credit unions in serving persons of modest means by reference only to the low- and moderate-income categories associated with the Community Reinvestment Act. Specifically, NCUA noted that there was legal and historical evidence that the term modest means, as used by Congress in the context of the Federal Credit Union Act, is intended to include a broader range of individuals than those in low- and moderate-income categories. As we noted in the report, neither the Federal Credit Union Act nor NCUA has established a definition of what constitutes modest means. Thus, we used the group consisting of low- and moderate-income households as a proxy for persons of modest means for the purposes of our analysis. This allowed us to use the definitions established for the Community Reinvestment Act as the basis for the income categories used in our analysis. Our analysis included comparisons between credit unions and banks not only for low- and moderate-income households but also for middle- and upper-income households for both the 2001 and 2004 SCF. This analysis shows that between 2001 and 2004 credit unions continued to serve a higher proportion of middle- and upper-income households and a smaller proportion of low- and moderate-income households than did banks.
In its letter, NCUA noted that our income category benchmarks were inconsistent with the specific definitions of the CRA categories the other federal financial regulators use—specifically, the use of national rather than local median income for our benchmarks. Because the most comprehensive and statistically reliable data available on the income characteristics of credit union and bank customers at the time of our review—the Federal Reserve's Survey of Consumer Finances—were nationally representative, we used national median income measures as the basis for our income categories, whereas the categories used for the Community Reinvestment Act are based on more local measures. NCUA's letter also expressed concerns about the reliability of conclusions reached using the Federal Reserve's Survey of Consumer Finances data. Specifically, NCUA noted that the SCF was not designed for reliable income comparisons between credit union members and bank customers. As we noted in our draft and current report, we agree that the SCF was not specifically designed to conduct comparative analyses of the income levels of bank and credit union customers; however, the SCF provides the best data currently available to undertake such a comparison. As we reported in 2003, we analyzed the SCF because it is a respected, nationally representative source of publicly available data on financial institution and consumer demographics and because it was the only comprehensive source of publicly available data with such information that we could identify. Moreover, our draft and current report noted limitations in SCF data that preclude drawing definitive conclusions about the income characteristics of credit union members. NCUA also provided additional detailed written comments as an enclosure to its letter, which we have reprinted in appendix V with our responses. As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of the report to the Ranking Member, House Committee on Ways and Means; other interested congressional committees and subcommittees; the Chairman, NCUA; and the Commissioner, IRS. We will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact Yvonne D. Jones at (202) 512-8678. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. See appendix VI for a list of other staff who contributed to the report. Our report objectives were to (1) assess the effect of the 1998 Credit Union Membership Access Act (CUMAA) on federal credit union membership and charter expansion, (2) review the National Credit Union Administration's (NCUA) efforts to expand credit union services to low- and moderate-income individuals, (3) compare rates offered by credit unions with those of comparably sized banks as one indicator of how the tax exemption might benefit credit union members, (4) discuss issues associated with the application of the federal unrelated business income tax (UBIT) to credit unions, and (5) assess the transparency of credit union executive and board member compensation.
To study the impact of CUMAA on federal credit union membership and charter expansion, we reviewed and analyzed the legislative history of CUMAA and compared its provisions with NCUA interpretive rulings and policy statements in effect before and after the enactment of CUMAA. In addition, we interviewed NCUA officials and industry representatives and met with credit union and banking trade groups—including the National Association of Federal Credit Unions, National Association of State Credit Union Supervisors (NASCUS), Credit Union National Association, America's Community Bankers, and Independent Community Bankers—to obtain their viewpoints on how CUMAA and NCUA regulation affected credit union chartering and fields of membership. To obtain information about state credit union chartering and fields of membership, we held discussions with NASCUS and reviewed documentation it provided. Finally, we obtained electronic files from NCUA that contained annual call report financial data (Form 5300) for all federally chartered credit unions for year-ends 2000 through 2005. The information included the number of credit unions, actual and "potential" membership (that is, people within a credit union's field of membership but not members of the credit union), assets, charter approvals, charter conversions, and charter expansions. To identify the results of NCUA programs intended to expand credit union services to low- and moderate-income individuals and underserved areas, we analyzed NCUA call report data for low-income-designated credit unions and credit unions that expanded into underserved areas for year-ends 2000 through 2005. The data included information on the number of credit unions participating in these programs, their asset size, and their membership. We reviewed NCUA-established procedures for verifying the accuracy of the Form 5300 database and found that the data are verified on a yearly basis, either during each credit union's examination or through off-site supervision. In addition, we cross-checked the December 2000 to December 2002 data that we recently received against the same data in our 2003 report. We determined that the data were sufficiently reliable for the purposes of this report. Further, we analyzed existing data on the income levels of credit union customers. Specifically, we analyzed both the 2001 and 2004 releases of the Board of Governors of the Federal Reserve System's (Federal Reserve) Survey of Consumer Finances (SCF). The SCF is conducted every 3 years and is intended to provide detailed information on the balance sheets, pensions, incomes, and demographics of U.S. households and their use of financial institutions. Because some households use both banks and credit unions, we performed our analyses based on the assumption that households can be divided into four user categories—those who use credit unions only, those who primarily use credit unions, those who use banks only, and those who primarily use banks. "Primarily use" banks (or credit unions) means placing more than 50 percent of a household's assets in banks (or credit unions). As in our prior report, we created four income categories based on those used by financial regulators as part of Community Reinvestment Act examinations—low, moderate, middle, and upper—to classify these households (see table 5).
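To make the classification concrete, the following is a minimal Python sketch of the logic just described. The function names and example figures are ours for illustration; the asset threshold and the income cutoffs (less than 50 percent, 50 to less than 80 percent, 80 to less than 120 percent, and 120 percent or more of the national median) are the ones used in the analysis.

    def user_category(cu_assets, bank_assets):
        # Classify a household by where it holds its financial assets.
        # Households using neither institution type are set aside; an exact
        # 50/50 split is assigned to "primarily bank" here for simplicity,
        # an assumption of this sketch rather than a rule from the report.
        total = cu_assets + bank_assets
        if total == 0:
            return "neither"
        if bank_assets == 0:
            return "credit union only"
        if cu_assets == 0:
            return "bank only"
        return "primarily credit union" if cu_assets / total > 0.5 else "primarily bank"

    def income_category(income, national_median):
        # CRA-based categories used in the report.
        ratio = income / national_median
        if ratio < 0.5:
            return "low"
        if ratio < 0.8:
            return "moderate"
        if ratio < 1.2:
            return "middle"
        return "upper"

    # Invented example: $30,000 in a credit union, $10,000 in a bank, and
    # $30,000 of income against an illustrative $42,000 national median.
    print(user_category(30000, 10000))    # primarily credit union
    print(income_category(30000, 42000))  # moderate ("modest means" proxy group)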
As in our 2003 report, we were unable to find a definition of "modest means"; thus, to assess the extent to which credit unions served people of "modest means," we combined households with low or moderate incomes into one group as a proxy for modest means. Finally, we discussed with NCUA officials the design and methodology of its ongoing pilot project to measure the income levels of federal credit union members. We also discussed with NASCUS officials their effort to measure the income levels of state-chartered credit union members. To compare the rates of credit unions with those of comparably sized banks, we engaged the services of Datatrac Corporation—a market research and information technology company specializing in the financial services industry—to provide data on 15 loan and savings products offered by credit unions and banks. Datatrac calculated the average rates for each of these products by five distinct asset-size peer groups for about 2,000 credit unions and 4,000 banks (see table 6). We established the peer groups based on each institution's size as measured by total assets. Datatrac obtained asset information for each institution by combining information in its database with call report data for each institution. Datatrac computed average rates for institutions overall and for all institutions within analysis groups. In computing these simple averages, individual institution rates were not weighted to reflect loan volume or other measures of size. Datatrac provided us with an electronic file containing information for 2000 through 2005. The information included (1) institution type, (2) average rate, (3) maximum rate, (4) minimum rate, (5) standard deviations, (6) product name, (7) quarter and year, and (8) institution counts. We interviewed Datatrac officials to confirm that they followed industry-accepted protocols to ensure data integrity, including input and processing controls. We also reviewed Datatrac's methodological documentation. In addition, we conducted reasonableness checks on the data we received and identified data gaps in the year-end 2003 information. Datatrac examined its processing procedures and explained to us that its cut-off date had been incorrectly designated 1 week later than planned. At the same time, Datatrac verified that the same problem did not exist in any other quarter of the years 2000 through 2005. Datatrac provided us an updated electronic file reflecting the corrections. We determined that the revised data were sufficiently reliable for the purposes of this report. To review issues related to the application of UBIT to credit unions, we reviewed the legislative history of UBIT and the historical basis for the tax-exempt status of credit unions and met with representatives of the Internal Revenue Service (IRS) to discuss UBIT filing and reporting requirements. We also discussed with IRS officials their examinations of unrelated business activity at state-chartered credit unions and their development of policies and procedures in this area. We also obtained information from IRS on the number and types (group versus individual) of Return of Organization Exempt from Income Tax (Form 990) and Exempt Organization Business Income Tax Return (Form 990-T) filings by state-chartered credit unions and the amount of unrelated business income reported and taxes paid by state-chartered credit unions for tax years 2000 through 2004.
To provide information on the transparency of credit union executive and board member compensation, including an assessment of the availability of compensation data to credit union members and a comparison of executive compensation at credit unions and comparably sized banks, we interviewed officials at NCUA and IRS to discuss executive compensation reporting requirements. We obtained and analyzed examiner guidance on compensation from NCUA and the other federal banking regulators—the Federal Deposit Insurance Corporation, Federal Reserve, Office of the Comptroller of the Currency, and Office of Thrift Supervision. We also met with credit union and banking trade groups, including the National Association of Federal Credit Unions, NASCUS, Credit Union National Association, America's Community Bankers, and Independent Community Bankers, to identify publicly available data regarding the compensation of credit union and bank senior executives. We reviewed and analyzed selected credit union and bank compensation surveys. For more information on the surveys and our analysis, see appendix IV. We also met with NCUA to discuss its efforts to collect federal credit union executive compensation data. Using the methodology that we employed in our prior report, our analysis of data from the 2001 and 2004 releases of the Federal Reserve SCF indicated that credit unions continued to serve a lower proportion of low-income households than banks in the years analyzed. As we reported in 2003, we analyzed the SCF because it is a respected, nationally representative source of publicly available data on financial institution and consumer demographics and because it was the only comprehensive source of publicly available data with such information that we could identify. While it is the best publicly available data that we could identify, there are limitations in SCF data that preclude drawing definitive conclusions about the income characteristics of credit union members. In an effort to provide greater context, in this appendix we also present the results of additional analyses of the 2001 and 2004 SCF data that we conducted. The SCF is conducted every 3 years and is intended to provide detailed information on the balance sheets, pensions, incomes, and other demographics of U.S. households and their use of financial institutions. The survey is based on approximately 4,500 interviews and is weighted to represent more than 100 million households. For each of the 2001 and 2004 SCF releases, we combined the SCF data into two main groups—households that only and primarily used credit unions (credit union customers) and households that only and primarily used banks (bank customers). Our analyses of 2001 and 2004 SCF data indicated that, among households that used a financial institution, those households that we identified as bank customers outnumbered those that we identified as credit union customers by a large margin (see table 7). Because such a high percentage of the U.S. population represented by the SCF only used banks, the data obtained from the SCF are particularly useful for describing characteristics of bank users but much less precise for describing smaller population groups, such as those that only used credit unions.
It should be noted that the SCF was not specifically designed to conduct comparative analyses of the income levels of bank and credit union customers, and the pool of bank customers is not necessarily comparable to the pool of credit union customers. We found that credit union customers had a higher median income than bank customers in both the 2001 and 2004 SCF releases. In the 2001 SCF, the median income of all households was $39,000; bank customers had a median income of $40,000, and credit union customers had a median income of $44,000. In the 2004 SCF, the median income of all households was $42,000; bank customers had a median income of $43,000, and credit union customers had a median income of $50,000. We computed the proportions of credit union customers and bank customers in each of four income categories—low, moderate, middle, and upper. As in our 2003 report, we based our income groups on income categories used by financial regulators for federal Community Reinvestment Act examinations in an effort to provide a consistent framework, given that "modest means" is not clearly defined. For our primary analysis of 2001 and 2004 SCF data, we used 2000 and 2003 median household income as reported by the U.S. Census Bureau; for our additional analyses of 2001 and 2004 SCF data, we used 2000 and 2003 median family income as reported by the U.S. Census Bureau (see tables 8 and 9). It should be noted that the categories that we use here, which we introduced in our 2003 report, are based on a national median income measure, whereas the categories used for the Community Reinvestment Act are based on more local measures. As noted earlier in the report, our primary analysis of 2004 SCF data suggested that credit unions served a lower proportion of households of modest means (low- and moderate-income households, collectively) than banks, a result consistent with the finding in our 2003 report analyzing the 2001 SCF data (see tables 10 and 11). In an effort to determine how sensitive these results were to our income categorization, we also used the median family income for 2000 and 2003 to analyze the 2001 and 2004 SCF data. As shown in tables 12 and 13, the results from our additional analyses were similar to those of our primary analysis. While the median family income was higher than the median household income in each year, the results continue to suggest that a greater proportion of bank than credit union customers were of modest means. This difference between banks and credit unions was statistically significantly different from zero in the 2004 SCF; there was also a statistically significant decline in the proportion of credit union customers of modest means between the 2001 and 2004 SCF data. Thus, while the results of our analyses should not be considered definitive, they do suggest that any impact from the recent efforts by NCUA to increase credit union membership among the underserved and low- and moderate-income households has not yet appeared in the data. We also considered the median income of bank and credit union customers within each of our income categories for both the primary and additional analyses to assess whether there were any notable differences between credit union and bank customers (see tables 14 through 17). We found that the income characteristics of the customers tended to be similar; however, the median income in the upper-income category tended to be higher for bank customers.
Data that we obtained indicate that credit unions offer more favorable rates on average than similarly sized banks for a number of savings products and consumer loans. However, similarly sized credit unions and banks appeared to offer virtually the same rates on mortgage loans, such as 15- and 30-year fixed-rate mortgages. We engaged the services of Datatrac Corporation—a market research and information technology company specializing in the financial services industry—to gather and analyze data on rates for 15 loan and savings products (5 consumer loan, 3 mortgage loan, and 7 savings products) that were offered from 2000 through 2005 at about 2,000 credit unions and 4,000 banks. Financial institutions voluntarily provide data to Datatrac on a weekly basis for inclusion in the company's database. Therefore, the information presented is not necessarily statistically representative of the entire banking and credit union industry. Datatrac calculated the average rates for each of these products by five distinct asset-size peer groups: total assets of $100 million or less; total assets greater than $100 million but less than or equal to $250 million; total assets greater than $250 million but less than or equal to $500 million; total assets greater than $500 million but less than or equal to $1 billion; and total assets greater than $1 billion but less than or equal to the asset size, rounded up to the nearest billion dollars, of the largest credit union. Datatrac computed average rates for institutions overall and for all institutions within analysis groups. In computing these simple averages, individual institution rates were not weighted to reflect loan volume or other measures of size. While Datatrac Corporation's database contained data provided by about 2,000 credit unions and 4,000 banks, data were not always obtained from all the credit unions and banks for every product or time period in each of the five asset groupings. We identify all instances in which the information presented was based on rate data provided by fewer than 10 institutions. Additionally, because averages based on a small number of institutions may be unreliable, we did not report instances in which rate data were provided by fewer than 5 institutions. Figures 7 through 23 provide a detailed comparison of rates on savings and loan products offered by credit unions with those at similarly sized banks for the 6-year period from 2000 to 2005.
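As a rough illustration of the unweighted averaging and small-sample rules just described, consider the following Python sketch; the function and rate figures are ours for illustration and are not Datatrac's actual code or data.

    def summarize_rates(rates, min_report=5, min_reliable=10):
        # Unweighted (simple) average of institution rates within one peer
        # group, mirroring the rules described above: averages based on
        # fewer than 5 institutions are not reported, and averages based
        # on fewer than 10 are flagged.
        n = len(rates)
        if n < min_report:
            return None  # too few institutions to report
        return {
            "average_rate": sum(rates) / n,  # no weighting by volume or size
            "institutions": n,
            "small_sample": n < min_reliable,
        }

    # Invented example: 30-year fixed mortgage rates (in percent) reported
    # by seven banks in one asset-size peer group.
    print(summarize_rates([5.9, 6.1, 6.0, 6.2, 5.8, 6.0, 6.1]))
    # {'average_rate': 6.0142..., 'institutions': 7, 'small_sample': True}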
Credit union and bank survey information we obtained provides an indication of executive base salaries in the respective industries. The credit union and bank salary survey data we identified had a key limitation—the information was not directly comparable because of differences in the underlying sampling strategies and data-gathering methodologies. Also, while both surveys report the types of cash compensation received by their industries' executives (i.e., salary and bonuses), we were not able to identify and compare other forms of benefits that an executive might typically receive in a compensation package. There were a number of other limitations in the data that we identified. In some instances, the information collected for each of the surveys involved a sample of members belonging to the respective trade group associations. The data collection periods for the surveys were also different. For instance, the credit union survey collected salary information between January and May 2005, while the bank survey collected information during 2004. The bank survey also provides general information on other benefits—such as savings incentive plans, pension plans, and paid time off—for which we do not have comparable information in the credit union survey. Also, cash compensation reported in the credit union survey includes base salary, incentives, and bonuses, while the bank survey reported base salary, bonus, and profit-sharing compensation. Finally, the cash compensation information presented in these surveys is grouped in different asset-size ranges. The credit union survey presents information based on 13 asset-size categories, while the bank survey presents information based on 7 asset-size categories. According to the Credit Union National Association's 2005 to 2006 Complete Credit Union Staff Salary Survey, the average base salary of credit union presidents, chief executive officers (CEO), and managers at responding credit unions increased 4.8 percent from the previous year's survey. In addition to base salary, more than half (55 percent) of credit union presidents, CEOs, and managers also received other forms of cash compensation, such as incentives or bonuses. For CEOs, incentives averaged $9,634, while bonuses averaged $4,993. The survey also noted that bonuses continue to be more common than incentives (45 percent compared with 5 percent receiving these payments, respectively, in 2004). As shown in figure 25, the average credit union base salary for the CEO position was about $78,000, while the average base salaries for the chief financial officer (CFO) and chief operating officer (COO) positions were approximately $73,000 and $64,000, respectively. However, national averages should be viewed with care since executive salaries also vary by region and by the size of the credit union. Similarly, according to the survey, credit union executives at responding credit unions, including CFOs and COOs, experienced about a 2 percent increase in average salary over the previous year. Of those that responded to the survey, approximately 27 percent of CFOs received incentives, which averaged $5,963, and 38 percent received a bonus, which averaged $4,650. Additionally, approximately 21 percent of COOs received incentives, which averaged $5,678, while 47 percent received a bonus, which averaged $3,578. The number of responses to the survey questions on the three credit union executive positions also varied from question to question and across the different asset categories. For instance, a total of 773 credit unions responded to the president/CEO question, but the responses by asset category ranged from a low of 16 responses by credit unions with assets of $1 to $2 million to a high of 113 responses by credit unions with assets of $100 to $200 million. A total of 330 credit unions responded to the CFO question, with responses by asset category ranging from a low of 2 responses by credit unions with assets of $5 to $10 million to a high of 74 responses by credit unions with assets of $100 to $200 million. Finally, 268 credit unions responded to the COO question, with responses by asset category ranging from a low of 2 responses by credit unions with assets of $5 to $10 million to a high of 65 responses by credit unions with assets of $100 to $200 million. According to America's Community Bankers 2005 Compensation Survey, the national average base salary for CEOs at responding banks was up 13.2 percent from the average reported in 2004. The average bonus/profit-sharing payment for CEOs was $73,129.
Similarly, the national average base salary for CFOs at responding banks was up 10.8 percent from 2004, while the average bonus/profit-sharing compensation was $28,700. The base salary for COOs at responding banks was up 8.8 percent from 2004, while the average bonus/profit-sharing compensation was $32,697. As shown in figure 26, the average bank base salary for the CEO position was about $213,000, while the average base salaries for the CFO and COO positions were approximately $121,000 and $141,000, respectively. As mentioned previously, national averages should be viewed with care since executive salaries also vary by region and by asset size. The bank executive survey responses also varied by the total number of respondents and by asset category. For instance, a total of 358 banks responded to the president/CEO question, with responses by asset category ranging from a low of 16 banks with assets up to $50 million to a high of 74 banks with assets of $101 to $200 million. A total of 256 banks responded to the CFO question, with responses by asset category ranging from a low of 2 banks with assets up to $50 million to a high of 49 responses by banks with assets of $501 million to $1 billion. Finally, 187 banks responded to the COO question, with responses by asset category ranging from a low of 8 banks with assets up to $50 million to a high of 38 banks in both the $101 to $200 million and $501 million to $1 billion categories. Due to the small number of responses in some instances, the results of these data should be viewed with caution. Thank you for the opportunity to review and comment on the draft GAO Report (Report) entitled "Greater Transparency Needed on Who Credit Unions Serve and on Senior Executive Compensation Arrangements." On behalf of the National Credit Union Administration (NCUA), I would like to express our appreciation for the professionalism of your staff and our gratitude for the dialogue that occurred through the course of GAO's study. NCUA believes that dialogue was helpful in developing a better mutual understanding of the complexity of the issues addressed in the Report and the conflicts that arise when considering the mission and purpose of federal credit unions (FCU) in the context of today's financial marketplace. It is unfortunate GAO did not have available at the time of drafting the Report the results of NCUA's Member Service Assessment Pilot Program (MSAP) (Enclosure 1), since MSAP includes significant new data on FCUs. Importantly, MSAP provides meaningful and accurate information on FCU membership profiles, as well as an assessment of the data collected. This assessment is critical for an objective analysis of the data. It also demonstrates any conclusions reached must consider FCU structure and operations, and the significant differences between other financial institutions and FCU charter types. As outlined in greater detail in the enclosed response to the Report (Enclosure 2), NCUA does have continued concerns with certain important aspects of the Report. NCUA believes that a meaningful comparison between FCUs and other financial institutions must include an in-depth assessment of their structural and governance differences. Furthermore, comparisons among FCUs must consider charter types and field of membership differences.
These substantive differences significantly impact who credit unions serve, how they operate and provide services, how they develop and maintain their net worth and working capital, and how they affect the continued viability of the FCU system. Such a framework is missing in the Report, thus limiting its reliability. NCUA also believes it is inaccurate and inappropriate to measure the success of FCUs in serving persons of modest means by reference only to the low- and moderate-income categories associated with the Community Reinvestment Act (CRA), as these categories only extend to families at or below 80 percent of the median income. There is ample legal and historical evidence that the term modest means, as used by Congress in the context of the FCU Act, is intended to include both below-average wage earners and a broader class of working individuals generally. Further, GAO elected to use income category benchmarks that are inconsistent with the specific definitions of the CRA categories used by the other federal financial regulators. Using broad income categories and equating modest means to low- and moderate-income individuals precludes a valid assessment of the economic demographics of FCU membership. Additionally, NCUA has serious concerns about the reliability of conclusions reached using the Federal Reserve's Survey of Consumer Finance (SCF) data. The SCF was not designed for reliable income comparisons between credit union members and bank customers. Other concerns, addressed in Enclosure 2, include the importance of credit union membership limits, the effects of recent trends in community chartering, and proper recognition of NCUA's efforts to target services to lower income individuals. Regarding the recommendations made in the Report, NCUA staff recommended in MSAP that the NCUA Board consider whether it is appropriate to gather additional membership data to further enhance NCUA's efforts in expanding credit union service to low- and moderate-income individuals. NCUA staff also recommended that the NCUA Board consider evaluating alternative approaches to collecting and aggregating executive compensation on an FCU system basis. Notwithstanding the continued concerns listed above and described in greater detail in Enclosure 2, I again want to emphasize our great appreciation for the efforts of your staff and their willingness to consider our concerns and engage in open and meaningful dialogue.

J. Leonard Skiles
Executive Director

Enclosures:
1. Report to the NCUA Board on the Member Service Assessment Pilot Program (MSAP), dated November 3, 2006
2. NCUA's Detailed Response to GAO's Draft Report GAO-07-29

NCUA's Detailed Response to GAO Draft Report GAO-07-29, "Greater Transparency Needed on Who Credit Unions Serve and on Senior Executive Compensation Arrangements"

The following discussion addresses our primary concerns with the GAO Draft Report, GAO-07-29 (Report). These concerns include: (1) inaccurate use of low- and moderate-income as a proxy for modest means; (2) inappropriate use of income categories ostensibly based on CRA categories; (3) improper reliance on the Federal Reserve's Survey of Consumer Finance; (4) insufficient discussion of the structure and framework of FCUs; (5) insufficient discussion of NCUA's efforts to enhance service to low- and moderate-income individuals; and (6) incomplete data on executive compensation.

1. GAO's Definition of "modest means"

It is inaccurate for GAO to define the term modest means as only including low- and moderate-income individuals.
To use a proxy definition for modest means, although convenient for drafting the Report, contradicts clear congressional intent and disregards important statutory mandates on whom FCUs can serve. NCUA strongly believes that using the terms modest means and low- and moderate-income individuals interchangeably creates confusion and a perception inconsistent with statutory intent and regulatory policies put in place to achieve that intent. While the Report recognizes in footnote 30 on page 26 that there is no commonly accepted definition of modest means, the following statement on page 6 equates low- and moderate-income to modest means: "[T]he Federal Reserve's 2004 Survey of Consumer Finance (SCF) . . . indicates that credit unions continued to lag behind banks in the percentage of their customers or members that were of low- and moderate-income households. Our analysis of the 2004 SCF indicated that 32 percent of households that only and primarily used credit unions were of modest means (emphasis added). . . ." The history of the Credit Union Membership Access Act of 1998 (CUMAA) demonstrates congressional intent when the term "modest means" was used. This term was first introduced in proposed amendments to the FCU Act in 1998 describing the mission of credit unions. Although these amendments were not adopted in the final version of CUMAA, the House Report accompanying the proposed bill noted: "Section 204 reaffirms the continuing and affirmative obligation of insured credit unions to meet the financial services needs of persons of modest means, including those with low- and moderate-incomes, consistent with safe and sound operation." (H.R. REP. NO. 105-472, at 22 (1998) (emphasis added).) The Senate Report followed a similar usage in referring to section 204 of the bill. Specifically, the Senate Report also discussed the calling of credit unions to serve the entire range of membership and to provide "affordable credit union services to all individuals of modest means, including those with low- and moderate-incomes, within the field of membership of such credit union." (S. REP. NO. 105-193, at 11 (1998) (emphasis added).) These congressional views reflect the clear understanding that the term modest means indicates a meaning broader than individuals with low- and moderate-income, and those that meet the definition of modest means must also be within the field of membership (FOM). In this respect, the term, though not specifically defined, conforms explicitly with its earlier counterpart, "small means," as a shorthand reference to members of the broad working class. CUMAA also served notice that outreach programs to reach low- and moderate-income individuals, and the support for credit unions designated to serve low-income memberships, should still continue. Additional authorities granted to low-income designated credit unions, and the ability for multiple common bond FCUs to adopt underserved areas, are also consistent with a more expansive definition of modest means.
2. Use of CRA-type definitions for income levels

The Report, in footnote 27 on page 24, provides an explanation for the use of the Federal Reserve's Survey of Consumer Finance (SCF) and income categories, and states: "We based our groups on income categories used by financial regulators for federal Community Reinvestment Act examinations intended to encourage depository institutions to help meet credit needs in all areas of the communities that they serve: (1) a low-income household had an income of less than 50 percent of the national median household income; (2) a moderate-income household had an income of at least 50 percent of but less than 80 percent of the national median household income; (3) a middle-income household had an income of at least 80 percent of but less than 120 percent of the national median household income; and (4) an upper-income household had an income of at least 120 percent of the national median household income." This footnote does not accurately reflect the income categories established by the federal financial regulators for CRA examinations and contradicts Table 5 in Appendix I of the Report. The income categories identified in the Code of Federal Regulations for CRA purposes are based on median family income as a percent of metropolitan statistical area (local area) median family income. (See 12 C.F.R. §§ 228.12(b) and (m) (Federal Reserve), 345.12(b) and (m) (FDIC), 25.12(b) and (m) (OCC), and 563e.12(b) and (m) (OTS).) The income categories utilized in the Report use median household income as a percent of national (not local area) median household income. Although the Report utilizes median family income in its additional analysis, this not only contradicts the SCF's methodology but also does not correct the CRA inconsistency. Consequently, the statement that the income levels used are similar to those used in other governmental programs is misleading and implies the analyses are based on CRA income categories when, in fact, the income categories are GAO-defined. Additionally, footnote 27 illustrates that CRA is intended to "encourage depository institutions to help meet credit needs in all areas of the communities that they serve. . ." Given that 80 percent of FCUs are occupational or associational based, the CRA-type categories have limited, if any, applicability for the assessment of FCUs.

3. Basing assessment on the Federal Reserve's Survey of Consumer Finance

NCUA recognizes the lack of reliable data to serve as a basis for valid conclusions regarding the income distribution of FCU members at the time of the drafting of the Report. NCUA also accepts that the SCF was the only source of data available that provided income figures, albeit of limited application, for FCU members. As correctly pointed out by GAO, the SCF was not designed to analyze credit union member income distribution or make comparisons between credit union members and bank customers. For example, the SCF does not provide the proportional representation of credit union members and bank customers necessary to develop valid conclusions pertaining to income distribution. (The number of households primarily using credit unions included in the 2004 SCF is only 14 percent of those surveyed;
3. Basing Assessment on the Federal Reserve's Survey of Consumer Finance

NCUA recognizes the lack of reliable data to serve as a basis for valid conclusions regarding income distribution of FCU members at the time of the drafting of the Report. NCUA also accepts that the SCF was the only source of data available that provided income figures, albeit of limited application, for FCU members. As correctly pointed out by GAO, the SCF was not designed to analyze credit union member income distribution or make comparisons between credit union members and bank customers. For example, the SCF does not provide the proportional representation of credit union members and bank customers necessary to develop valid conclusions pertaining to income distribution. The number of households primarily using credit unions included in the 2004 SCF is only 14 percent of those surveyed, and the number of FCU member households included in this small number is unknown (see page 63 of the Report). Notwithstanding these known deficiencies, the SCF is the primary source for the conclusions reached in the Report, which has the potential for misleading assessments about whom credit unions serve compared to banks.

Not only are the SCF data of limited value for such comparisons, they are also insufficient for providing a comprehensive view of member incomes. The use of additional tables, in the body of the report, depicting the same data in various ways would have allowed a more complete view of member incomes. For example, the Federal Reserve uses income percentiles in its assessment of the SCF, which provides a more objective presentation of income distribution than the broad income categories used in the Report. Table 1, compiled by NCUA to illustrate other alternatives for SCF data analysis, presents the data used in the Report based on these income percentiles. [Table 1, which arrayed the shares of bank and credit union households across the Federal Reserve's income percentile groups (one group's income range, for example, was $89,301 to $129,400), is not reproduced here.] Further, including both average and median incomes for comparative purposes, rather than using only the median as reflected in the Report, provides for a more complete view of member incomes. According to the SCF results, and as demonstrated in Table 2, which NCUA compiled using the median income included in the Report, while credit union members have the highest median income, bank customers have the highest average income. [Table 2 is not reproduced here.] Finally, the Report's additional analysis using median family income for comparison is inconsistent with the SCF methodology, which utilized median household income. Therefore, this comparison does not add validity to the results of the study since it only changes the comparative benchmark.
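NCUA's median-versus-average point can be illustrated with a few invented numbers (these are not SCF figures): when one group's income distribution has a long upper tail, that group can show the higher average while the other group shows the higher median. A minimal Python sketch:

from statistics import mean, median

# Invented incomes for illustration only -- not SCF data.
credit_union_members = [32000, 45000, 58000, 61000, 70000]
bank_customers = [25000, 30000, 52000, 64000, 400000]

# Credit union members have the higher median ($58,000 versus $52,000),
# but the single $400,000 income pulls the bank customers' average up
# ($114,200 versus $53,200), so the two measures rank the groups differently.
print(median(credit_union_members), mean(credit_union_members))
print(median(bank_customers), mean(bank_customers))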
4. Providing a limited framework for credit union membership assessment

Although the Report correctly recognizes that credit unions retain their distinction in terms of structure and governance, it does not provide a framework that would allow for an appropriate interpretation of the assessments presented. For example, factual background information about credit unions and their important differences from banks, which is vital for an understanding of this issue, is not adequately addressed. To fully understand and assess any data that attempts to compare credit union members with depositors in other types of financial institutions, the Report should include discussion of the following:

A. Statutory limitations on FCU membership

MSAP data confirms the importance of the statutory mandate concerning common bond when assessing membership profiles. It also confirms that comparisons with other financial institutions, as well as among different charter types of FCUs, are difficult. FCUs are chartered as cooperatives to serve individuals only within their FOM. They are, therefore, limited in whom they can serve and are restricted to the income composition of the individuals within their allowed FOM. It is misleading to draw definitive conclusions about the success of FCUs in serving individuals and groups outside their traditional membership base without fully focusing on their authorized FOMs. This is particularly important in view of the fact that, as of December 2005, approximately 80 percent of all FCUs had single- or multiple-common bond charter types based on occupation or association. The implication of this FOM concentration, based primarily on working individuals, is far reaching within the context of assessing the membership profile of FCUs.

Understanding statutory limitations on who can join FCUs is critical in conducting an objective assessment of the FCU system membership profile, any policy consideration on who benefits from credit union services, and the impact of FCUs on the financial sector. The statutory limitations also emphasize the differences between FCUs and banks and draw into question the reasonableness of any general comparison between the income distribution of FCU members and bank customers. To conduct a reasonable comparative assessment of whom FCUs and banks serve, both types of institutions would need to have a similar structure and other characteristics. Although community-chartered FCUs and community banks may share some similarities relative to location, structurally, community-chartered FCUs remain cooperatives with the limitations of building capital/net worth, geographic constraints, and numerous other restrictions.

B. Composition of the FCUs

The Report provides an extensive review of the characteristics and growth patterns of community-chartered FCUs. However, the proportion of FCUs that are community-chartered, the need for charter conversions to ensure continued viability, and the challenges community-chartered FCUs face when converting from a single or multiple common bond charter, as well as other issues, are not thoroughly addressed. For example:

1. Despite recent growth in FCU community charters, they still only represent approximately 20 percent of FCUs and 30 percent of FCU membership. This is a significant portion of the FCU system, but, as noted in the Report, this growth has primarily been within the last five years. Additionally, it should be emphasized that much of this growth is a result of FCUs converting from an already existing occupational or associational FOM. Instead, the Report concentrates on the growth of this subset when characterizing the entire FCU system, in particular the perceived "blurring" of the distinction between FCUs and other depository institutions.

2. A thorough assessment of the causes for the recent community charter conversions is not provided. The primary reason for these conversions has been to ensure continued viability of FCUs in changing economic and financial industry environments. A review of several examples documents this point. Clearview FCU (formerly US Airways); Bethpage FCU (formerly Grumman); JAX FCU (formerly Jacksonville Naval Base); and New Cumberland FCU (formerly New Cumberland Army Depot Defense Distribution Center) all converted to community charters in response to changes in their primary sponsors.

3. The time necessary to successfully implement a different business model when converting to a community charter is not adequately addressed. This is critical since the cutoff for the SCF data is 2003, yet the period under review extends to and includes 2005. Consequently, the SCF does not allow for an assessment of any appreciable changes based on the recent growth of FCU community charters, as the majority of the conversions to a community charter have occurred since 2000, and 192 have occurred since 2003. Since the growth of community charters is discussed at length, it should also be fully explained that, relative to the overall issue of reaching out to low- and moderate-income individuals, the impact of this growth cannot be expected to be represented in the SCF data. Because the SCF does not overlap the time period of the review, its relevance is further diminished.
4. The intent of NCUA's regulations pertaining to community charters is not accurately described. On page 1 the Report states: "As a result of recent legal and regulatory developments, field of membership requirements for credit unions have been relaxed – member groups now can include anyone who lives, works, worships, or attends school in areas as large as whole counties or major metropolitan areas." This statement suggests that the affinity requirement (lives, works, worships…) of NCUA's field of membership rules and the geographic limits on community charters are recent developments. That suggestion is not accurate. Both of these NCUA regulatory policies predate CUMAA. NCUA did grant community charters prior to CUMAA that encompassed whole counties and metropolitan areas. It is true that the documentation requirements for single political jurisdictions were reduced through regulatory amendments that post-dated CUMAA, but that change was based on NCUA's experience in chartering communities constituting a single political jurisdiction.

5. The size and extent of the community charters approved by NCUA are not appropriately represented. The Report's use of the approval of Los Angeles County, on page 13, as an example of a community charter conversion misrepresents the size of the community charter conversions commonly authorized. The data provided to GAO reflects that the average population size for those community charter conversions approved during the period from 2000 to 2005 was 304,886, and the median size was 125,000.

6. The Report states in the Highlights, as well as on page 6 and elsewhere, that NCUA's change in chartering policy is "triggered partly by concerns about competing with states with more expansive credit union chartering rules. . ." It is inaccurate to indicate that FOM parity with state-chartered credit unions is a primary objective when revising FOM policies for FCUs. Although this issue has surfaced during the regulatory comment period on proposed policy changes, it has not been a factor in NCUA's policy making.

C. The size and market share comparison of credit unions and banks

Although the Report attempts to compare credit unions to banks, it does not provide a framework for an objective analysis, which, in addition to the membership limitations discussed above, should reflect the relative industry position of the two types of financial institutions. As with all institutions in the financial industry, FCUs have evolved to ensure their continued viability. Since 1934, dramatic changes in the overall economic environment in which FCUs must operate have occurred. These changes have required that FCUs adapt in order to meet the financial needs and expectations of their members. Specifically, in the last forty years, changing demographics in the United States were characterized both by the loss of numerous well-paying blue collar jobs in the manufacturing sector and an increasing disparity in the income range between persons in the working class and the upper class. Operational evolution can be seen at several levels, including the offering of a wider range of services to a more broadly defined FOM. Fundamentally, however, even though some FOMs are broader today, FCUs have adhered to and preserved the integrity of both the common bond and their cooperative structure, which is reflected in regulatory policies. In addition, the types of services FCUs now increasingly offer have changed.
As with the common bond, FCUs have found it necessary to adapt in order to meet member expectations and demand for products and services. On page 1 the Report states "credit unions are now allowed to offer many products and services similar to those provided by banks, such as real estate and business loans." Such a conclusion, however, fails to adequately assess the changing economic environment. Further, this statement misrepresents the services credit unions have historically provided. FCUs, for example, have been offering member business loans since their inception, often providing loans to entrepreneurs initiating a small business. As to the issue of mortgage lending, the FCU Act first authorized mortgage lending for FCUs in 1978. State-chartered credit unions in several states, most notably in the New England area, have provided this type of lending since the 1950s.

In regard to rate comparisons, the Report recognizes the rate differences between banks and credit unions on savings and lending products. However, it should further recognize the interest rate environment during the period of the GAO review, when interest rates were at historic lows. An assessment of the interest rate environment alone may have explained the reason for the decreasing gap in the rate paid on savings. This analysis is also crucial in assessing the mortgage rates since these loans of long-term maturity significantly affect the asset/liability management and ultimately the safety and soundness of a financial institution.

Additionally, as shown in Table 3, credit unions are an important, but relatively small, segment of the financial industry. This size disparity draws into question the appropriateness of the comparison and conclusions in the Report. [Table 3, which compared credit unions insured by the NCUSIF (federally-insured credit unions) with FDIC-insured institutions, is not reproduced here. Sources: FDIC Statistics on Banking: A Statistical Profile of the United States Banking Industry, published by the FDIC Division of Insurance and Research, for 2003, 2004, and 2005; Yearend Statistics for FICUs, published by the National Credit Union Administration, for 2003, 2004, and 2005.] NCUA also has concerns relating to the asset groups used in the Report for the comparison between banks and credit unions. The smallest group size used for comparative purposes in the Report is $100 million or less in assets. It is not disclosed, however, that approximately 88 percent of FCUs fall into that category, with 80 percent having assets less than $50 million as of September 30, 2005. It should also be noted that the average asset size of FCUs is $73.2 million with the median asset size just $11 million.

5. NCUA's efforts to target credit union services to low- and moderate-income individuals

One of GAO's stated objectives was to review NCUA's efforts to expand credit union services to individuals of low- and moderate-income. The Report correctly focuses on two principal programs in this context: (1) NCUA's Low-Income Credit Union (LICU) program; and (2) NCUA's strategic efforts to encourage FCUs to expand services into specifically designated underserved areas. It also correctly notes that NCUA's support for these programs has resulted in increased participation in both programs by FCUs in recent years. In providing for credit unions designated to serve low-income memberships, Congress recognized that such a credit union's membership base must necessarily be different, and broader. See 115 Cong. Rec. S13997 (May 27, 1969) (statement of Sen. Scott).
Although Congress recognized the difference, it did not believe an amendment to the overall statutory purpose for FCUs, which at that time was service to persons of "small means," was required. Instead, Congress implicitly endorsed FCU service to the traditional membership base and specifically directed that NCUA should supply its own definition of low income for purposes of implementing the provisions of the new law. By regulation, NCUA did so, specifying that the term low income means individuals who make less than either 80 percent of the average for all wage earners, as established by the Bureau of Labor Statistics, or whose household income is at or below 80 percent of the national median household income as established by the Census Bureau. See 12 C.F.R. § 701.34(a)(2). (As originally implemented, NCUA's rule used 70 percent of median as the relevant percentage indicator of "low income"; the rule was changed to its current usage of 80 percent in 1993.) To qualify for low-income designation, a credit union must have more than 50 percent of its membership consisting of individuals defined as low income. This was a specific initiative by NCUA to recognize credit unions that predominantly served a low-income population but were challenged in providing additional services and/or programs to their members. This initiative opened opportunities for these credit unions to obtain additional capital from philanthropic organizations and assistance from the Department of the Treasury's Community Development Financial Institution Fund (CDFI), the NCUA's Community Development Revolving Loan Fund (CDRLF), and other organizations to enhance and expand services to the low-income population. Page 20 of the Report accurately describes the other unique characteristics of LICUs and correctly notes LICUs grew in number between 2000 and 2005, from 632 to 1,032, a 63 percent increase. This result was achieved with NCUA's vigorous encouragement and evidences dramatic success in NCUA's effort to increase service to low-income members. Although NCUA has not collected income and service usage data, the descriptive analyses conducted by NCUA on the data collected in MSAP reflect that LICUs and FCUs with underserved areas are serving a relatively greater proportion of low- and moderate-income individuals than the FCU system as a whole.
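The designation test described above reduces to simple threshold logic. The following minimal Python sketch applies it; the helper names, sample members, and benchmark figures are hypothetical, while the 80 percent and greater-than-50-percent thresholds are those stated in the rule.

def member_is_low_income(wage, household_income, bls_average_wage,
                         national_median_household_income):
    # Under the rule described above, a member is low income if the member
    # makes less than 80 percent of the BLS average for all wage earners,
    # OR the member's household income is at or below 80 percent of the
    # national median household income.
    return (wage < 0.8 * bls_average_wage or
            household_income <= 0.8 * national_median_household_income)

def qualifies_for_low_income_designation(members, bls_average_wage,
                                         national_median_household_income):
    # Designation requires MORE than 50 percent of the membership to be
    # low income; 'members' is a list of (wage, household_income) pairs.
    low_income_count = sum(
        member_is_low_income(wage, household, bls_average_wage,
                             national_median_household_income)
        for wage, household in members)
    return low_income_count > len(members) / 2

# Hypothetical example with assumed benchmarks ($40,000 average wage,
# $50,000 national median household income): three of four members are
# low income, so this credit union would qualify for the designation.
members = [(25000, 30000), (31000, 38000), (50000, 39000), (60000, 90000)]
print(qualifies_for_low_income_designation(members, 40000, 50000))  # True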
Any contention that single common bond and community-chartered FCUs may not adopt underserved areas is contrary to congressional intent and inhibits the ability of both types of FCUs to increase service to low- and moderate-income individuals who are outside the credit union's FOM. In addition, the Report on page 22 uses Washington, D.C. as an example of an underserved area approved by NCUA without regard to location. This presentation is misleading. It is not explained that once an FCU identifies an area meeting the underserved requirement, as defined in Section 103(16) of the Community Development Banking and Financial Institutions Act of 1994 (Pub. L. 103-325, 108 Stat. 2163 (Sept. 23, 1994), codified at 12 U.S.C. §§ 4701 et seq.), it must apply to NCUA to add the area to its field of membership. A detailed marketing plan, emphasizing how the FCU plans to reach out and serve all individuals in the underserved area, must be submitted. A detailed business plan must also be submitted indicating how the FCU will meet the needs of the individuals in the underserved area by describing the products (e.g., free checking, micro-credit loans) and services (e.g., bilingual staff, financial education seminars) the credit union offers or is planning to offer. Once approved to serve a specific underserved area, the credit union must maintain or open an office or service facility in the underserved area within two years.

Other outreach initiatives by NCUA to increase service to underserved individuals have not been sufficiently acknowledged or described. NCUA has initiated several programs focused on assisting LICUs and on providing all credit unions with best practices to consider when converting to community charters or adding underserved areas. Since 1987, NCUA has administered the CDRLF, which was established by Congress to provide technical assistance grants and low-cost loans for any LICU interested in enhancing service to its membership. Under NCUA's auspices, the CDRLF has granted 273 loans totaling $40.5 million, and 1,923 grants totaling $5.8 million. In addition to the CDRLF, the Access Across America initiative, announced in February of 2002, incorporated NCUA's activities for small and low-income designated credit unions, as well as those FCUs adopting underserved areas. The program was designed to partner with federal government agencies and other organizations to identify and facilitate use of resources available for credit unions to assist in their efforts to serve low- and moderate-income individuals. Workshops continue to provide partnering opportunities with federal government agencies, as well as non-profit and private organizations. This initiative has resulted in NCUA entering into Memoranda of Agreement with the Internal Revenue Service, Operation Hope, and the Department of Agriculture, each of which committed to provide assistance in sharing opportunities with participating credit unions. Moreover, NCUA maintains a working relationship with the Department of Health and Human Services, CDFI, and Fannie Mae to provide opportunities for credit unions to expand the products and services particularly useful to those members with low- and moderate-incomes.

As an adjunct to the Access Across America initiative, the Partnering and Leadership Successes program was introduced in 2003 to provide best practices in serving members and marketing to potential members in all credit unions, especially in underserved areas and communities. The agency coordinates widely attended workshops where a mix of credit unions present programs focused on serving low- and moderate-income individuals. A few of these programs include partnering opportunities with the Neighborhood Reinvestment Corporation, Latino outreach, and micro-business lending opportunities with the Small Business Administration. In conjunction with these workshops, numerous Letters to Credit Unions have been published that augment the workshops, providing information to the credit union system about opportunities available to enhance service and marketing to individuals in underserved areas. Two early examples of these letters include the February 2002 Letter to Federal Credit Unions, Letter No. 02-FCU-02, titled Partnership Opportunities with IRS, which introduced the credit union system to the Volunteer Income Tax Assistance program, and the September 2001 Letter to Federal Credit Unions, Letter No. 01-FCU-06, titled Financial Education Curriculum, which announced FDIC's new Money Smart Financial Education Curriculum. (These and other letters are available on NCUA's Web site, http://www.ncua.gov, under Letters to Credit Unions, 2001 to 2005.)
The overall objective of NCUA's initiatives is to provide increased opportunities for FCUs to diversify their membership profile and to assist small and low-income designated credit unions as they manage their operations in compliance with the increasing number of complex laws and regulations. If successful, the viability of some low-income designated FCUs will be preserved, thus further enhancing the opportunity for low- and moderate-income individuals in their FOM to join and participate in the financial services offered by small and low-income designated FCUs. Each of these initiatives was in direct response to CUMAA. But these types of initiatives have long been a part of NCUA's, or its predecessor agency's, regulatory fabric. There have been others, such as the 1960s era initiative, undertaken jointly with the Office of Economic Opportunity, to establish FCUs to serve low-income communities, the drive to increase the number of LICUs, and the regulatory encouragement to add underserved areas. More recently, in 1993, NCUA created the Office of Community Development Credit Unions, which is dedicated to ensuring the long-term viability of small and low-income designated credit unions. Today this activity is handled by the Office of Small Credit Union Initiatives (OSCUI), which has expanded considerably in terms of staff, resources, and programs. From 2006 to date, OSCUI has held fifteen national workshops covering subjects such as establishing financial literacy programs, disaster recovery planning, and compliance with the Bank Secrecy Act. In addition to the national workshops, OSCUI coordinates with NCUA's regional offices to conduct smaller roundtable training sessions focused on the needs of small and low-income designated credit union officials.

6. Transparency of Executive Compensation

NCUA agrees with the conclusion that credit union executive compensation is not readily transparent. Absent compensation information captured by IRS Form 990, it can be difficult for FCU members to ascertain the exact compensation and benefits received by their executives. In the past, NCUA, while not objecting to disclosure of this information, has deferred to applicable state law on whether compensation and benefit information should be disclosed. As the Report points out, NCUA staff have identified more efficient methods to capture and disseminate executive compensation information in lieu of filing Form 990. Such methods include: (1) amending NCUA's regulations to require FCUs to include executive compensation information in their annual reports; (2) requiring the reporting of such information in NCUA's quarterly call reports; or (3) amending the standard FCU Bylaws to require disclosure of compensation information during an FCU's annual membership meeting. These and other methods may be considered by the NCUA Board in evaluating the transparency of executive compensation.

While NCUA agrees FCU executive compensation is not readily transparent, several matters in the Report warrant clarification. They include:

1. Despite the absence of a standardized reporting mechanism, NCUA does not ignore the issue of executive compensation. Contrary to the implication on page 45, NCUA does assess executive compensation during the examination process, primarily to determine its reasonableness as it relates to safety and soundness. There has never been a system-wide issue relating to executive compensation.
As such, NCUA has not considered it necessary to collect or aggregate executive compensation data.

2. On page 42, it is implied that MSAP is deficient because it does not collect executive compensation information for banks, thereby preventing a direct comparison between FCUs and banks. However, it is not within NCUA's authority to collect data from banks or thrifts. Additionally, since this is not a safety and soundness issue for the credit union system, NCUA's authority to collect executive compensation extends only to FCUs.

3. Comparing executive compensation of FCUs and banks was not a stated objective for GAO's study. Attempting to make a direct comparison is not only irrelevant to the issue of transparency, but is impossible given the differences in the forms of compensation available to FCU versus bank executives. For example, as the Report notes, stock options and stock bonuses are routinely paid to bank executives, but are unavailable to credit union executives. Nevertheless, the discussion of this matter seems to imply that somehow credit union executive compensation may be askew. Only by delving into the data provided in Appendix IV of the Report is it clear that credit union executives on average make significantly less than their banking counterparts.

4. Since the Report addressed comparisons between senior officers of credit unions and banks, it should have also included a more detailed comparison between directors of credit unions and banks. It neither discusses nor includes any data regarding the compensation paid to directors of banks, which in some instances can be rather lucrative. At least some discussion would have been appropriate, especially since FCU boards are comprised of volunteers. (Pursuant to the FCU Act, no member of an FCU board may be compensated; however, an FCU may compensate one individual who serves as an officer of the board, for example, when the credit union's paid CEO is also a member of the board. See 12 U.S.C. §§ 1761(c) and 1761a.) Including such data and discussion would have made for a more thorough and accurate comparison of executive compensation.

5. The Report states on page 48 that MSAP will not stratify executive compensation by asset size of credit unions. This is not accurate. MSAP compensation data can be stratified into two statistically valid subsets based on asset size of the credit unions surveyed. In addition, limited descriptive conclusions can be derived from the data about other asset subgroups.

As referenced in MSAP and this response, NCUA recognizes the difficulty in addressing the issues of membership profiles and the transparency of executive compensation in the absence of comprehensive data. NCUA also understands that the time allotted for completion of the Report did not allow for consideration of the MSAP data and similar data being compiled by NASCUS. Although the Report includes significant new detail and qualifies its reliance on the SCF, NCUA anticipates the general conclusions reached will be reported without the appropriate qualifiers. In order to assure a complete and thorough understanding of the FCU system, NCUA suggests that GAO include in its Report the information and data contained in MSAP. It is also suggested that the completeness of the Report would be further enhanced by inclusion of the data now being collected by NASCUS, thus allowing for a thorough assessment of the entire credit union system.
The following are GAO's comments on the National Credit Union Administration's letter dated November 14, 2006.

1. As noted in NCUA's letter, we did not receive the results of its pilot survey on the membership profile of federal credit unions (Member Service Assessment Pilot Program) in time to include it as part of our study. The report can be found at NCUA's website, www.ncua.gov.

2. NCUA questioned GAO's use of low- and moderate-income as a proxy for the term modest means. As we note in our 2003 and current report, neither the legislative history of the Federal Credit Union Act, as amended, nor NCUA have established definitions as to what constitutes modest means. As a result, we used the low- and moderate-income categories that we defined in our 2003 report, which are based on what the other federal financial regulators use for Community Reinvestment Act purposes, as a proxy for modest means. Moreover, both citations identified by NCUA in the House and Senate reports for the bill that ultimately was enacted as CUMAA specifically identify low- and moderate-income as components of what is referred to as modest means. We agree that the term modest means also indicates a meaning broader than individuals with low- and moderate-income. Further, our analysis included comparisons between credit unions and banks of households with middle- or upper-incomes. This analysis showed that between 2001 and 2004 credit unions continued to serve a higher proportion of middle- and upper-income households and a smaller proportion of low- and moderate-income households than did banks.

3. NCUA stated that the text in footnote 27 of the draft report did not accurately reflect the income categories that the federal financial regulators established for CRA examinations. The text in question has been moved up into the body of the report and modified to more clearly state that our categories were based on, but not identical to, those used by the other federal financial regulators for CRA purposes. The primary difference between our income categories and those used for CRA purposes was the use of national median income rather than local metropolitan statistical area median income as a benchmark for the various income categories. We use the national measure since the SCF is a national survey. Further, we agree with NCUA's assertion that occupational and associational based credit unions have restricted membership bases, which limit their ability to serve all income categories. However, as we note in the report, although the number of credit unions with single or multiple common bonds has been decreasing since 2000 and the number of credit unions with more inclusive community charters has been increasing, 2001 and 2004 SCF data indicated that credit unions continue to serve a higher proportion of middle- and upper-income households than banks.

4. NCUA questioned our use of SCF data as the primary source for conclusions reached in the report regarding the income characteristics of credit union members. We believe that the report clearly outlines the limitations of SCF data in conducting the analysis, but as we noted in our prior report, the SCF is the only source of comprehensive data to conduct such an analysis. We agree that there are other ways of analyzing and presenting these data. However, we believe that figure 2 in our report provides a valid comparison of bank and credit union customers in the SCF data.
In addition, it uses the methodology of our 2003 report, which allows us to directly compare the results of our 2003 report with our current report. We focus on the median income, as we did in our prior report, since this measure is less susceptible to the influence of extreme values than the mean. As noted in the report, we performed an additional analysis using the median family income to provide additional context to our analysis within the same methodological framework.

5. NCUA suggested that our report does not provide a framework for understanding the effect of statutory limitations on federal credit unions when comparing the income distribution of federal credit union members and bank customers. We explicitly acknowledged the importance of these limitations in our 2003 report and have added some additional text to reflect these limitations in our current report. Nevertheless, we believe that our analysis of SCF data on the income levels of credit union members versus bank customers provides important contextual information on the extent, if any, that credit union members are different from individuals that use banks. The lack of data on the income distribution of credit union members by charter type was one of the primary factors behind our recommendation that NCUA expand its pilot survey to allow the agency to systematically obtain and monitor credit union member income data by charter type.

6. NCUA stated that the report does not thoroughly address the proportion of federal credit unions that are community chartered. We believe our report addresses this issue correctly, as originally presented. Both in table 1 of our report and the related text, we note that despite the growth in community charters, multiple-bond credit unions remain the largest group of federally chartered credit unions in number, total membership, and assets. However, as we noted in our report, it is important to emphasize that community-chartered credit unions overtook multiple-bond credit unions as the largest of the three federal charter types, in terms of average membership and average assets, beginning in 2003.

7. NCUA stated that the report does not thoroughly address the agency's position on the need for charter conversions to ensure continued viability. We believe our report addresses this issue correctly, as originally presented. As noted in our report, we attributed to NCUA some of the causes for growth in the community charter, including the agency's belief that community charter expansion allows federal credit unions to attract a more diverse membership base that can enhance a credit union's economic viability or safety and soundness as well as provide greater opportunities to serve members of modest means. We further note in our report that NCUA explained that single- and multiple-bond credit unions often tend to be organized around employer or occupationally based associations, which in turn creates greater economic risk exposure since the membership base is intertwined with the economic cycles of a particular employer or occupation. Finally, we cite a Federal Reserve Bank of Atlanta research paper, which concluded that there are material benefits of credit union membership diversification and that these benefits derive from expanded investment opportunities and reduced concentration risk.

8. NCUA stated that the time necessary to successfully implement a different business model when converting to a community charter is not adequately addressed.
We believe our report addresses this issue correctly, as originally presented. Specifically, the report cites NCUA's belief that it would take time for any results to appear in the SCF data, as credit unions seeking to expand into new areas and reaching new types of customers would face a learning curve in their efforts. Our report further notes that the latest available data from the SCF are 2 years old, so any more recent changes would not be reflected in our analysis.

9. NCUA stated that the intent of NCUA's regulations pertaining to community charters was not accurately described. Specifically, NCUA stated that introductory text in the draft report suggested that the affinity requirements of NCUA's field of membership rules and the geographic limits on community charters are recent developments. NCUA noted that both of these regulatory policies predated CUMAA. We have clarified the text of our report to reduce the potential for confusion by stating that since the passage of CUMAA, NCUA has approved progressively larger geographic-based fields of membership.

10. Text has been added to reflect the average and median population size of community charter conversions approved from 2000 to 2005.

11. NCUA stated that we inaccurately described its change in chartering policy as being triggered partly by concerns about competing with states having more expansive credit union chartering rules. As we reported in 2003, NCUA stated to us at that time that a major reason for its regulatory changes was to maintain the competitiveness of the federal charter in a dual (federal and state) chartering system. In subsequent discussions, NCUA officials indicated that it would be more accurate to attribute changes in chartering policy to factors such as the continued viability of federal credit unions in changing economic and financial industry developments. We have modified the text of our report to reflect the influence of these factors.

12. Text has been added to reflect that credit unions historically have had the ability to offer real estate and business loans.

13. Text has been added to the report to recognize that interest rates during the period of our credit union and bank rate analysis were at historic lows.

14. Text has been added to the background section of the report, based on the information provided by NCUA in its comment letter, regarding the relative size of the credit union industry in comparison with other federally insured depository institutions and the relatively small size of most federally chartered credit unions. However, it is important to note that the disparity in size between the credit union and banking industries does not affect our rate analysis methodology or our conclusions since that analysis is broken out by asset groupings, starting with institutions with assets of $100 million or less.

15. NCUA stated that it was inaccurate and inappropriate to use its Low-Income Credit Union program and underserved area expansion program to define and assess service to people of modest means. As noted previously, we used low- and moderate-income as a proxy for modest means due to a lack of a legislative or regulatory definition or other criteria.
Moreover, we note that NCUA's regulations for its underserved program include criteria (an area in a metropolitan area where the median family income is at or below 80 percent of the metropolitan area median family income or the national metropolitan area median family income) that are roughly similar to those used to define low- and moderate-income for CRA purposes (less than 80 percent of the median family income for the Metropolitan Statistical Area).

16. We clarified in the report that both single-bond and community credit unions are currently not permitted to include underserved areas in their fields of membership. As noted in the report, the American Bankers Association contended that the Federal Credit Union Act allows multiple-bond credit unions, but does not specifically permit single-bond or community credit unions, to add underserved areas to their fields of membership.

17. We added additional information in the report on NCUA's criteria for federal credit unions applying to include underserved areas in the credit union's field of membership. However, we disagree with NCUA's assertion that the example we provided in our report is misleading.

18. We clarified in the report that NCUA examiners assess executive compensation during the examination process primarily to determine its reasonableness as it relates to safety and soundness, but that since it has not found a systemwide issue with executive compensation, NCUA has not considered it necessary to collect or aggregate executive compensation data.

19. NCUA noted that our characterization of NCUA's Member Service Assessment Pilot implies that the pilot is deficient because it does not collect executive compensation information for banks, thereby preventing a direct comparison between federal credit unions and banks. It also noted that it is not within NCUA's authority to collect data from banks or thrifts and that its authority to collect executive compensation data extends only to federal credit unions in the context of credit union safety and soundness issues. We do not intend to imply that collecting compensation data from banks is the responsibility of NCUA but point out the lack of available data that would allow a direct comparison of credit union and bank executive compensation.

20. NCUA indicated that comparing executive compensation of federal credit unions and banks was not a stated objective for our study and that attempting to make a direct comparison is impossible, given the differences in the forms of compensation available to federal credit union versus bank executives. We acknowledge that comparing executive compensation of federal credit unions and banks was not a stated objective for this study. Our report text merely points out that due to the lack of consistent, available, and transparent compensation data for credit unions, any overall comparison is difficult. For this reason, we did not provide bank executive compensation data in the main body of the report or make any direct comparisons between credit union and bank executive compensation. However, we believe that inclusion of bank executive compensation data in the appendix provides a useful benchmark on selected executive positions.

21. NCUA noted that the report neither discusses nor includes any data regarding the compensation paid to directors of banks and that including such data and discussion would make for a more thorough and accurate comparison of executive compensation.
We acknowledge this point and added some additional discussion on bank director compensation for context.

22. Our original characterization of NCUA's Member Service Assessment Pilot was based on a discussion with NCUA officials. We have revised the text of the report to reflect that the compensation data that NCUA obtained can be stratified into two statistically valid subsets based on the asset size of the credit unions surveyed.

In addition to the above contact, Harry Medina, Assistant Director; Janet Fong; May Lee; John Lord; Donald Marples; Edward Nannenhorn; Jasminee Persaud; Carl Ramirez; Barbara Roesmann; Paul Thompson; and Richard Vagnoni made key contributions to this report. | Legislative and regulatory changes have blurred distinctions between credit unions and other depository institutions and raised questions about the tax-exempt status of credit unions. This report (1) assesses the effect of the Credit Union Membership Access Act on credit union membership and charters, (2) reviews the National Credit Union Administration's (NCUA) efforts to expand services to low- and moderate-income individuals, (3) compares rates offered by credit unions with comparably sized banks, (4) discusses unrelated business income tax issues, and (5) assesses transparency of credit union senior executive compensation. To address our objectives, we obtained NCUA data on credit union membership, charter changes, efforts to target those of modest means, and executive disclosure requirements. We also analyzed data from the Federal Reserve Board's Survey of Consumer Finances and the Internal Revenue Service. Since the passage of the Credit Union Membership Access Act (CUMAA) in 1998, larger community-based credit unions have constituted a much greater proportion of the industry. NCUA has approved federal community charters with increasingly larger geographic areas and potential for economically diverse membership. Much of the shift toward the larger community-based credit unions was due to conversions from other charters. NCUA's approval of these charters appears to have been triggered by changes in the economic environment and financial services industry and to diversify membership to accomplish goals such as increasing service to those of modest means. NCUA has established the low-income credit union program and allowed adoption of "underserved areas" to increase credit union services to individuals of modest means. Despite increased credit union participation in these programs and the expansion of community charters, the 2001 and 2004 Surveys of Consumer Finances indicated that credit unions lagged behind banks in serving low- and moderate-income households. NCUA officials told GAO that, given the nascent nature of its two initiatives and the relatively recent shift to community charters, they did not yet expect observable changes in the data. Also, NCUA recently has undertaken a pilot effort to collect data on the income characteristics of credit union members. Because limited data exist on the extent to which credit unions serve those of modest means, any assessment would be enhanced if NCUA were to move beyond its pilot and systematically collect income data. Based on GAO analysis, credit unions typically had more favorable rates than banks, particularly for consumer loans. For example, rates on credit union auto loans were, on average, 1 to 2 percentage points lower than those at similarly sized banks.
However, it was not clear to what extent the more favorable rates fully reflected the tax subsidy that credit unions receive through their tax exemption. The Internal Revenue Service (IRS) has been reviewing state-chartered credit union activities (federal credit unions are exempt) to determine compliance with unrelated business income tax (UBIT) requirements, but such determinations are difficult due to complicated criteria and because many credit unions file group rather than individual returns. IRS stated that it plans to issue technical guidance in the first quarter of 2007 that the agency believes will help ensure credit union compliance with UBIT. Finally, credit union executive compensation is not transparent. Federal credit unions, unlike other tax-exempt organizations, do not file information returns, which contain data on executive compensation, with IRS. NCUA is collecting compensation data as part of its pilot, but it is unclear whether NCUA will conduct future reviews. NCUA officials noted a number of alternatives that could be used to increase transparency, such as requiring federal credit unions to provide compensation information in call reports or requiring that credit unions disclose compensation data at annual meetings.
FCC was established by the Communications Act of 1934 (Communications Act). The Communications Act, as amended, specifies that FCC was established for "the purpose of regulating interstate and foreign commerce in communications by wire and radio so as to make available, so far as possible, to all the people of the United States . . . a rapid, efficient, Nation-wide, and world-wide wire and radio communication service with adequate facilities at reasonable charges, for the purpose of the national defense, for the purpose of promoting safety of life and property through the use of wire and radio communication." FCC is responsible for, among other things, making available rapid, efficient, nationwide, and worldwide wire and radio communication services at reasonable charges and on a nondiscriminatory basis, and more recently, promoting competition and reducing regulation of the telecommunications industry in order to secure lower prices and high-quality services for consumers. FCC established six strategic goals to support its mission:

1. Promote access to robust and reliable broadband products and services for all Americans.
2. Promote a competitive framework for communications services that support the nation's economy.
3. Facilitate efficient and effective use of nonfederal spectrum to promote growth and rapid development of innovative and efficient communications technologies and services.
4. Develop media regulations that promote competition, diversity, and localism and facilitate the transition to digital modes of delivery.
5. Promote access to effective communications during emergencies and crises and strengthen measures for protecting the nation's critical communications infrastructure.
6. Strive to be a highly productive, adaptive, and innovative organization that maximizes the benefit to stakeholders, staff, and management from effective systems, processes, resources, and organizational culture.

FCC's basic structure is prescribed by statute. FCC is composed of five commissioners, appointed by the President and approved by the Senate to serve 5-year terms; the President designates one member to serve as chairman. No more than three commissioners may come from any one political party. The commission has flexibility in how it creates and organizes divisions or bureaus responsible for specific work assigned. Specifically, the Communications Act, as amended, requires the commission to organize its staff into (1) integrated bureaus, to function on the basis of the commission's principal workload operations, and (2) such other divisional organizations as the commission deems necessary. FCC currently consists of seven bureaus that are responsible for a variety of issues that affect consumers and the telecommunications industry, including analyzing complaints, licensing, and spectrum auctions, and 10 offices that provide support services for the bureaus and commission. Appendix II has a detailed description of each bureau and office. Each bureau is required by statute to include the legal, engineering, accounting, administrative, clerical, and other personnel that the commission determines necessary to perform its functions. FCC has identified attorneys, engineers, and economists as the agency's main professional categories.
Although FCC has staff offices with concentrations of each profession (attorneys in the Office of General Counsel, engineers in the Office of Engineering and Technology, and economists in the Office of Strategic Planning and Policy Analysis), these professions are also integrated into the bureaus. Under the Communications Act, as amended, FCC has broad authority to execute its functions. The act, as amended, is divided into titles and sections that describe various powers and concerns of the commission, with different titles describing the laws applicable to different services. For example, there are separate titles outlining the specific provisions for telecommunications services and for cable services. This statutory structure created distinct regulatory “silos” that equated specific services with specific network technologies. However, technological advances in communications infrastructure have led to a convergence of previously separate networks used to transmit voice, data, and video communications. For example, telephone, cable, and wireless companies are increasingly offering voice, data, and video services over a single platform. FCC is charged with carrying out various activities, including issuing licenses for broadcast television and radio; overseeing licensing, enforcement, and regulatory functions of cellular phones and other personal communication services; regulating the use of radio spectrum and conducting auctions of licenses for spectrum; investigating complaints and taking enforcement actions if it finds that there have been violations of the various communications laws and commission rules that are designed to protect consumers; addressing public safety, homeland security, emergency management, and preparedness; educating and informing consumers about communications goods and services; and reviewing mergers of companies holding FCC-issued licenses. The Telecommunications Act also expanded FCC’s responsibilities for universal service beyond the traditional provision of affordable, nationwide access to basic telephone service to include eligible schools, libraries, and rural health care providers. Two major laws that affect FCC’s decision-making process are the Government in the Sunshine Act of 1976 (Sunshine Act) and the Administrative Procedure Act of 1946. Government in the Sunshine Act of 1976: The Sunshine Act applies to agencies headed by collegial bodies. Under the Sunshine Act, FCC is required to provide sufficient public notice that the meeting of commissioners will take place. The agency generally must also release the meeting’s agenda, known as the Sunshine Agenda, no later than 1 week before the meeting. In addition, the Sunshine Act prohibits more than two of the five FCC commissioners from deliberating with one another to conduct agency business outside the context of the public meeting. Administrative Procedure Act of 1946: The Administrative Procedure Act (APA) is the principal law governing how agencies make rules. The law prescribes uniform standards for rulemaking, requires agencies to inform the public about their rules and proposed changes, and provides opportunities for public participation in the rulemaking process. Most federal rules are promulgated using the APA-established informal rulemaking process, which requires agencies to provide public notice of proposed rule changes, as well as provide a period for interested parties to comment on the notices—hence the “notice and comment” label. 
The notice and comment procedures of the APA are intended to encourage public participation in the administrative process, to help educate the agency, and thus, to produce more informed agency decision making. Experts have noted that public participation promotes legitimacy by creating a sense of fairness in rulemaking, and transparency helps both the public and other branches of government to assess whether agency decisions are in fact being made on the grounds asserted for them and not on other, potentially improper, grounds. The APA does not generally address time frames for informal rulemaking actions, limits on contacts between agency officials and stakeholders, or requirements for "closing" dockets. FCC implements its policy initiatives through a process known as rulemaking, which is the governmentwide process for creating rules or regulations that implement, interpret, or prescribe law or policy. When developing, modifying, or deleting a rule, FCC relies on public input provided during the rulemaking process. Before beginning the rulemaking process, FCC may issue an optional Notice of Inquiry (NOI) to gather facts and information on a particular subject or issue to determine if further action by the FCC is warranted. Typically, an NOI asks questions about a given topic and seeks comments from stakeholders on that topic. If FCC issues an NOI, it must issue a Notice of Proposed Rulemaking (NPRM) before taking final action on a rule, unless an exception to notice and comment rulemaking requirements applies. FCC issues NPRMs to propose new rules or to change existing rules, and the issuance of an NPRM signals the beginning of the rulemaking process. The NPRM provides an opportunity for stakeholders to submit their comments on the proposal and to reply to the comments submitted by other stakeholders. A summary of the NPRM is published in the Federal Register and announces the deadlines for filing public comments and reply comments. The NPRM also indicates the rules for ex parte communications between agency decision makers and other persons during the proceeding. An ex parte presentation is one that discusses the merits or outcome of a proceeding and that, if written, is not served on all the parties to the proceeding or, if oral, is made without advance notice to the parties or an opportunity for them to be present. FCC generally classifies its rulemaking proceedings as "permit-but-disclose" proceedings, in which ex parte presentations to FCC officials are permissible but subject to certain disclosure requirements. Generally, the external party must provide two copies of written presentations to be filed in the public record. If an external party makes an oral ex parte presentation that presents data or arguments not already reflected in the party's written comments or other filings in that proceeding, then the external party must provide FCC's Secretary with an original and one copy of a summary of the new data or arguments to be filed in the public record. Once FCC places an item on the Sunshine Agenda, which lists the items up for a vote at the next open commission meeting, ex parte contacts are restricted, with several exemptions. In addition, FCC provides stakeholders the ability to submit electronic comments via the FCC Web site. After reviewing the comments received in response to an NPRM, the FCC may issue a Further Notice of Proposed Rulemaking (FNPRM) seeking additional public comment on specific issues in the proceeding.
Following the close of the reply and comment period, FCC officials may continue discussing the issue with external parties through ex parte presentations. Staff in the bureaus assigned to work on the order then analyze the public record and the information provided in ex parte contacts to develop a proposed action for the commission to vote on, such as adopting final rules, amending existing rules, or stating that there will be no changes. The chairman decides when the commission will vote on final rules and whether the vote will occur during a public meeting or by circulation, which involves electronically circulating written items to each of the commissioners for approval. See figure 1 for an illustration of the steps in FCC's rulemaking process.
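The sequence just described can be sketched, in simplified form, as a set of allowed transitions between stages. The following Python sketch is a conceptual illustration only, not an FCC system, and it omits many procedural details (for example, exceptions to notice and comment requirements and the Sunshine Agenda restrictions on ex parte contacts).

from enum import Enum, auto

class Stage(Enum):
    NOI = auto()      # optional Notice of Inquiry (fact gathering)
    NPRM = auto()     # Notice of Proposed Rulemaking (starts the rulemaking)
    COMMENT = auto()  # public comment and reply period
    FNPRM = auto()    # optional Further Notice of Proposed Rulemaking
    ORDER = auto()    # final action, adopted at a meeting or by circulation

# Allowed next steps after each stage (None marks the start of a proceeding).
ALLOWED_NEXT = {
    None: {Stage.NOI, Stage.NPRM},     # an NOI may, but need not, come first
    Stage.NOI: {Stage.NPRM},           # an NOI alone cannot produce final rules
    Stage.NPRM: {Stage.COMMENT},
    Stage.COMMENT: {Stage.FNPRM, Stage.ORDER},
    Stage.FNPRM: {Stage.COMMENT},      # a further notice reopens comment
}

def follows_process(stages):
    # Return True if the sequence obeys the transitions and ends in an order.
    previous = None
    for stage in stages:
        if stage not in ALLOWED_NEXT.get(previous, set()):
            return False
        previous = stage
    return previous == Stage.ORDER

print(follows_process([Stage.NPRM, Stage.COMMENT, Stage.ORDER]))        # True
print(follows_process([Stage.NOI, Stage.NPRM, Stage.COMMENT,
                       Stage.FNPRM, Stage.COMMENT, Stage.ORDER]))       # True
print(follows_process([Stage.NOI, Stage.ORDER]))                        # False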
For example, broadband services—which became available in the late 1990s—do not fall exclusively within the jurisdiction of a particular FCC bureau or regulatory category. As a result, FCC created broadband regulations in a piecemeal fashion, issuing four separate orders (one for cable modems, one for facilities-based wireline broadband Internet access, one for broadband over power line, and one for wireless broadband Internet access) to regulate competing methods of providing broadband services by the same standard. The Telecommunications Act allows FCC to classify services as telecommunications services or information services, the latter being subject to fewer regulatory restrictions. In 2002, FCC determined that cable modem service should be categorized as an information service. Three years after FCC issued the cable modem order and shortly after the Supreme Court upheld FCC’s regulatory classification for cable modem service, FCC adopted an order that granted providers of facilities-based wireline broadband Internet access the same regulatory classification and treatment as cable modem Internet access providers. In November 2006, FCC issued an order classifying broadband over power line-enabled Internet access service as an information service. In March 2007, FCC issued an order classifying wireless broadband Internet access as an information service. In addition, as companies that once provided a distinct service (such as cable and telephone companies) have shifted to providing bundles of services (voice, video, and data services) over a broadband platform, new debates have arisen regarding how rules previously intended for a specific industry and service (such as universal service, customer retention rules, and video franchising rules) should be applied to companies now providing multiple services. FCC officials told us they are currently looking across the agency to identify challenges that convergence poses to the existing structure and will first focus on how FCC’s systems, such as its data collection efforts, can be modified to address these challenges, but they may consider structural changes later.

According to agency officials, FCC uses informal interbureau collaboration, working groups, and task forces to address convergence and crosscutting issues, but FCC lacks written policies outlining how interbureau coordination and collaboration are to occur. FCC handles convergence by holding interbureau meetings to discuss the progress of items and to address upcoming issues. When a crosscutting item requires the input of multiple bureaus or offices, one is considered the “lead” and is responsible for coordinating with all other bureaus or offices that have a direct concern or interest in the document and ensuring they have the opportunity to review and comment on an agenda item prior to submission to the commission. Generally, if a proceeding (such as a petition or draft order) clearly falls under a specific bureau’s purview, that bureau will serve as the lead on the issue. The lead bureau is determined by each bureau’s management or by precedent, based on which bureau handled a particular issue in the past. For example, the Wireless Telecommunications Bureau would be the lead for items regarding licensed spectrum rules because it has handled these issues in the past. FCC officials told us that on more complex issues, or items that do not have an evident lead bureau, the chairman is ultimately responsible for selecting the lead bureau.
Although FCC relies on this interbureau coordination, it does not provide specific steps or guidance regarding how or when this coordination is to occur, with some limited exceptions. FCC officials confirmed that there are no written policies outlining how the bureaus should coordinate with one another. FCC’s lack of written policies and its reliance on informal interbureau coordination to address issues that extend beyond the purview of a single bureau can result in inefficiencies. For example, one FCC official told us that while FCC was conducting a merger review of two major media companies, the review process was delayed because of confusion regarding which bureau was responsible. Since each of the merging companies had assets regulated by different FCC bureaus, it was unclear which bureau was the designated lead and would be responsible for a specific portion of the merger review process. Although the chairman eventually designated a lead bureau, the time it took for this to happen slowed down the process, and the overall lack of coordination made the process less efficient. Our Internal Control and Management Evaluation Tool emphasizes the importance of internal communications, specifically noting the need for mechanisms that allow for the easy flow of information down, across, and up the organization, including communications between functional activities.

In addition, the absence of written policies allows interbureau collaboration and communication to vary from chairman to chairman. FCC officials noted significant differences between prior chairmen’s emphasis on bureau interaction. For example, former Chairman Kevin Martin required staff to seek approval from management before contacting other bureau and office staff. Current and former FCC officials told us that such policies limited interbureau collaboration and staff-to-staff communication. By contrast, then-Acting Chairman Copps instituted a weekly Chairman’s Office Briefing with bureau and office chiefs, or their designees, and a representative from each of the commissioners’ offices with the stated intent of promoting openness, a practice that continues under Chairman Genachowski. In addition, an FCC official told us that under Chairman Powell, FCC had a memorandum outlining how one bureau was to note its concurrence or disagreement with a draft order prepared by another bureau, but that the practice largely lapsed under Chairman Martin. The lack of written policies also gives the chairman complete discretion when assigning bureau staff to address an item, which has led to instances in which all relevant staff were not included in developing an item. For example, according to FCC officials, the Wireless Telecommunications Bureau was not included in drafting a universal service order that increased the portion of universal service funding provided by wireless customers. FCC officials told us the resulting order did not fairly characterize the wireless industry’s prior efforts, which led the industry to file reconsideration petitions that required additional time to address. Other officials told us that in 2008, FCC received filings in its Wireline Competition Bureau and its Enforcement Bureau regarding allegations that Comcast was discriminating against customers using peer-to-peer sharing protocols to exchange videos.
FCC officials told us that then-Chairman Martin directed the Office of General Counsel to draft a resolution without coordinating or discussing the issue with the other bureaus and that this caused uncertainty in the Enforcement Bureau regarding how to address pending complaints. FCC officials and outside stakeholders stated that communication among bureaus is necessary for addressing convergence and other crosscutting issues under the current bureau structure. Three FCC officials told us that convergence in the telecommunications market requires FCC’s bureaus to actively communicate with one another so they can address issues that span multiple bureaus. One of these officials also noted that convergence makes active communication among bureaus even more important because if communication fails or does not take place, issues might inadvertently go unaddressed before the information is presented to the commissioners and their staff.

FCC’s functional offices, such as the Office of Engineering and Technology (OET) and the Office of Strategic Planning and Policy Analysis (OSP), provide a broader scope than the platform-based bureaus and address some of the issues posed by convergence, but the chairman’s influence can affect FCC’s ability to use these offices to address crosscutting issues. With regard to OET, stakeholders, including commissioners and trade associations, have raised concerns about whether the chairman’s authority over office staff impacts OET’s ability to provide independent expertise. Two commissioners told us that although OET had high-quality staff, they questioned whether the information OET provides is impartial, since all bureau and office chiefs report to the chairman. One of the commissioners emphasized that without reliable, unbiased information, it can be difficult to make good decisions on scientific and technical questions. Additionally, three trade associations expressed concern about the independent nature of OET, with one indicating that there is no way to tell if the information coming from OET is independent of the chairman or the best of several options.

Similarly, the emphasis FCC places on OSP and the work it does varies according to the chairman, and in recent years, OSP’s output has diminished. OSP, working with the Office of Managing Director, is responsible for developing a strategic plan identifying short- and long-term policy objectives for the agency; working with the chairman to implement policy goals; and acting as expert consultants in areas of economic, business, and market analysis, and other subjects that cut across traditional lines, such as the Internet. One former chief economist told us that each chairman has discretion over how he or she will use OSP, and therefore, the role of the office in providing economic analyses will depend on whether the chairman values economic research. Another former chief economist noted that FCC’s emphasis on economic analysis depends on the chairman’s preferences. OSP is responsible for producing publicly available working papers that monitor the state of the communications industry to identify trends, issues, and overall industry health. However, OSP did not release any working papers between September 2003 and February 2008 and has not released any since issuing three in February 2008.
Given OSP’s responsibility for developing a strategic plan that identifies short- and long-term policy objectives for the agency, a lack of research can put FCC at a distinct disadvantage in preparing for the future. To address these issues, some stakeholders we spoke with suggested that adding more resources to OSP or creating a separate economics bureau would allow for more independent and robust economic analysis. One former chief economist told us that although the research function of FCC is under OSP’s purview, OSP does not have the resources needed, and providing additional resources would help the office produce more independent and higher-quality analyses. A former chairman expressed similar concerns about OSP’s resources. Two other former chief economists suggested that if economists were centralized in one group or office, then economic analysis would have greater influence in the decision-making process. Similarly, a researcher found that another independent regulatory agency’s use of an independent and centralized Bureau of Economics leads to routine use of cost-benefit analysis during its investigations and enforcement actions. Finally, a trade association told us that OSP has always been on the periphery of the policy-making process because it lacks the budget and staff levels to complete comprehensive industry analysis, and that OSP needs additional resources to perform more useful policy analysis.

While some stakeholders have suggested consolidating economists in a centralized bureau, others have noted the need to maintain economic expertise within the bureaus. Officials from each bureau we spoke with told us having economists embedded in each bureau was useful because it allows the bureaus to access economic expertise more easily. For example, economists may lead teams on particular issues, review mergers, gather subscriber data, create economic development policies, manage industry reporting, and produce economic reports and information, and a bureau’s ability to function could suffer if the economists were taken out of the bureau. One study that examined organizational structures for attorneys and economists in enforcement agencies found that having economists and attorneys working together in the same division and organized around a particular industry or sector, as they do at FCC, is advantageous for a number of reasons. The study found the main advantage of this structure is that it focuses economic analysis on the questions of interest to the ultimate decision makers. Additionally, the strong links between economists and attorneys working in the same division help to ensure that economists are answering all the legally relevant questions and that the decision makers can direct the efforts of economists to answer the questions that concern them. However, these arguments do not necessarily preclude the need to examine OSP’s role and determine whether it is able to address the economic implications of broad policy issues.

Several stakeholders have proposed a variety of options for restructuring FCC. One proposal is to replace industry-based bureaus with bureaus divided along functional goals. Some stakeholders have expressed concerns that FCC’s current bureau structure may lead to bureaus identifying with the industry they regulate, rather than taking an overarching view of an issue.
One trade group representative and a former FCC chairman stated that this leads to “fiefdoms,” where the staff members begin to act more like advocates for the industry they are regulating than as experts looking for the best decision. In addition, stakeholders stated that the culture of the bureaus may vary—depending on their history and the industry they regulate—and that this could create problems if competing services are treated differently based on which bureau is responsible for regulating the service. In response to such concerns, some stakeholders suggested that FCC create new functional bureaus that focus on areas that span a variety of service providers and industries, such as competition, licensing, and spectrum management. For example, one former FCC official suggested that FCC could create one bureau to handle spectrum management issues, which are currently divided among the Wireless Telecommunications Bureau, the Office of Engineering and Technology, the International Bureau, and the Public Safety and Homeland Security Bureau. Another stakeholder suggested FCC structure bureaus along overarching policy goals, such as culture and values (which would include broad issues such as obscenity, advertising rules, and public broadcasting) and markets (which would include allocation of spectrum, competition, and market analysis). The stakeholder stated that by reorganizing along such lines, FCC would create departments with technology and industry-neutral responsibilities for key social mandates, which would better enable FCC to address issues that span industry lines. However, a number of stakeholders and FCC officials expressed caution when discussing restructuring or reforming the bureaus. Restructuring is often resource-intensive and disruptive for an agency and can impact staff morale. In addition, it is unclear whether restructuring the bureaus would improve FCC’s ability to regulate these industries, since the Communications Act, as amended, establishes different regulatory regimes based on how a service is provided. Some industry and FCC stakeholders we interviewed also noted that in some cases, the current bureau structure works well, such as when issues fall within a specific bureau’s purview. For example, one FCC official noted that in some cases, it is useful to have various functions housed in a specific industry-based bureau, explaining that since rulemaking and licensing functions are housed in the Wireless Telecommunications Bureau, bureau staff understand the implications of administering the licensing rules made during the rulemaking process. Similarly, FCC officials stated that the industry-based bureaus allow staff to develop in-depth expertise on an issue. For example, an FCC official stated that the Media Bureau’s video division staff understand how to address most broadcast licensing and market issues and that splitting up the staff could result in a loss of group cohesion and institutional knowledge. Regardless of the organizational structure FCC decides to pursue, it is certain that technological advances and marketplace changes will contribute to an evolving regulatory landscape for the commission. To anticipate and quickly respond to these changing conditions, FCC will need mechanisms to ensure that staff can routinely and reliably coordinate and communicate across bureaus in order to foster and harness FCC’s collective knowledge on issues that span the bureaus. 
The absence of written policies outlining how bureaus should communicate and collaborate on crosscutting issues has led to inefficiencies in FCC’s decision-making process by leaving the extent of interbureau collaboration subject to the preferences of the chairman.

FCC chairmen have varied in their policies regarding commissioner access to bureau and office analyses during the decision-making process. For example, then-Acting Chairman Copps publicly stated that commissioners would have unfettered access to the bureaus, adding that bureaus should respond to requests from commissioners’ offices directly and as quickly as possible, without preapproval from the chairman’s office. In addition, former Chairman Kennard established internal procedures outlining how commissioners should receive information from bureaus and offices during the decision-making process. These procedures specified that bureau and office chiefs would provide detailed oral briefings or memoranda on upcoming items upon the request of commissioners and would solicit feedback from commissioners while developing draft items. Under Chairman Martin, there was a perception among some FCC commissioners and staff that the commissioners could not easily access bureau and office analyses. Stakeholders also told us that some previous chairmen had similarly limited commissioner access to bureau and office analyses. One rationale behind such policies was that giving the commissioners unrestricted access to agency staff could hinder the decision-making process by allowing commissioners to search for support among the bureau staff for any given position. Similarly, some stakeholders expressed concerns about providing commissioners full access to bureau staff. For example, one FCC official recounted prior instances in which commissioners requested information that placed bureau staff in the middle of commission-level policy disputes, and a former FCC official expressed concerns about commissioners making requests that could tie up bureau resources.

No explicit statutory or regulatory language exists that outlines commissioners’ access to internal information. The Communications Act, as amended, states that it is the duty of the chairman to coordinate and organize the work of the commission in such a manner as to promote prompt and efficient disposition of all matters within the jurisdiction of the commission. In implementing this, FCC’s chairman sets the agency’s agenda by directing the work of the bureaus and offices, which includes drafting agenda items for commission consideration. While FCC’s Agenda Handbook does specify that the bureaus and offices should provide commissioners copies of draft items for consideration and editing 3 weeks before the commission votes on the item at a public meeting, it does not specify the extent to which commissioners have access to the bureau and office staff and their analyses, including their ability to ask the staff questions about draft items or the analyses supporting those items. The absence of internal policies or statutory requirements has enabled each chairman to define how and when other commissioners receive bureau and office analyses during the decision-making process. Controlling commissioner access to staff analyses and opinions may subvert the commission’s decision-making process and has raised concerns among FCC officials and external stakeholders regarding the transparency and informed nature of that process.
Many stakeholders we interviewed, including former FCC officials and current FCC commissioners and bureau officials, noted the importance of bureau analyses to the commission’s decision-making process, with some stating that commissioners’ lack of access to bureau analyses can negatively impact the quality of FCC’s decisions. Two bureau officials explained that providing commissioners access to information improves FCC’s decisions by allowing for more informed deliberations. FCC officials also told us that in situations where commissioners are unable to access information from the bureaus and offices, commissioners may refuse to vote on an item, thereby delaying decision making. The ability of the chairman to exert control over the bureau and office analyses provided to commissioners has raised concerns as to whether the information provided reflects the bureaus’ and offices’ independent analyses or the chairman’s position on an issue. In addition, a current and a former commissioner stated that the chairman’s ability to influence what information FCC staff provided to commissioners increased the commissioners’ reliance on outside sources of information. The former commissioner noted that this raises concerns about the quality of information the commissioners may rely on and the transparency of the decision-making process, since private groups may be providing data that supports a particular agenda.

Regulatory bodies headed by multimember commissions, such as FCC, are often advocated and preferred over a department or agency headed by a single administrator because group decision making under conditions of relative independence is preferable to dominance by a single individual. For example, a major review of independent regulatory agencies concluded that a distinctive attribute of commission action is that it requires concurrence by a majority of members of equal standing after full discussion and deliberation, and that collective decision making is advantageous where the problems are complex, the relative weight of various factors affecting policy is not clear, and the choices are numerous. Another study promoted the use of the commission structure for FCC in particular, stressing that the commission prevents a single administrator from having undue influence over the sources of public information. We have also recognized the need to provide decision makers with the information needed to carry out their responsibilities. Our internal control standards state that information should be recorded and communicated to management and others within the entity who need it, in a form and within a time frame that enables them to carry out their responsibilities.

We also reviewed the policies of other independent regulatory agencies with regard to commissioner access to staff analyses. Officials at the Federal Energy Regulatory Commission and the Federal Trade Commission (FTC) told us that they do not have formal policies ensuring commissioner access to information, but stated that commissioners have not experienced problems obtaining information in the past. For example, an FTC official told us that the commission has had a long-standing practice that the commissioners have access to all of the information needed to perform their duties.
However, the Nuclear Regulatory Commission (NRC) is statutorily required to ensure that commissioners have full access to the information needed to perform their duties and that commissioners share “equal responsibility and authority in all decisions and actions of the commission.” In implementing this policy, NRC has developed and made publicly available its decision-making procedures, including commissioners’ rights to information. These procedures outline the responsibilities of the chairman and the commissioners, how commissioners receive items from commission staff, and how items are voted on. Some of the key ways in which NRC’s procedures provide commissioners access to information include:

- Requiring that draft and final analyses by NRC staff are simultaneously provided to all commissioners, including the chairman.
- Establishing that each commissioner, including the chairman, has equal responsibility and authority in all commission decisions and actions, and has full and equal access to all agency information pertaining to commission responsibilities.
- Balancing commissioner access to staff analyses with the ability of the chairman to direct resource expenditures. For example, although individual commissioners can request information or analyses from NRC staff, if the request requires significant resources to fulfill and questions of priority arise, the office or the commissioner can request the chairman resolve the matter. If the chairman’s decision is not satisfactory to the requesting commissioner or the office, either can bring the matter for a vote before the full commission.

NRC officials told us that these long-standing internal procedures, which are reviewed approximately every 2 years, have been helpful in avoiding protracted disputes over the prerogatives and responsibilities of the chairman and the other commissioners and ensuring that access issues are handled consistently.

When issuing an NPRM to gather public input before adopting, modifying, or deleting a rule, FCC rarely includes the text of the proposed rule in the notice, which may limit the effectiveness of the public comment process. A 2008 FCC draft order noted that during the period 1990 through 2007, the commission issued approximately 3,408 NPRMs, 390 (or 11.4 percent) of which contained the text of proposed rules under consideration. According to A Guide to Federal Agency Rulemaking, a resource guide created by the Administrative Law and Regulatory Practice and Government and Public Sector Lawyers Division of the American Bar Association, “most agencies publish the text of the proposed rule when commencing rulemaking, and some enabling statutes expressly require that the agency do so.” Widespread concern exists regarding the lack of details provided in FCC’s NPRMs, which generally ask for comment on wide-ranging issues, making the NPRM more like a Notice of Inquiry (NOI). FCC officials told us that FCC uses NPRMs rather than NOIs (the traditional method of gathering broad input on a topic) so that it can proceed directly to issuing a rule once one is developed. By contrast, if FCC used an NOI to gather information, then it would need to issue an NPRM before issuing a rule. Several stakeholders have stated that such broad NPRMs limit their ability to submit meaningful comments that address FCC’s information needs and increase FCC’s reliance on information provided in ex parte contacts.
For example, the Small Business Administration’s (SBA) Office of Advocacy noted its concerns about FCC’s use of NPRMs instead of NOIs to collect broad information on a number of issues. It argues that by issuing an NPRM that lacks specific proposals, FCC creates uncertainty in the industry, resulting in thousands of comments that can only speculate as to what action FCC may take and the potential impacts. SBA’s Office of Advocacy adds that small businesses, in particular, are often overwhelmed by the scope of a vague NPRM and cannot contribute meaningfully to the rulemaking process. In addition, part of the value of the public comment process is derived from external stakeholders’ ability to respond to other groups’ comments, thereby improving the public debate on an item. However, if parties are unsure of FCC’s intentions due to a lack of specificity in the NPRM and they submit general comments or wait until the ex parte process to provide input on an item, public debate can be limited.

The APA requires that an NPRM include “either the terms or substance of a proposed rule or a description of the subjects and issues involved.” Since the public is generally entitled to submit their views and relevant data on any proposals, the notice must be sufficient to fairly apprise interested parties of the issues involved, but it need not specify every precise proposal that the agency may ultimately adopt as a rule. The APA’s requirements are satisfied when the rule is a “logical outgrowth” of the actions proposed, which means that interested parties “should have anticipated the agency’s final course in light of the initial notice.” Although the APA does not specifically require that NPRMs contain proposed rule text, some studies of federal rulemaking have identified the benefits of providing proposed rule text for public comment. For example, A Guide to Federal Agency Rulemaking notes that “specific proposals help focus public comment, and that, in turn, assists reviewing courts in deciding whether interested persons were given a meaningful opportunity to participate in the rulemaking … a focused and well-explained NPRM can educate the public and generate more helpful information from interested persons.” Similarly, in its analyses of transparent governing and public participation in the rulemaking process, ICF International recommended that agencies garner more substantive public comments by issuing an Advance Notice of Proposed Rulemaking that lays out specific options under consideration and asks specific questions that are linked to a Web form.

FCC’s current ex parte process can lead to vague or last-minute ex parte summaries of meetings between FCC officials and external parties. The APA places no restriction on ex parte communication between agency decision makers and other persons during informal rulemaking. However, FCC has rules about such contacts that are intended to protect the fairness of proceedings by providing an assurance that FCC decisions are not influenced by off-the-record communications between decision makers and others. Stakeholders must provide FCC with two copies of written ex parte presentations, and the original and a copy of a summary of the new information provided during oral ex parte contacts, to be filed in the public record. FCC places the burden of preparing and ensuring that an ex parte summary is complete on the external party.
FCC’s ex parte rules provide general guidance on what is sufficient, stating that the summaries should generally be “more than a one or two sentence description” and not just a listing of the subjects discussed. When it is unclear whether data or arguments presented in an ex parte contact are already in the public record, FCC advises that parties briefly summarize the matters discussed at the meeting. FCC officials told us that they are in the process of reviewing and potentially changing the ex parte process. However, stakeholders expressed concerns about the submission of vague ex parte summaries under the current process. For example, an ex parte summary may simply state that an outside party met with FCC officials to share its thoughts on a proceeding. Stakeholders told us that vague ex parte summaries reduce transparency and public discourse in FCC’s decision-making process by limiting stakeholders’ ability to determine what information was provided in the meeting and to discuss or rebut that information. In 2002, an FCC commissioner stated that she believed that the “cursory filings” FCC routinely permits are an apparent violation of its rules requiring more than a one or two sentence description. Similarly, a former acting chairman noted the need to “enhance, or at least enforce,” FCC’s ex parte rules so that the public will find more than a brief ex parte letter that only identifies who attended a meeting, rather than what was said in the meeting.

According to FCC, the ex parte process is an important avenue for FCC in collecting and examining information during the decision-making process. FCC has previously told us that it generally does not produce its own studies to develop a rule. Rather, FCC relies on stakeholders to submit information and analysis that is then placed in the docket so that FCC and other stakeholders can critique the information. According to FCC officials, this results in both transparency and quality information because each stakeholder has had an opportunity to review and comment on all of the information in the docket. In addition, according to an official in FCC’s Office of General Counsel, ex parte meetings allow stakeholders and FCC to focus on specific issues of interest to FCC and to identify potential weaknesses in the existing arguments. An official in FCC’s Office of General Counsel recognized concerns that some ex parte summaries are cursory and vague and noted that to address this, FCC periodically sends reminders to commenters regarding the information required in ex parte summaries and has placed additional information about these requirements on FCC’s Web site. In 2000, FCC issued a public notice reiterating the public’s responsibilities in the ex parte process. This notice stated “the duty to ensure the adequacy of ex parte notices … rests with the person making the presentation. Staff members have the discretion to request supplemental filings if they feel that the original filing is inadequate, but the obligation to file a sufficient notice must be satisfied regardless of possible requests by the staff.”

FCC does not proactively determine whether the content of the summaries is sufficient. Specifically, FCC relies on a complaint-driven process to ensure that ex parte submissions comply with FCC’s rules. FCC’s Office of General Counsel reviews ex parte communications if it receives a complaint.
However, since the parties not present at the meeting are generally unsure as to what occurred, it is difficult for external stakeholders to determine whether an ex parte submission is sufficiently detailed. In addition, it can be difficult to determine if an ex parte summary is sufficient, because if a party is simply restating information it has already presented, then it can file a short ex parte summary or none at all. After the Office of General Counsel receives a complaint, it provides copies to the party referred to in the complaint and to the FCC staff present during the meeting, and the parties provide a written response to the office about their version of events. The Office of General Counsel is responsible for determining whether the issue has been appropriately resolved. FCC receives, on average, one complaint a month about ex parte communications.

Other aspects of the ex parte process can challenge stakeholders’ ability to submit information during FCC’s decision-making process. For example, one group noted that unlike public comments, which must be submitted by a specific deadline, the ex parte process does not have a definitive end date, and groups must expend their resources tracking ex parte submissions until the relevant item is voted on by the commission. In addition, stakeholders must attempt to determine what information was provided based on summaries of the ex parte meeting and submit written responses or attempt to meet with FCC officials to offer a countervailing viewpoint. This can present a particular burden for stakeholders with limited resources for tracking and responding to ex parte contacts. For example, two organizations told us that it is more difficult for groups that must travel to Washington, D.C., to participate in person at ex parte meetings than for groups with a presence inside Washington. One organization told us of instances in which FCC canceled meetings with them at the last minute, after the group had traveled from outside of Washington, D.C., to meet with FCC.

Several stakeholders also raised concerns regarding prior incidents in which parties made substantive ex parte submissions just before or during the Sunshine period, during which external contact with FCC officials is restricted, and thus, other groups are unable to respond to the information provided. Although, subject to certain exceptions, external parties are forbidden from contacting FCC officials after release of the Sunshine Notice (generally 1 week prior to a vote), FCC officials are allowed to initiate contact with external parties for additional information on an agenda item. This can lead to ex parte submissions affecting decisions without allowing for public comment on the information provided. For example, during the AT&T-BellSouth merger review, an ex parte communication occurred the day before the scheduled vote. During the communication, FCC proposed merger conditions, and the ex parte summary was filed the day of the proposed vote, thus preventing public comment and expert review. However, in response to complaints from the other commissioners, Chairman Martin delayed the merger vote to allow for public comment on the new changes.
An official in FCC’s Office of General Counsel told us that there are legitimate concerns about stakeholders’ ability to respond to ex parte presentations made during the Sunshine period pursuant to a Sunshine period exception, but added that if this occurs, stakeholders can request to be invited by FCC officials to file a counter ex parte communication. Finally, although parties are required to file a summary of ex parte contacts with FCC’s Secretary, all commissioners may not receive a copy of this summary. For example, if a paper copy is filed shortly before a scheduled vote, there may not be adequate time for the summary to be scanned and placed in the public record. FCC officials told us that there is currently no mechanism for notifying commissioners that ex parte summaries have been filed and added that commissioners rely on the public record to identify this information.

Other federal agencies have implemented different guidelines for the ex parte process. For example, the Department of Transportation (DOT) issued an order and accompanying procedures, noting the importance of providing interested members of the public adequate knowledge of contacts between agency decision makers and the public during the rulemaking process. DOT establishes that if such contact occurs prior to the issuance of an NPRM and is one of the bases for the issuance of the NPRM, the contact should be discussed in the preamble of the notice. In addition, although DOT recommends holding such contact to a minimum after the close of the reply comment period, noting that contacts occurring at this stage of the process tend to be hidden, DOT states that if such contacts do occur, the meeting should be announced publicly or all persons who have expressed interest in the rulemaking should be invited to participate. In addition, DOT requires that records of such contacts be promptly filed in the public record and states that while a verbatim transcript is not required, a mere recitation that the listed participants met to discuss a named general subject on a specified day is inadequate. Rather, DOT notes that such records should include a list of the participants, a summary of the discussion, and a specific statement of any commitments made by department personnel. Officials from FTC told us that agency personnel are responsible for submitting ex parte communications in writing to the FTC Secretary so that they can be placed on the public record. NRC officials told us that if comments submitted after the public comment period raise a significant new idea, NRC would place those comments in the record and might reopen the comment period to get reactions to the submission. NRC officials also noted that when NRC issues a request for public comments, comments received after the due date will be considered if it is practical to do so, and that NRC does reopen or extend a comment period to give people more time to consider complex issues.

Stakeholders concerned about FCC’s current ex parte process have suggested a number of changes. Some of the suggestions included: enhancing FCC’s guidelines regarding ex parte summaries, such as requiring that FCC officials reject incomplete summaries or certify that the summaries they receive accurately capture the substance of the information provided in meetings; improving FCC’s enforcement of its ex parte requirements; and limiting FCC’s use of last-minute ex parte contacts to inform its decisions.
An FCC official noted that one possible solution to ex parte submissions made during the Sunshine period would be to create an automatic right to respond for other stakeholders, but added that allowing for more contact during the Sunshine period would run counter to the idea of establishing a quiet period for the commissioners to consider an issue before voting. FCC is currently in the process of considering possible revisions to its ex parte policies and is exploring new methods of collecting public comment. One method under consideration includes collecting comments through its www.broadband.gov Web site, which allows members of the public to comment on a blog, request ex parte meetings, and obtain information about upcoming workshops. On October 28, 2009, FCC held a workshop on improving disclosure of ex parte contacts, during which participants discussed possible revisions to FCC’s current ex parte rules and processes.

Some academic and industry stakeholders have voiced concerns that FCC’s merger review process allows the agency to implement policy decisions without going through the rulemaking process. Companies holding licenses issued by FCC and wishing to merge must obtain approval from two federal agencies: the Department of Justice (DOJ) and FCC, which do not follow the same standards when reviewing mergers. While DOJ is charged with evaluating mergers through an antitrust lens, FCC examines proposed mergers under its Communications Act authority to grant license transfers. The act permits the commission to grant the transfer only if the agency determines that the transaction would be in the “public interest, convenience, and necessity.” A recent Congressional Research Service report noted that the public interest standard is generally considered broader than the competition analysis authorized by the antitrust laws and conducted by DOJ. The report concludes that the commission possesses greater latitude to examine other potential effects of a proposed merger beyond its possible effect on competition in the relevant market. In addition, FCC negotiates and enforces voluntary conditions on license transfers under the authority provided by §303(r) of the Communications Act, which grants the commission the authority to “prescribe such restrictions and conditions, not inconsistent with the law, as may be necessary to carry out the provisions” of the act, and §214(c), which grants the commission the power to place “such terms and conditions as in its judgment the public convenience and necessity may require.”

Several stakeholders told us that FCC has used its merger review authority to get agreements from merging parties on issues that affect the entire industry and should be handled via rulemaking rather than through merger-specific remedies. Stakeholders argue that this may lead to one set of rules for the merged parties and another set of rules for the rest of the industry. For example, rather than using an industry-wide rulemaking to address the issue of whether local telephone companies should be required to provide Digital Subscriber Line (DSL) service without requiring telephone service, FCC imposed this requirement solely on AT&T and Verizon during merger reviews. One stakeholder stated that by addressing broad policy issues through merger reviews rather than rulemakings, FCC is limiting public insight and participation in the regulatory process. Other stakeholders argue that FCC’s merger review process provides a needed public interest perspective.
In addition to concerns about FCC’s merger review process, there are also concerns about how FCC enforces its merger conditions. For example, one observer noted that despite requests from consumer groups such as Media Access Project and Public Knowledge, FCC declined to adopt specific enforcement mechanisms to ensure compliance with a series of conditions imposed during the merger review of XM and Sirius, including an “a la carte” mandate and a requirement to provide noncommercial channels. FCC officials told us that each bureau is responsible for ensuring merger conditions are adhered to.

As part of the general decrease in FCC staff that occurred from fiscal year 2003 to 2008, the number of engineers and economists at FCC declined. (See fig. 3.) From fiscal year 2003 to 2008, the number of engineers at FCC decreased by 10 percent, from 310 to 280. Similarly, over the same period, the overall number of economists decreased by 14 percent, from 63 to 54. Although the number of engineers and economists decreased from 2003 to 2008, the percentage of the workforce composed of engineers and economists remained the same. The overall decline in the number of key occupational staff occurred during a period of increased need for technical, economic, and business expertise. New developments, such as the rapid growth of handheld and wireless devices, are challenging existing regulatory structures. FCC also cited a number of economic issues that affect the expertise and workforce required, such as marketplace consolidation and the need to craft economic incentives for incumbent spectrum users to relocate to other spectrum. Additionally, 24 percent of FCC staff responses to the 2008 Office of Personnel Management (OPM) Federal Human Capital Survey disagreed with the statement “the skill level in my work unit has improved in the last year.” This was significantly more than the 17 percent of staff from all other agencies responding to the survey who disagreed with the statement. Similarly, several stakeholders we interviewed echoed the importance of increasing the level of expertise in certain areas at FCC and cited concerns regarding insufficient numbers of staff.

In addition to the decrease in engineers and economists, FCC faces challenges in ensuring that its workforce remains experienced and skilled enough to meet its mission, including a large number of staff who will be eligible for retirement. FCC projects that 45 percent of supervisory engineers will be eligible for retirement by 2011. While FCC has started hiring a larger number of engineers to replace retiring engineers and augment its engineering staff, most hires have been at the entry level. Of the 53 engineers hired in fiscal years 2007 and 2008, 43 were entry-level hires. During this same period, 30 engineers retired. Stakeholders stated that recent graduates sometimes have little experience or understanding of how policies affect industry. According to stakeholders, increasing the number of staff with backgrounds and experience in industry would improve FCC’s understanding of industry issues and could lead to better policies. For economists, FCC faces an even higher share of staff eligible for retirement by 2011. FCC reports that, as of April 2009, 67 percent of supervisory economists will be eligible to retire, as shown in table 1.
FCC may face challenges in addressing these impending retirements because 56 percent of nonsupervisory economists are also eligible to retire, and FCC did not hire any economists in fiscal year 2007 or 2008. Despite these trends, it is not clear how significantly they have affected the agency’s ability to meet its mission. For example, the 2008 OPM Federal Human Capital Survey showed that, similar to the rest of government, 75 percent of FCC staff agreed with the statement that the workforce has the knowledge and skills necessary to accomplish organization goals. Agency officials also noted that they can shift staff from one bureau to another as needs arise and the regulatory environment changes. For example, as the need for tariff regulation decreased, FCC shifted staff from that area into other areas. However, an FCC official indicated that with the decrease in the number of experienced engineers throughout the agency, more work has shifted to OET. The official added that if the bureaus had additional resources to recruit and retain more experienced engineers, then they could handle more complex issues within the bureau without relying on OET as much. Furthermore, additional engineering staff would allow the bureaus to reduce the amount of time it takes to conduct analyses and draft items. Additionally, former FCC officials told us that OSP needs additional resources to fulfill its mission.

FCC faces multiple challenges in recruiting new staff. One challenge FCC faces (similar to other federal agencies) is the inability to offer more competitive pay. Additionally, not having an approved budget and working under congressional continuing resolutions has hampered hiring efforts for engineers and economists. Competing priorities may also delay internal decisions regarding hiring. For example, OSP has not received the budgetary allocation for hiring new economists in time for the annual American Economic Association meeting for at least the past 4 years. This meeting is the primary recruiting venue for recently graduated economists. When FCC is not able to hire economists at the annual meeting, the agency potentially loses out on skilled employees who have been offered employment elsewhere. FCC officials told us that OSP has received permission to attend the 2010 American Economic Association meeting and hire at least one economist.

FCC also faces issues regarding the morale and motivation of its staff. According to the 2008 OPM Federal Human Capital Survey, FCC staff responses were significantly lower than other federal agencies’ staff in areas related to motivation, engagement, and views of senior leadership. (See table 2.) Low levels of motivation, commitment, and personal empowerment may exacerbate the challenges FCC faces in recruiting and maintaining an experienced staff. For example, stakeholders told us that part of attracting and retaining professional staff is using and valuing their expertise. If expertise is not used or valued, as has occurred in some instances at FCC, then this can have a negative impact on FCC’s ability to recruit top candidates in a given professional field. FCC officials told us that in response to the results from the OPM Federal Human Capital Survey, FCC identified leadership and communication skills as areas of focus. To address these needs, FCC has developed an internal Web site that provides a forum for communication and solicitation of information, concerns, and suggestions from staff within FCC.
In support of leadership, FCC is working to implement an executive leadership program for existing leaders and an emerging leadership training program to identify potential leaders within FCC and enhance their skills. FCC has instituted hiring and staff development programs designed to recruit new staff and develop the skills of its existing staff. While these programs are positive steps that can help attract, retain, and train new staff, it is not clear that these efforts are sufficient to address expertise gaps caused by retirements. Specific efforts include the following:

- FCC University: Established to provide the resources needed to increase the fluency of commission staff in a number of competency areas. Subject matter experts have been continuously and actively involved in defining the training needs and evaluating, designing, and delivering internal courses, and in updating the courses available in the FCC University catalog.
- Excellence in Engineering Program: A program that includes both basic and advanced courses in communications technology, a graduate degree program in engineering, and a knowledge-sharing program to increase the exchange of information among staff. The Excellence in Engineering award recognizes engineers, scientists, and other technical staff for outstanding contributions performed in the course of their work at the commission.
- Excellence in Economic Analysis Program: A program to ensure staff is fluent in the principles of communication economics. The program consists of ongoing training and development opportunities targeted at, but not limited to, staff economists, economics training for noneconomists, and research tools such as data analysis software. Another component of the program is the Excellence in Economic Analysis Award, which recognizes outstanding contributions to economic analysis at FCC based on the impact of the contribution on FCC policy or its significance for the general base of knowledge in economics or public policy analysis.
- Engineer in Training Program: A combined recruitment and accelerated promotion program designed to attract recent engineering graduates and provide them with accelerated promotion opportunities through successful completion of on-the-job training.

FCC has also pursued a variety of strategies to address new expertise needs and human capital challenges. In certain cases, FCC has been able to use direct-hire authority, which streamlines and expedites the typical competitive placement process. FCC was granted direct-hire authority from OPM in response to congressionally mandated requirements for a national broadband plan. In addition to using direct-hire authority, FCC used appointing authorities, which are outside of the competitive hiring processes, such as Recovery Act appointing authority, temporary consultants, and student appointments, as well as details for staff from other federal agencies to more quickly ramp up its broadband efforts. FCC also makes multiple efforts to determine the critical skills and competencies that are needed to achieve its mission, including meetings with bureau chiefs, as well as surveys of supervisors and staff. It has set forth occupation-specific competencies for its three key professional areas—engineers, attorneys, and economists. As part of FCC’s workforce planning efforts, bureau and office chiefs identify, justify, and make their requests for positions, including the type of expertise needed, directly to the chairman’s office.
According to FCC, the chairman’s office considers these requests from a commissionwide perspective, which includes the agency’s strategic goals, the chairman’s priorities, and other factors such as congressional mandates. The chairman’s office communicates the approval of requests directly to the bureau or office chiefs and informs the Office of Managing Director of the decision. Human resources works with bureaus and offices to implement approved hiring. This process can make it difficult for FCC to develop and implement a long-term workforce plan because workforce needs are driven by short-term priorities and are identified by compartmentalized bureaus rather than by a cohesive long-range plan that considers emerging issues. In addition, an FCC official noted that since FCC is a small agency and expertise needs change quickly, a particular area could be fully staffed with no need for additional hiring, but if two staff leave in a short time period, then an expertise gap could quickly develop and new staff would need to be hired. FCC officials told us that, because of this, they avoid laying out specific targets that might be impossible or undesirable to achieve due to evolving needs. Additionally, FCC officials told us that due to its size and limited hiring opportunities, it is important for the chairman and senior leadership to be able to adjust the goals identified in its Strategic Human Capital Plan. Without specific targets, FCC cannot monitor and evaluate the agency’s progress toward meeting its expertise needs.

Previously, we identified several key principles that strategic workforce planning should address, including determining the critical skills and competencies that will be needed to achieve current and future programmatic results; developing strategies that are tailored to address gaps in the number, deployment, and alignment of human capital approaches for enabling and sustaining the contributions of all critical skills and competencies; and monitoring and evaluating an agency’s progress toward meeting its human capital goals. Periodic measurement of an agency’s progress toward human capital goals provides information for effective oversight by identifying performance shortfalls and appropriate corrective actions. For example, a workforce plan can include measures that indicate whether the agency executed its hiring, training, or retention strategies as intended and achieved the goals for these strategies, and how these initiatives changed the workforce’s skills and competencies.

FCC has made efforts to determine the skills and competencies that are needed to achieve programmatic goals and has developed workforce hiring and training strategies. In addition, FCC’s current Strategic Human Capital Plan identifies skills and subspecialties needed in the future workforce. However, FCC’s Strategic Human Capital Plan does not establish specific targets for these needs or measures for evaluating its progress in meeting these skill needs. FCC officials told us they expect to develop a revised Strategic Human Capital Plan in support of a new FCC Strategic Plan, which they anticipate completing by the end of fiscal year 2010. Additionally, FCC is in the process of finalizing an OPM-required accountability plan to accompany its Strategic Human Capital Plan. It remains unclear whether FCC’s actions are sufficient to ensure that it retains a skilled workforce that can achieve its mission in the future.
FCC regulates the telecommunications industry—an industry that is critical to the nation's economy and public safety and that directly affects the ways in which Americans conduct business, socialize, and get their news and entertainment. In recent years, the industry has rapidly evolved, and changing technologies have created new issues that span FCC bureaus and require the expertise of a variety of FCC staff. These changes highlight the need for FCC to ensure that its decisions are fully informed by promoting internal communication and coordination among various bureaus and offices, ensuring commissioner access to staff analyses, effectively collecting public input on its proposed policy changes, and developing methods to ensure it has the staff expertise needed to address these issues. However, we identified several challenges in these areas. At the bureau and office level, FCC's lack of written procedures for facilitating the flow of information within the agency has in some cases led to ineffective interbureau coordination and allowed prior chairmen to limit internal communication among staff. In addition, it is unclear whether the roles of OET and OSP—two offices established to provide independent expertise on complex, crosscutting issues—are clearly defined or are overly subject to a chairman's preferences. Without written interbureau coordination procedures or clearly defined roles and responsibilities, FCC may be limited in its ability to address crosscutting issues. At the commission level, the lack of statutory requirements or internal policies on commissioners' rights and responsibilities during the decision-making process, including their right to bureau and office analysis, has allowed some chairmen to control how and when commissioners receive information from the bureaus and offices. Other independent regulatory agencies have varied in how they address this issue. Ultimately, if commissioners do not have adequate access to information, then the benefits of the commission structure—robust group discourse and informed deliberation and decision making—may be hampered. In addition, while FCC relies heavily on public input to inform its decisions, we found two primary weaknesses in its processes for collecting that input. First, FCC's use of NPRMs to pose broad questions without providing actual rule text can limit stakeholders' ability to determine either what action FCC is considering or what information would be most helpful to FCC when developing a final rule. Second, although FCC has developed rules intended to protect the fairness of ex parte proceedings, FCC neither provides detailed guidance on what constitutes a sufficient ex parte summary nor has a process for proactively ensuring that ex parte summaries are complete. If parties are able to submit vague ex parte summaries that may not fully reflect meetings between FCC officials and outside parties, then stakeholders will continue to question whether commission decisions are being influenced by information that was not subject to public comment or rebuttal and that, in some cases, is submitted just before a commission vote. FCC is currently exploring new methods of collecting public comment and potential revisions to its ex parte process. Finally, at a time when the telecommunications industry has become increasingly complex, a large percentage of FCC's economists and engineers will be eligible for retirement by 2011, and FCC has faced challenges in recruiting new staff.
FCC has taken several positive steps to help meet its workforce needs, including instituting hiring and staff development programs and beginning efforts to identify its current workforce expertise needs. However, continued focus on identifying and instituting additional methods that improve its flexibility to meet its expertise needs, and on developing measures for tracking its progress, will help ensure that FCC is well-positioned to anticipate and address its current and future workforce and expertise needs. We have identified four areas of concern and are making seven recommendations to address these concerns. To ensure interbureau coordination on crosscutting issues, we recommend that the Federal Communications Commission (FCC) take the following two actions: Develop written policies outlining how and when FCC will identify issues under the jurisdiction of more than one bureau; determine which bureau will serve as the lead on crosscutting issues and outline the responsibilities entailed regarding coordinating with other bureaus; and ensure that staff from separate bureaus and offices can communicate on issues spanning more than one bureau. Review whether it needs to redefine the roles and responsibilities of the Office of Engineering and Technology (OET) and the Office of Strategic Planning and Policy Analysis (OSP) and make any needed revisions. To clarify FCC's policies on providing commissioners access to information from bureaus and offices about agenda items, we recommend that FCC take the following two actions: Each chairman, at the beginning of his or her term, develop and make publicly available internal policies that outline the extent to which commissioners can access information from the bureaus and offices during the decision-making process, including how commissioners can request and receive information. Provide this policy to FCC's congressional oversight committees to aid their oversight efforts. To improve the transparency and effectiveness of the decision-making process, we recommend that FCC take the following two actions: Where appropriate, include the actual text of proposed rules or rule changes in either a Notice of Proposed Rulemaking or a Further Notice of Proposed Rulemaking before the commission votes on new or modified rules. Revise its ex parte policies to include modifying its current guidance to further clarify FCC's criteria for determining what is a sufficient ex parte summary and address perceived discrepancies at the commission on this issue; clarifying FCC officials' roles in ensuring the accuracy of ex parte summaries and establishing a proactive review process of these summaries; and creating a mechanism to ensure all commissioners are promptly notified of substantive filings made on items that are on the Sunshine Agenda. To improve FCC's workforce planning efforts, we recommend that FCC take the following action: In revising its current Strategic Human Capital Plan, include targets that identify the type of workforce expertise needed, strategies for meeting these targets—including methods to more flexibly augment the workforce—and measures for tracking progress toward these targets. FCC provided written comments, which are reproduced in appendix III. In its comments, FCC generally concurred with our recommendations and noted that it has already begun taking steps to address the areas of concern identified in our recommendations. For example, FCC stated that it is in the midst of a review of FCC's existing processes.
As part of this process, FCC is reviewing prior procedures for interbureau communication, as well as prior and current practices for commissioner and staff communication. FCC stated that it would identify and incorporate lessons learned and best practices into future internal procedures. FCC did not specifically state whether future policies on commissioner access to bureau and office information during the decision-making process would be made publicly available or provided to FCC's congressional oversight committees. We believe these would be important steps in improving the transparency of FCC's decision-making process. FCC also did not specifically discuss our recommendation that it review whether it needs to redefine the roles and responsibilities of OET and OSP and make any needed revisions. Regarding the public comment process, FCC stated that it has worked to include the text of proposed rules in recently issued NPRMs. However, FCC did not state whether this would be an ongoing policy. FCC also noted that the Office of General Counsel is in the midst of reviewing proposals for modifying the current ex parte process, and stated that this may lead to a rulemaking to address this issue. Finally, FCC believes that it does not face significant challenges in recruiting top candidates and stated that its unique mission and the influence of its regulatory activities on the communications industry and practices help it attract qualified candidates. However, it concurred that revisions to the current Strategic Human Capital Plan should include targets and measures for tracking progress toward these targets. We recognize FCC's efforts to enhance internal and external communication, to update its comment filing system, and to continue to review other existing processes and workforce planning efforts. However, addressing our specific recommendations will further enhance FCC's efforts to date by promoting internal communication and coordination, clarifying policies on commissioner access to staff analyses, enhancing FCC's methods for collecting public input, and developing methods to ensure it has the staff expertise it needs. In addition, we provided the Federal Energy Regulatory Commission, Federal Trade Commission, and Nuclear Regulatory Commission with a draft of this report for review and comment. They did not offer any comments on our findings or recommendations, but provided technical corrections, which we incorporated. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this report. At that time, we will send copies to the Chairman of the Federal Communications Commission and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. This report examines the Federal Communications Commission's (FCC) organization, decision-making process, and personnel management.
In particular, the report provides information on (1) the extent to which FCC's bureau structure presents challenges for the agency in adapting to an evolving marketplace; (2) the extent to which FCC's decision-making processes present challenges for FCC, and what opportunities, if any, exist for improvement; and (3) the extent to which FCC's personnel management and workforce planning efforts ensure that FCC has the workforce needed to achieve its mission. To respond to the overall objectives of this report, we interviewed current and former officials from FCC, including former chief economists and chiefs of staff, bureau and office chiefs and acting bureau and office chiefs, commissioners, and chairmen. In addition, we reviewed FCC documents, as well as relevant legislation, federal regulations, and GAO reports on the FCC and areas of focus for this review such as internal controls and workforce planning. We also interviewed industry associations representing broadcast and cable television, public television, consumer electronics, wireless, and telecommunications companies, public interest groups, and other individuals, such as academics with extensive telecommunications experience. Table 3 lists the organizations with which we spoke. To describe the challenges FCC's bureau structure presents for the agency in adapting to an evolving marketplace, we reviewed FCC's major internal reorganizations since the Telecommunications Act of 1996. We analyzed FCC procedures and applicable laws, and we reviewed academic literature on the commission structure and organizational theory, as well as various FCC reform proposals from a number of stakeholders. We used GAO's internal control and management tool to identify key mechanisms for facilitating the flow of information within an organization. To determine challenges the commission decision-making process presents for FCC and opportunities for improvement, we reviewed literature on federal rulemaking and potential reforms and on the commission structure and decision-making process. We reviewed FCC internal decision-making documents and the public comments of current and former FCC commissioners and former chairmen to determine how the decision-making process works. We also interviewed officials from independent regulatory agencies—including the Nuclear Regulatory Commission, the Federal Energy Regulatory Commission, and the Federal Trade Commission—and, where available, reviewed their internal commission procedures to understand how other independent regulatory agencies implement the commission decision-making process. We reviewed FCC's decision-making procedures and public comment and ex parte rules, and compared certain aspects to standards established in GAO's internal control standards and other relevant documents. In addition, we interviewed industry, consumer advocate, and regulatory representatives to gain their perspectives on providing information to FCC during the decision-making process and to identify alternative approaches to the decision-making process. Finally, we reviewed FCC documents, policy papers from outside stakeholders, letters to the Presidential Transition Task Team, as well as proposed legislation to determine proposals for altering FCC's public comment process.
To examine whether FCC’s personnel management and workforce planning efforts ensure that FCC has the workforce needed to achieve its mission, we reviewed prior GAO products related to strategic workforce planning and human capital challenges. We then reviewed FCC-generated data on overall staff levels, hiring, attrition, and retirement eligibility over the period of 2003 to 2008. We also reviewed FCC’s 2007-2011 Strategic Human Capital Plan to determine the challenges FCC has identified for addressing future workforce issues, as well as its proposed solutions. We reviewed FCC’s methods for identifying needed skill sets and competencies, including surveys of staff and focus groups. We analyzed results from the Office of Personnel Management’s (OPM) Federal Human Capital Survey for 2008 and compared FCC’s responses on various items with the responses of the rest of U.S. government staff. We performed our review from August 2008 to October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our review objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. FCC staff is organized into seven operating bureaus and 10 staff offices. The bureaus’ responsibilities include: processing applications for licenses and other filings; analyzing complaints; conducting investigations; developing and implementing regulatory policies and programs; and taking part in hearings. FCC’s offices provide support services for the bureaus and commission. Office of Inspector General: The Office of Inspector General conducts and supervises audits and investigations relating to FCC’s operations. The Inspector General reports to the chairman and informs the chair and Congress of fraud or any serious problems with the administration of FCC programs and operations discovered during audits and investigations; reviews and recommends corrective action, where appropriate; and reports on progress made in the implementation of those corrective actions. Office of Engineering and Technology: The Office of Engineering and Technology (OET) advises FCC on engineering matters, manages spectrum, and provides leadership in creating new opportunities for competitive technologies and services for the American public. OET allocates spectrum for nonfederal use and provides expert advice on technical issues before the commission, including helping commissioners understand the tradeoffs of technical issues. In addition to providing technical guidance to the commissioners, FCC’s other bureaus rely on OET to provide leadership on high-level technical and engineering issues that do not fall within the scope of a particular bureau and to provide advice on technical issues handled in the bureaus. Office of General Counsel: The Office of General Counsel serves as the chief legal advisor to the commission and to its various bureaus and offices. The General Counsel also represents the commission in litigation in federal courts, recommends decisions in adjudicatory matters before the commission, assists the commission in its decision-making capacity, and performs a variety of legal functions regarding internal and other administrative matters. 
Office of Managing Director: The Office of Managing Director functions as the commission's chief operating official, serving under the direction and supervision of the chairman. The office develops and manages FCC's budget and financial programs and its personnel management processes and policies, develops and implements agencywide management systems, coordinates the commission meeting schedule, and manages the distribution and publication of official FCC documents. Office of Media Relations: The Office of Media Relations is responsible for the dissemination of information on commission issues. The office is responsible for coordinating media requests for information and interviews on FCC proceedings and activities and for encouraging and facilitating media dissemination of commission announcements, orders, and other information. Office of Administrative Law Judges: The Office of Administrative Law Judges is responsible for conducting the hearings ordered by the commission. The hearing function includes acting on interlocutory requests filed in the proceedings, such as petitions to intervene, petitions to enlarge issues, and contested discovery requests. An administrative law judge, appointed under the Administrative Procedure Act, presides at the hearing, during which documents and sworn testimony are received in evidence and witnesses are cross-examined. At the conclusion of the evidentiary phase of a proceeding, the presiding administrative law judge writes and issues an initial decision, which may be appealed to the commission. Office of Legislative Affairs: The Office of Legislative Affairs is the FCC's liaison to Congress and provides lawmakers with information regarding FCC regulatory decisions, answers to policy questions, and assistance with constituent concerns. The office also prepares FCC witnesses for congressional hearings and helps create FCC responses to legislative proposals and congressional inquiries. Additionally, the office is a liaison to other federal agencies, as well as state and local governments. Office of Communications and Business Opportunities: The Office of Communications and Business Opportunities provides advice to the commission on issues and policies concerning opportunities for ownership by small, minority, and women-owned communications businesses. The office works with entrepreneurs, industry, public interest organizations, individuals, and others to provide information about FCC policies, increase ownership and employment opportunities, foster a diversity of voices and viewpoints over the airwaves, and encourage participation in FCC proceedings. Office of Workplace Diversity: The Office of Workplace Diversity advises the commission on all issues related to workforce diversity, affirmative recruitment, and equal employment opportunity. Office of Strategic Planning and Policy Analysis: The Office of Strategic Planning and Policy Analysis (OSP) is responsible for working with the chairman, the commissioners, bureaus, and offices to develop a strategic plan identifying short- and long-term policy objectives for the agency. OSP consists of economists, attorneys, and MBAs who serve as expert consultants to the commission in areas of economic, business, and market analysis and other subjects that cut across traditional lines, such as the Internet. The office also reviews legal trends and developments not necessarily related to current FCC proceedings, such as intellectual property law, the Internet, and e-commerce issues.
International Bureau: The International Bureau represents the commission in satellite and international matters. This includes advising the chairman and commissioners on matters of international telecommunications policy and the status of the commission’s actions to promote the vital interests of the American public in international commerce, national defense, and foreign policy areas. The bureau also develops, recommends, and administers policies, rules, and procedures for the authorization and regulation of international telecommunications facilities and service and domestic and international satellite systems. Wireless Telecommunications Bureau: The Wireless Telecommunications Bureau handles all FCC domestic wireless telecommunications programs and policies—except those involving public safety, satellite communications, or broadcasting—including licensing, enforcement, and regulatory functions. Wireless communications services include cellular telephone, paging, personal communications services, and other commercial and private radio services. The bureau also regulates the use of radio spectrum to fulfill the communications needs of business, aircraft and ship operators, and individuals. The bureau is responsible for implementing the competitive bidding authority for spectrum auctions. Enforcement Bureau: The Enforcement Bureau is responsible for enforcing provisions of the Communications Act of 1934, FCC’s rules and orders, and the terms and conditions of station authorizations. Major areas of enforcement that are handled by the Enforcement Bureau are (1) consumer protection enforcement, (2) local competition enforcement, and (3) public safety and homeland security enforcement. Consumer and Governmental Affairs Bureau: The Consumer and Governmental Affairs Bureau (CGB) develops and implements the commission’s consumer policies, including disability access. The bureau conducts consumer outreach and education and maintains a Consumer Center that responds to consumer inquiries and complaints. CGB also maintains collaborative partnerships with state, local, and tribal governments in areas such as emergency preparedness and implementation of new technologies. Media Bureau: The Media Bureau develops, recommends, and administers the policy and licensing programs relating to electronic media, including cable television, broadcast television, and radio in the United States and its territories. The Media Bureau also handles postlicensing matters regarding direct broadcast satellite service. Wireline Competition Bureau: The Wireline Competition Bureau develops and recommends policy goals, objectives, programs, and plans for the commission on matters concerning wireline telecommunications. The Wireline Competition Bureau’s overall objectives include ensuring choice, opportunity, and fairness in the development of wireline telecommunications services and markets; developing deregulatory initiatives; promoting economically efficient investment in wireline telecommunications infrastructure; promoting the development and widespread availability of wireline telecommunications services; and fostering economic growth. Public Safety and Homeland Security Bureau: The Public Safety and Homeland Security Bureau is responsible for developing, recommending, and administering the agency’s policies pertaining to public safety communications issues. 
These policies include 911 and E911, operability and interoperability of public safety communications, communications infrastructure protection and disaster response, and network security and reliability. The bureau also serves as a clearinghouse for public safety communications information and takes the lead on emergency response issues. In addition to the contact listed above, Andrew Von Ah (Assistant Director), Eli Albagli, Pedro Almoguera, Thomas Beall, Timothy Bober, Crystal Huggins, Delwen Jones, Aaron Kaminsky, Joshua Ormond, Sarah Veale, and Mindi Weisenbloom made major contributions to this report. | Rapid changes in the telecommunications industry, such as the development of broadband technologies, present new regulatory challenges for the Federal Communications Commission (FCC). The Government Accountability Office (GAO) was asked to determine (1) the extent to which FCC's bureau structure presents challenges for the agency in adapting to an evolving marketplace; (2) the extent to which FCC's decision-making processes present challenges for FCC, and what opportunities, if any, exist for improvement; and (3) the extent to which FCC's personnel management and workforce planning efforts face challenges in ensuring that FCC has the workforce needed to achieve its mission. GAO reviewed FCC documents and data and conducted literature searches to identify proposed reforms, criteria, and internal control standards and compared them with FCC's practices. GAO also interviewed current and former FCC chairmen and commissioners, industry stakeholders, academic experts, and consumer representatives. FCC consists of seven bureaus, with some structured along functional lines, such as enforcement, and some structured along technological lines, such as wireless telecommunications and media. Although there have been changes in FCC's bureau structure, developments in the telecommunications industry continue to create issues that span the jurisdiction of several bureaus. However, FCC lacks written procedures for ensuring that interbureau collaboration and communication occurs. FCC's reliance on informal coordination has created confusion among the bureaus regarding who is responsible for handling certain issues. In addition, the lack of written procedures has allowed various chairmen to determine the extent to which interbureau collaboration and communication occurs. This has led to instances in which FCC's final analyses lacked input from all relevant staff. Although FCC stated that it relies on its functional offices, such as its engineering and strategic planning offices, to address crosscutting issues, stakeholders have expressed concerns regarding the chairman's ability to influence these offices. Weaknesses in FCC's processes for collecting and using information also raise concerns regarding the transparency and informed nature of FCC's decision-making process. FCC has five commissioners, one of whom is designated chairman. FCC lacks internal policies regarding commissioner access to staff analyses during the decision-making process, and some chairmen have restricted this access. Such restrictions may undermine the group decision-making process and impact the quality of FCC's decisions. In addition, GAO identified weaknesses in FCC's processes for collecting public input on proposed rules.
Specifically, FCC rarely includes the text of a proposed rule when issuing a Notice of Proposed Rulemaking to collect public comment on a rule change, although some studies have noted that providing proposed rule text helps focus public input. Additionally, FCC has developed rules regarding contacts between external parties and FCC officials (known as ex parte contacts) that require the external party to provide FCC a summary of the new information presented for inclusion in the public record. However, several stakeholders told GAO that FCC's ex parte process allows vague ex parte summaries and that in some cases, ex parte contacts can occur just before a commission vote, which can limit stakeholders' ability to determine what information was provided and to rebut or discuss that information. FCC faces challenges in ensuring it has the expertise needed to adapt to a changing marketplace. For example, a large percentage of FCC's economists and engineers will be eligible to retire by 2011, and FCC faces difficulty recruiting top candidates. FCC has initiated recruitment and development programs and has begun evaluating its workforce needs. GAO previously noted that strategic workforce planning should include identifying needs, developing strategies to address these needs, and tracking progress. However, FCC's Strategic Human Capital Plan does not establish targets for its expertise needs, making it difficult to assess the agency's progress in addressing its needs. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Airports are a linchpin in the nation's air transportation system. Adequate and predictable funding is needed for airport development. The National Civil Aviation Review Commission—established by Congress to determine how to fund U.S. civil aviation—reported in December 1997 that more funding is needed to develop the national airport system's capacity, preserve small airports' infrastructure, and fund new safety and security initiatives. Funding is also needed to mitigate the noise and other negative environmental effects of airports on nearby communities. Airports provide important economic benefits to the nation and their communities. Air transportation accounted for $63.2 billion, or 0.8 percent, of U.S. Gross Domestic Product in 1996, according to the Department of Transportation's statistics. About 1.6 million people were employed at airports in 1998, according to the Airports Council International-North America. In our own study of airport privatization in 1996, we found that the 69 largest U.S. airports had 766,500 employees (686,000 private and 80,500 public employees). In 1996, tax-exempt bonds, the Airport Improvement Program (AIP), and passenger facility charges (PFC) together provided about $6.6 billion of the $7 billion in airport funding. State grants and airport revenue contributed the remaining funding for airports. Table 1 lists these sources of funding and their amounts in 1996. The amount and type of funding vary with airports' size. The nation's 71 largest airports (classified by the Federal Aviation Administration (FAA) as large hubs and medium hubs), which accounted for almost 90 percent of all passenger traffic, received more than $5.5 billion in funding in 1996, while the 3,233 other national system airports received about $1.5 billion. As shown in figure 1, large and medium hub airports rely most heavily on private airport bonds, which account for roughly 62 percent of their total funding. By contrast, the 3,233 smaller national system airports obtained just 14 percent of their funding from bonds. For these smaller airports, AIP funding constitutes a much larger portion of their overall funding—about half. Airports' planned capital development over the period 1997 through 2001 may cost as much as $10 billion per year, or $3 billion more per year than in 1996. Figure 2 compares airports' total funding for capital development in 1996 with their annual planned spending for development. Funding for 1996, the bar on the left, is shown by source (AIP, PFCs, state grants, and operating revenues). Planned spending for future years, the bar on the right, is shown by the relative priority FAA has assigned to the projects, as follows: Reconstruction and mandated projects, FAA's highest priorities, total $1.4 billion per year and are for projects to maintain existing infrastructure (reconstruction) or to meet federal mandates, including safety, security, and environmental requirements such as noise mitigation. Other high-priority projects, primarily adding capacity, account for another $1.4 billion per year. Other AIP-eligible projects, a lower priority for FAA, such as bringing airports up to FAA's design standards, add another $3.3 billion per year for a total of $6.1 billion per year. Finally, airports anticipate spending another $3.9 billion per year on projects that are not eligible for AIP funding, such as expanding commercial space in terminals and constructing parking garages.
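As an illustrative arithmetic check of these planned-spending categories—a minimal sketch, not part of the testimony—the figures below use the more precise amounts from the data labels accompanying figure 2 ($1,414 million per year for reconstruction and mandates and $1,360 million per year for other high-priority projects); the variable names are ours.

```python
# Back-of-the-envelope check of airports' planned annual development
# spending for 1997 through 2001 (amounts in billions of dollars per year,
# taken from the testimony and the figure 2 data labels).
reconstruction_and_mandates = 1.414  # FAA's highest priority
other_high_priority = 1.360          # primarily capacity projects
other_aip_eligible = 3.3             # e.g., meeting FAA design standards
not_aip_eligible = 3.9               # e.g., terminal retail space, parking

aip_eligible_total = (reconstruction_and_mandates + other_high_priority
                      + other_aip_eligible)
planned_total = aip_eligible_total + not_aip_eligible

print(f"AIP-eligible planned spending: ${aip_eligible_total:.1f} billion/year")  # ~6.1
print(f"Total planned spending: ${planned_total:.1f} billion/year")              # ~10.0
print(f"Difference from 1996 funding: ${planned_total - 7.0:.1f} billion/year")  # ~3.0
```

The totals reproduce, to within rounding, the $6.1 billion, $10 billion, and $3 billion figures cited above.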
Within this overall picture of funding and planned spending for development, it is difficult to develop accurate estimates of the extent to which AIP-eligible projects are deferred or canceled because some form of funding cannot be found for them. FAA does not maintain information on whether eligible projects that do not receive AIP funding are funded from other sources, deferred, or canceled. We were not successful in developing an estimate from other information sources, mainly because comprehensive data are not kept on the uses to which airport and special facility bonds are put. But even if the entire bond financing available to smaller airports were spent on AIP-eligible projects, these airports would have, at a minimum, about $945 million a year in AIP-eligible projects that are not funded. Conversely, if none of the financing from bonds were applied to AIP-eligible projects, then the full $3 billion funding shortfall would apply to these projects. The difference between current and planned funding for development is bigger, in percentage terms, for smaller airports than for larger ones. Funding for the 3,233 smaller airports in 1996 was a little over half of the estimated cost of their planned development, producing a difference of about $1.4 billion (see fig. 3). This difference would be even greater if it were not for $250 million in special facility bonding for a single cargo/general aviation airport. For this group of airports, the $782 million in 1996 AIP funding exceeds the annual estimate of $750 million for FAA's highest-priority projects—those involving reconstruction, noise mitigation, and compliance with federal mandates. However, there is no guarantee that the full amount of AIP funding will go only to the highest-priority projects, because one-third of AIP funds are awarded to airports on the basis of the number of passengers boarding commercial flights and not necessarily on the basis of projects' priority. As a proportion of total funding, the potential difference between 1996 funding and planned development for the 71 large and medium hub airports is comparatively less than for their smaller counterparts (see fig. 3 and fig. 4). Larger airports' potential shortfall of $1.5 billion represents 21 percent of their planned development costs, while smaller airports' potential shortfall of $1.4 billion represents 48 percent of their development costs. Therefore, while larger and smaller airports' respective shortfalls are similar in size, the greater scale of larger airports' planned development causes them to differ considerably in proportion. Figure 4 also indicates that $590 million in AIP funding falls $74 million short of the estimated cost to meet FAA's highest priorities for development—reconstruction, noise mitigation, and compliance with federal mandates. Proposals to increase airport funding or make better use of existing funding vary in the extent to which they would help different types of airports and close the gap between funding and the costs of planned development. For example, increasing AIP funding would help smaller airports more because current funding formulas would channel an increasing proportion of AIP to smaller airports.
Conversely, any increase in PFC funding would help larger airports almost exclusively because they handle more passengers and are more likely to have a PFC in place. Changes to the current design of AIP or PFCs could, however, lessen the concentration of benefits to one group of airports. FAA has also used other mechanisms to better use and extend existing funding sources, such as letters of intent, state block grants, and pilot projects to test innovative financing. So far, these mechanisms have had mixed success. Under the existing distribution formula, increasing total AIP funding would proportionately help smaller airports more than large and medium hub airports. Appropriated AIP funding for fiscal year 1998 was $1.7 billion; large and medium hub airports received nearly 40 percent and all other airports about 60 percent of the total. We calculated how much funding each group would receive under the existing formula, at funding levels of $2 billion and $2.347 billion. We chose these funding levels because the National Civil Aviation Review Commission and the Air Transport Association (ATA), the commercial airline trade association, have recommended that future AIP funding levels be stabilized at a minimum of $2 billion annually, while two airport trade groups—the American Association of Airport Executives and the Airports Council International-North America—have recommended a higher funding level, such as AIP’s authorized funding level of $2.347 billion for fiscal year 1998. Table 2 shows the results. As indicated, smaller airports’ share of AIP would increase under higher funding levels if the current distribution formula were used to apportion the additional funds. Increasing PFC-based funding, as proposed by the Department of Transportation and backed by airport groups, would mainly help larger airports, for several reasons. First, large and medium hub airports, which accounted for nearly 90 percent of all passengers in 1996, have the greatest opportunity to levy PFCs. Second, such airports are more likely than smaller airports to have an approved PFC in place. Finally, large and medium hub airports would forego little AIP funding if the PFC ceiling were raised or eliminated. Most of these airports already return the maximum amount that must be turned back for redistribution to smaller airports in exchange for the opportunity to levy PFCs. If the airports currently charging PFCs were permitted to increase them beyond the current $3 ceiling, total collections would increase from the $1.35 billion that FAA estimates was collected during 1998. Most of the additional collections would go to larger airports. For every $1 increase in the PFC ceiling, we estimate that large and medium hub airports would collect an additional $432 million, while smaller airports would collect an additional $46 million (see fig. 5). In total, a $4 PFC ceiling would yield $1.9 billion, a $5 PFC would yield $2.4 billion, and a $6 PFC would yield $2.8 billion in total estimated collections. Increased PFC funding is likely to be applied to different types of projects than would increased AIP funding. Most AIP funding is applied to “airside” projects like runways and taxiways. “Landside” projects, such as terminals and access roads, are lower on the AIP priority list. However, for some airports, congestion may be more severe at terminals and on access roads than on airfields, according to airport groups. 
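The two sets of estimates above—shortfalls as a share of planned development and PFC collections at higher ceilings—can be cross-checked with a short sketch. The dollar amounts come from the testimony; the assumption that collections scale linearly with the ceiling follows from the per-dollar estimates quoted above, and the derived planned-development totals (1996 funding plus potential shortfall) are our own inference.

```python
# Illustrative checks of the shortfall shares and the PFC ceiling estimates
# (amounts in billions of dollars, from the testimony).

# 1) Shortfall as a share of planned development costs, where
#    planned = 1996 funding + potential shortfall.
for group, funding, shortfall in (("large/medium hubs", 5.5, 1.5),
                                  ("smaller airports", 1.5, 1.4)):
    planned = funding + shortfall
    print(f"{group}: shortfall is {shortfall / planned:.0%} of planned costs")
    # -> roughly 21% and 48%, matching the percentages cited above

# 2) Estimated PFC collections at higher ceilings, assuming linear scaling.
baseline = 1.35             # FY1998 collections at the $3 ceiling
per_dollar = 0.432 + 0.046  # added collections per $1 increase, all airports
for ceiling in (4, 5, 6):
    estimate = baseline + per_dollar * (ceiling - 3)
    print(f"${ceiling} PFC ceiling: ~${estimate:.2f} billion")
    # -> ~$1.83, ~$2.31, ~$2.78 billion; the testimony's rounded totals are
    #    $1.9, $2.4, and $2.8 billion, so the estimates agree within rounding
```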
The majority of PFCs are currently dedicated to terminal and airport access projects and interest payments on debt, and any additional revenue from an increase in PFCs may follow suit. In recent years, the Congress has directed FAA to undertake other steps designed to allow airports to make better use of existing AIP funds. Thus far, some of these efforts, such as letters of intent and state block grants, have been successful. Others, such as pilot projects to test innovative financing and privatization, have received less interest from airports and are still being tested. Finally, one idea, using AIP grants to capitalize state revolving loan funds, has not been attempted but could help small airports. Implementing this idea would require legislative changes. Letters of intent are an important source of long-term funding for airport capacity projects, especially for larger airports. These letters represent a nonbinding commitment from FAA to provide multiyear funding to airports beyond the current AIP authorization period. Thus, the letters allow airports to proceed with projects without waiting for future AIP grants and provide assurance that allowable costs will be reimbursed. Airports may also be able to receive more favorable interest rates on bonds that are sold to finance a project if the federal government has indicated its support for the project in a letter of intent. For a period, FAA stopped issuing letters of intent, but since January 1997, it has issued 10 letters with a total funding commitment of $717.5 million. Currently, FAA has 28 open letters committing a total of $1.180 billion through 2010. Letters of intent for large and medium airports account for $1.057 billion, or 90 percent, of that total. Airports' demand for the letters continues—FAA expects at least 10 airports to apply for new letters of intent in fiscal year 1999. In 1996, we testified before this Subcommittee that FAA's state block grant pilot program was a success. The program allows FAA to award AIP funds in the form of block grants to designated states, which, in turn, select and fund AIP projects at small airports. States then decide how to distribute these funds to small airports. In 1996, the program was expanded from seven to nine states and made permanent. Both FAA and the participating states believe that they are benefiting from the program. In recent years, FAA, with congressional urging and direction, has sought to expand airports' available capital funding through more innovative methods, including the more flexible application of AIP funding and efforts to attract more private capital. The 1996 Federal Aviation Reauthorization Act gave FAA the authority to test three new uses for AIP funding—(1) projects with greater percentages of local matching funds, (2) interest costs on debt, and (3) bond insurance. In all, these three innovative uses could be tested on up to 10 projects. Another innovative financing mechanism that we have recommended—using AIP funding to help capitalize state airport revolving funds—while not currently permitted, may hold some promise. FAA is testing 10 innovative uses of AIP funding totaling $24.16 million, all at smaller airports. Five projects tested the benefits of the first innovative use of AIP funding—allowing local contributions in excess of the standard matching amount, which for most airports and projects is otherwise fixed at 10 percent of the AIP grant.
FAA and state aviation representatives generally support the concept of flexible matching because it allows projects that might otherwise be postponed for lack of sufficient FAA funding to move forward; in addition, flexible funding may ultimately increase funding to airports. The remaining five projects test the other two mechanisms for innovative financing. Applicants have generally shown less interest in these two options, which, according to FAA officials, warrant further study. Some federal transportation, state aviation, and airport bond rating and underwriting officials believe using AIP funding to capitalize state revolving loan funds would help smaller airports obtain additional financing. Currently, FAA cannot use AIP funds for this purpose because AIP construction grants can go only to designated airports and projects. However, state revolving loan funds have been successfully employed to finance other types of infrastructure projects, such as wastewater projects and, more recently, drinking water and surface transportation projects. While loan funds can be structured in various ways, they use federal and state moneys to capitalize the funds from which loans are then made. Interest and principal payments are recycled to provide additional loans. Once established, a loan fund can be expanded through the issuance of bonds that use the fund's capital and loan portfolio as collateral. These revolving funds would not create any contingent liability for the U.S. government because they would be under state control. Declining airport grants and broader government privatization efforts spurred interest in airport privatization as another innovative means of bringing more capital to airport development, but thus far efforts have shown only limited results. As we previously reported, the sale or lease of airports in the United States faces many hurdles, including legal and economic constraints. As a way to test privatization's potential, the Congress directed FAA to establish a limited pilot program under which some of these constraints would be eased. Starting December 1, 1997, FAA began accepting applications from airports to participate in the pilot program on a first-come, first-served basis for up to five airports. Thus far, two airports have applied to be part of the program. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions that you or Members of the Subcommittee may have.
| Pursuant to a congressional request, GAO discussed airport funding issues, focusing on: (1) the amount airports are spending on capital development and the sources of those funds; (2) how airports' plans for development compare with current funding levels; and (3) what effect various proposals to increase or make better use of existing funding would have on airports' ability to fulfill their capital development plans. GAO noted that: (1) 3,304 airports that make up the federally supported national airport system obtained about $7 billion in 1996 from federal and private sources for capital development; (2) more than 90 percent of this funding came from three sources: tax-exempt bonds issued by states and local airport authorities, federal grants from the Federal Aviation Administration (FAA) Airport Improvement Program (AIP), and passenger facility charges paid on airline tickets; (3) the magnitude and type of funding varies with airports' size; (4) the nation's 71 largest airports accounted for nearly 80 percent of the total funding; (5) airports planned to spend as much as $10 billion per year for capital development for the years 1997 through 2001, or $3 billion per year more than they were able to fund in 1996; (6) the difference between funding and the costs of planned development is greater for smaller commercial and general aviation airports than for their larger counterparts; (7) smaller airports' funding would cover only about half the costs of their planned development, while larger airports' funding would cover about four-fifths of their planned development; (8) airports' planned development can be divided into four main categories based on the funding priorities of AIP; (9) about $1.4 billion per year was planned for safety, security, environmental, and reconstruction projects, FAA's highest priorities for AIP funding; (10) another $1.4 billion per year was planned for projects FAA regards as the next highest priority, primarily adding airport capacity; (11) other projects FAA considers to be lower in priority, such as bringing airports up to FAA's design standards, add another $3.3 billion per year; (12) airports anticipated spending another $3.9 billion per year on projects that are not eligible for AIP funding, such as expanding commercial space in terminals and constructing parking garages; (13) several proposals to increase or make better use of existing funding have emerged in recent years, including increasing the amount of AIP funding and raising the maximum amount airports can levy in passenger facility charges; (14) under current formulas, increasing the amount of AIP funding would help small airports more than larger airports, while raising passenger facility charges would help larger airports more; and (15) other initiatives, such as AIP block grants to states, have had varied success, but none appears to offer a major breakthrough in reducing the shortfall between funding and airports' plans for development. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Centers for Medicare & Medicaid Services (CMS), an operating division of the Department of Health and Human Services (HHS), administers Medicare, Medicaid, and the State Children's Health Insurance Program. As administrator of Medicare, which paid about $215 billion in benefits to approximately 39.5 million Medicare beneficiaries in fiscal year 2000, CMS is the nation's largest health insurer. Although most participating providers comply with Medicare billing rules, inadvertent errors or intentional misrepresentations that result in overpayments to providers do occur. These overpayments represent money owed back to Medicare. According to the Financial Report for Fiscal Year 2000 of the Health Care Financing Administration (HCFA)—CMS's former name—about $8.0 billion out of $8.1 billion of the debts reported owed to CMS originated in the Medicare program. CMS's Medicare debts consist largely of overpayments to hospitals, skilled nursing facilities, physicians, and other providers of covered services and supplies under Part A (hospital insurance) and Part B (supplemental medical insurance) of the Medicare program. We examined two types of Medicare debts: Medicare secondary payer (MSP) debts. MSP debts arise when Medicare pays for a service that is subsequently determined to be the financial responsibility of another payer. Cases that result in MSP debts include those in which beneficiaries have (1) other health insurance furnished by their employer or their spouse's employer (or, in certain instances, another family member) that covers the medical services provided, (2) occupational injuries, illnesses, and conditions covered by workers' compensation, and (3) injuries, illnesses, and conditions related to a liability or no-fault insurance settlement, judgment, or award. Non-MSP debts. Medicare has paid certain institutional providers interim amounts based on their historical service to beneficiaries, although it is phasing out this payment method. Medicare contractors retrospectively adjust these payments based on their review of provider costs. When a provider's cost-reporting year is over, the provider files a report specifying its costs of serving Medicare beneficiaries. Cost report debts arise when the cost report settlement process, which includes audits and reviews by Medicare contractors, determines that the amount an institution was paid exceeds the final settlement amount. Another type of non-MSP debt related to cost reporting is unfiled cost report debt. If an institutional provider fails to submit a timely cost report, CMS establishes an unfiled cost report debt. The amount of the debt equals the full amount disbursed for the year in which the provider failed to submit a timely report. Most providers have an ongoing business relationship with the Medicare program; therefore, contractors are able to collect most non-MSP debts by offsetting subsequent Medicare payments to providers. However, if offsetting subsequent payments does not fully liquidate the debt (e.g., because the provider has left the Medicare program), unpaid balances more than 180 days delinquent are subject to the debt-referral requirements of the Debt Collection Improvement Act of 1996 (DCIA). CMS refers its eligible MSP and non-MSP debts to the Program Support Center (PSC), which provides debt management services for certain HHS operating divisions. Under DCIA, federal agencies are required to refer all eligible debts that are more than 180 days delinquent to Treasury or a Treasury-designated debt collection center. In 1999, Treasury designated PSC a debt collection center for HHS, allowing PSC to service certain debts, including MSP and unfiled cost report debts.
PSC is responsible for attempting to collect MSP debts, obtaining cost reports for unfiled cost report debts, reporting MSP and unfiled cost report debts to the Treasury Offset Program (TOP), and referring other types of Medicare debts to Treasury's Financial Management Service (FMS) for cross-servicing. In September 2000, we reported that CMS was slow to implement DCIA but could increase Medicare overpayment collections if it fully implemented the referral requirements of the act. We recommended, and CMS agreed, that CMS fully implement DCIA by transferring Medicare debts to PSC or Treasury for collection as soon as they became delinquent and were determined to be eligible. We also recommended that CMS refer the backlog of eligible Medicare debts to PSC as quickly as possible. We noted in the report that CMS had two pilot projects under way that were designed to expedite the transfer of delinquent Medicare debts for collection action. One pilot covered certain MSP debts valued at $5,000 or more, and the other covered certain non-MSP debts, primarily related to cost report audits, of $100,000 or more. Contractors participating in the pilots were to (1) verify the amount of a delinquent debt and ensure that it was still uncollected, (2) issue a DCIA intent letter indicating that nonpayment would result in the debt's referral to PSC, and (3) record the debt in a central CMS database used to transmit the debt to PSC for collection. CMS's goal is to have referred all eligible Medicare debts for collection action by the end of fiscal year 2002. As shown in table 1, CMS reported that about $6.6 billion of Medicare debts were more than 180 days delinquent or classified as currently not collectible (CNC) as of September 30, 2000. This information was reported in the Medicare Trust Fund Treasury Report on Receivables Due from the Public (TROR), which contained the most recent agency-certified information available during our review. Debts classified as CNC are written off the books for accounting purposes—that is, they are no longer carried as receivables. A write-off does not extinguish the underlying liability for a debt, and collection actions may continue to be taken on debts classified as CNC. Of the $6.6 billion of Medicare debts reported as more than 180 days delinquent or classified as CNC, CMS reported that it had referred approximately $2 billion of debts and had excluded from referral approximately $1.8 billion of debts. CMS also reported in the TROR that about $1.6 billion in unfiled cost reports were delinquent more than 180 days. Because CMS does not recognize amounts associated with unfiled cost reports as receivables for financial reporting purposes, the agency reports unfiled cost report debts more than 180 days delinquent as a separate, additional item in the TROR. With these exclusions and additions, CMS reported about $6.4 billion of Medicare debts eligible for referral to PSC for collection action as of September 30, 2000. Of the approximately $6.4 billion of Medicare debts that CMS had reported as eligible for referral by the end of fiscal year 2000, the agency reported that about $4.3 billion of the debts had not been referred to Treasury or a Treasury-designated debt collection center. About $2.6 billion of the unreferred amount was non-MSP debt, and the remainder was MSP debt. CMS's goal for fiscal year 2001, which the agency met, was to refer an additional $2 billion of unreferred eligible debts. CMS's goal for fiscal year 2002 is to refer the remainder of eligible Medicare debts.
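To make the flow of these reported amounts easier to follow, here is a rough reconciliation sketch (our illustration, not CMS's accounting; the reported amounts are rounded, so the arithmetic matches only approximately):

```python
# Reconciliation of reported Medicare debt amounts as of September 30, 2000
# (in billions of dollars, from the Medicare Trust Fund TROR).
delinquent_or_cnc = 6.6      # more than 180 days delinquent or CNC
excluded = 1.8               # excluded from the referral requirements
unfiled_cost_reports = 1.6   # reported separately, not carried as receivables
referred = 2.0               # already referred for collection action

eligible = delinquent_or_cnc - excluded + unfiled_cost_reports
unreferred = eligible - referred

print(f"Eligible for referral: ~${eligible:.1f} billion")  # ~6.4, as reported
print(f"Not yet referred: ~${unreferred:.1f} billion")     # ~4.4, vs. the
# reported ~$4.3 billion; the small gap reflects rounding in the inputs
```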
Our objectives were to determine whether (1) CMS was promptly referring eligible Medicare debts for collection action, (2) any obstacles were hampering CMS from referring eligible Medicare debts, and (3) CMS was appropriately using exclusions from referral requirements. Although CMS also administers Medicaid and the State Children's Health Insurance Program, we limited our review to Medicare debts because the Medicare program is the source of the vast majority of CMS's reported delinquent debt. To address our objectives, we obtained and analyzed the Medicare Trust Fund TROR for the fourth quarter of fiscal year 2000, which was the most recent agency-certified report available at the completion of our fieldwork, and other financial reports prepared by CMS. The most recent year-end TROR should contain the most reliable information available because Treasury requires that agency chief financial officers (or their designees) certify year-end data as accurate. We interviewed CMS and PSC officials to obtain an understanding of the debt-referral process and any obstacles that may be hampering referral of eligible debts. In addition, we reviewed CMS policies and procedures on debt referrals and examined current and planned CMS efforts to refer eligible delinquent debts. We also met with representatives from 4 selected CMS contractors that process and pay Medicare claims, and we discussed how they identified and referred eligible Medicare debts to PSC. At the time of our review, CMS had 55 Medicare contractors that processed claims and collected on overpayments. We used two criteria to select the 4 contractors: (1) the size of their debt portfolio and (2) whether the contractor participated in the CMS pilot projects. Specifically, 1 of the selected contractors had the largest amount of debt overall and the largest amount of Part A debt, 1 other selected contractor had the largest amount of Part B debt, and another of the selected contractors had the largest amount of MSP debt. We selected the fourth contractor to ensure that our review covered at least one-third of all the debt maintained at the CMS contractors. Three of the 4 contractors that we selected participated in the MSP pilot project, and 2 participated in the non-MSP pilot project. As agreed with your office, we did not test selected debts that were excluded from referral because the HHS Office of Inspector General (OIG) was performing detailed testing of CMS's implementation of DCIA and the effectiveness of its debt collection and debt management activities. As part of its work, the OIG tested selected debts at CMS and its Medicare contractors to determine whether the status of debts had been appropriately categorized. We also did not independently verify the reliability of certain information that CMS and PSC provided (e.g., debts reported as more than 180 days delinquent). We performed our work from November 2000 to September 2001 in accordance with U.S. generally accepted government auditing standards. We requested written comments on a draft of this report from the administrator of CMS or his designated representative. CMS's letter is reprinted in appendix I. We also considered, but did not reprint, the technical comments provided with CMS's letter and have incorporated them throughout this report, where appropriate. Overall, CMS did not promptly refer all of its reported eligible Medicare debts in fiscal year 2001.
Although CMS referred approximately $2.1 billion of Medicare debts during the year, almost all were non-MSP debts primarily related to cost report audits. Further, the vast majority of these debt referrals—about $1.9 billion—occurred late in the fiscal year, from June through September. While approximately $1.8 billion of MSP debts were reported as eligible for referral as of September 30, 2000, CMS referred only about $47 million of MSP debts in fiscal year 2001. CMS made progress in referring non-MSP debts to PSC during fiscal year 2001, but most of the progress occurred late in the fiscal year. Problems with the debt-referral system contributed to the late referral of non-MSP debts. Although CMS reached its $2 billion referral goal for fiscal year 2001, both the prospects for collection during the year and the collectibility of the debts were likely diminished by the referral delays. At the end of fiscal year 2000, about $2.6 billion of non-MSP debts remained to be referred. Throughout most of fiscal year 2001, CMS made little progress in referring these debts. It was not until June 2001, approximately two-thirds of the way through the fiscal year, that CMS began making substantial referrals of non-MSP debts to PSC. Of the approximately $2.1 billion of non-MSP debts reported as being referred during fiscal year 2001, CMS referred about $1.9 billion of the debts from June through September. CMS officials stated that they were not significantly concerned by the low level of non-MSP debt referrals during the first two-thirds of fiscal year 2001 because they met their goal of referring $2 billion of eligible Medicare debts in fiscal year 2001 and they intend to meet their goal of referring the remaining eligible debts by the end of fiscal year 2002. However, the prompt referral of delinquent debts is critical because, as industry statistics indicate, the likelihood of recovering amounts owed on delinquent debts decreases dramatically as the age of the debt increases. CMS made little progress in referring the approximately $1.8 billion of MSP debts that were reported as eligible for referral as of September 30, 2000. Limited contractor efforts, coupled with inadequate monitoring of contractor performance by CMS, contributed to the slow progress. In addition, many existing MSP debts will never be referred because in February 2001 CMS instructed its Medicare contractors to close out MSP debts delinquent more than 6 years and 3 months, thereby terminating all collection efforts on such debts. Unreferred MSP debts represented about 40 percent of the approximately $4.3 billion of reported eligible Medicare debts that had not been referred for collection as of September 30, 2000. PSC collection reports show that the center has had comparatively more success in collecting MSP debts than it has had in collecting non-MSP debts. By the end of fiscal year 2001, PSC reported collecting almost as much on delinquent MSP debts as on delinquent non-MSP debts, even though the total dollar amount of MSP referrals was a small fraction, about 2 percent, of the total dollar amount of non-MSP referrals. CMS began referring MSP debts to PSC in March 2000. PSC records indicate that through September 30, 2001, CMS had referred only about $83 million, or 5 percent, of the approximately $1.8 billion of MSP debts eligible for referral to PSC as of September 30, 2000. Of this amount, about $47 million was referred in fiscal year 2001.
Beyond initial demand letters, these limited referrals were likely the only collection action taken on most of the eligible MSP debts from March 2000 through September 2001; in most cases, CMS instructed its contractors only to send initial demand letters to MSP debtors and follow up on any resulting inquiries. CMS did not establish and implement effective controls to promptly refer eligible Medicare debts to PSC for collection action. CMS failed to promptly refer non-MSP debts because the agency had problems with its debt-referral system. Limited contractor efforts, coupled with inadequate CMS monitoring of contractor performance, were primarily responsible for the slow progress in referring MSP debts. Because of a CMS policy to close out debts delinquent more than 6 years and 3 months, some debts will never be referred for collection action. In addition, CMS has not developed a process to report closed-out debts to IRS, even though discharged debt is considered income and may be taxable. Non-MSP debt referrals were delayed until late in fiscal year 2001 primarily because CMS suspended its debt-referral system in November 2000. According to a CMS official responsible for non-MSP debt referrals, the agency suspended the system in order to identify and correct numerous discrepancies found in the system's data (e.g., duplicate debt entries, inconsistencies between debt amounts in the referral system and debt amounts in the tracking system) and to place additional edits in the system to prevent such errors in the future. CMS did not resume referring non-MSP debts to PSC through the debt-referral system until June 2001. Not only did CMS's suspension of the debt-referral system limit the debt-referral activities of the 5 contractors participating in the non-MSP pilot, it also delayed CMS's planned October 2000 expansion of the debt-referral program to all contractors. CMS did not issue updated instructions for referring non-MSP debts to each of its 55 contractors until April 2001. The guidance, revised in response to our September 2000 recommendation that all CMS debt be transferred to PSC as soon as it becomes delinquent and is eligible for transfer, expanded the criteria for referring non-MSP debts by including Part B debts, as well as Part A debts, and lowering the referral threshold from $600 to $25. After the debt-referral system began operating again and the referral requirements were expanded and extended to all contractors, CMS increased its referrals of non-MSP debts to PSC by about $1.9 billion from June through September 2001. The low referral of MSP debts in fiscal year 2001 occurred partly because for most of the year, until May 2001, only the 15 contractors participating in the pilot project were authorized to identify eligible Part A debts and refer them to PSC. According to information from CMS, as of September 30, 2000, these 15 contractors held a total of about $542 million of Part A MSP debts that were more than 180 days delinquent, representing about 31 percent of MSP debts eligible for referral as of that date. In response to our September 2000 recommendation, CMS issued a program memorandum in May 2001 extending to all MSP contractors the requirement to identify delinquent MSP debts and refer them to PSC. CMS also expanded the referral criteria to include Part B debts, as well as Part A debts. The dollar threshold for referral is to be reduced in phases, from $5,000 to $25.
The phased reduction is intended both to eliminate the backlog of higher-dollar debts and to ensure referral of current debts, thereby avoiding a continuing backlog. A CMS official stated that the memorandum was not issued sooner partly because CMS had to respond to contractors’ concerns that they needed additional funding to automate their debt-referral processes to comply with the new referral requirements. The CMS official stated that after much consideration, CMS concluded that referrals could be performed manually and that seeking additional funding for automation would likely cause further delays in referring MSP debts to PSC. Another factor that contributed to the low amount of MSP debt referred to PSC was the failure of certain pilot project contractors to promptly refer eligible debts. Under the MSP pilot project, contractors were required to identify eligible Part A debts, send DCIA intent letters (which state CMS’s intention to refer a debt for collection action if it is not paid within 60 days) to those debtors, and enter the debt information into the debt-referral system. We selected and reviewed the work of 3 large Medicare contractors that participated in the MSP pilot project and found that none of the 3 promptly identified and referred all eligible MSP debts. One of the contractors held $255 million of Part A MSP debt more than 180 days delinquent as of September 30, 2000. As of May 2001, the contractor reported that it had identified and sent out DCIA intent letters for only about $33 million, or about 13 percent, of the debt. The contractor official responsible for MSP debts stated that the contractor was under the impression that the pilot project required it to make only two file queries, in February 2000, to identify eligible debts and that the queries were to cover only debts incurred from March 1997 through August 1998. However, our review of the implementing instructions for the pilot project found that it was to cover all MSP debts that were not more than 6 years old, and CMS officials responsible for MSP debts advised us that they had never instructed the contractor to limit its file queries. Another of the 3 contractors whose work we reviewed held about $61 million of Part A MSP debt delinquent more than 180 days as of September 30, 2000. The contractor official responsible for MSP debts stated that the contractor believed that the MSP pilot project had ended in August 2000. As such, from September 2000 through December 2000, the contractor did not review its debt portfolio to identify additional MSP debts eligible for referral. The contractor subsequently began identifying and referring debts again in January 2001. In addition, the contractor’s records indicated that as of April 2001, about $6.2 million, or 48 percent, of the $12.8 million of debt for which it had sent DCIA intent letters prior to September 2000 had not been referred to PSC. These debts remained at the contractor even though they were well beyond the 60-day time frame CMS specified for referring debts to PSC after a DCIA intent letter is sent. The responsible contractor official was unable to explain why the debts had not been referred for collection action. Before our review, CMS had not developed or implemented policies and procedures for monitoring contractors’ referral of MSP debts. 
As a result, CMS did not monitor the extent to which contractors referred specific MSP debts to PSC and did not identify specific contractors, such as those mentioned above, that failed to identify and refer all eligible debts. Without such monitoring, CMS could not take prompt corrective action. This lack of procedures for monitoring contractors and the resulting lack of monitoring are inconsistent with the Comptroller General's Standards for Internal Control in the Federal Government. The standards state that internal controls should be designed to assure that ongoing monitoring occurs in the course of normal operations and that such monitoring should be performed continually and be ingrained in agency operations. In response to our work, CMS officials stated that in June 2001 they had begun to review selected contractors' MSP debt referrals. A CMS official said that the 10 CMS regional offices would assume a more active role in ensuring that contractors promptly refer eligible MSP debts to PSC. As of September 2001, CMS had not developed formal written procedures for monitoring contractors, but agency officials stated that they planned to develop such procedures. Many MSP debts will never be referred to PSC because of a CMS decision to close out older MSP debts. In February 2001, CMS issued guidance to its contractors directing them to terminate collection action on (that is, close out) MSP debts delinquent more than 6 years and 3 months. CMS officials stated that the agency selected this delinquency criterion because the statute of limitations prevents the Department of Justice from litigating to collect debts more than 6 years after they become delinquent. Also, because these debts are closed out, they will never be reported to FMS for TOP, which has been FMS's most effective debt collection tool. For fiscal year 2000, Treasury found that the collection rate for the small amount of MSP debt that had been reported to TOP was about 10.5 percent, which is higher than TOP's average collection rate. The February 2001 guidance was a continuation of CMS policy set forth in the agency's instructions to contractors at the start of the MSP pilot project in fiscal year 2000, which authorized contractors to identify and refer only debts up to 6 years old. A CMS official stated that older MSP debts were closed out because it was not cost-effective to collect them. However, CMS could not provide any documentation to support the assertion that it is not cost-effective to attempt to collect older MSP debts, and CMS did not test this assumption in its MSP pilot project. Age alone is not an appropriate criterion for terminating collection action on a debt. The agency should pursue all appropriate means of collection on a debt and determine, based on the results of the collection activity, whether the debt is uncollectible. According to discussions with contractor officials, collection activity prior to the termination of the debts likely involved only the issuance of demand letters, as required by CMS's Budget and Performance Requirements for contractors. The CMS official said she was not aware of any assessment performed to determine the total dollar amount of debts that will be designated as eligible for close-out because of this age threshold. During our review, CMS had already approved close-out of about $86 million of MSP debts at the contractors we visited.
About $85 million of these debts were less than 10 years old and therefore could have been referred to PSC for collection action, including reporting to TOP. In a related matter, CMS has not established a process, including providing authorization to PSC, to report closed-out MSP debts to IRS. The Federal Claims Collection Standards and Office of Management and Budget (OMB) Circular No. A-129 require that agencies, in most cases, report closed-out debt amounts to IRS as income to the debtor, since those amounts represent forgiven debt, which is considered income and therefore may be taxable at the debtor's current tax rate. Thus, reporting the discharge of indebtedness to IRS may benefit the federal government, through increased income tax collections. CMS stated that agency officials and the CMS Office of General Counsel are discussing the reporting of closed-out MSP debts to IRS but did not specify when actions, if any, would be taken to report such debts to IRS. Even with CMS's non-MSP debt-referral system operating again and its MSP and non-MSP referral requirements extended to all of its contractors, the agency still faces obstacles to effectively managing its Medicare debt referrals. As mentioned earlier, in fiscal year 2001 CMS expanded debt-referral requirements from the pilot projects to include all 55 Medicare contractors. CMS lacks complete and accurate debt information, however, and this shortcoming will likely hamper the agency's ability to adequately monitor contractors' debt referrals. In addition, CMS's referral instructions to contractors currently do not cover some types of Medicare debts, including MSP liability debts. Without a comprehensive plan in place that covers all types of Medicare debts, CMS faces significant challenges in achieving its goal of referring all eligible Medicare debts by the end of fiscal year 2002. All Medicare contractors are now responsible for identifying eligible debts from their debt portfolio, sending out DCIA intent letters to debtors, and referring eligible debts to PSC. To help ensure that all eligible Medicare debts are promptly identified and referred for collection, CMS must monitor contractors' debt-referral practices. To monitor effectively, the agency needs comprehensive, reliable debt information from its contractors, but CMS systems currently do not contain complete and accurate information on all CMS Medicare debts. One of CMS's most daunting financial management challenges continues to be the lack of a financial management system that fully integrates CMS's accounting systems with those of its Medicare contractors. Because CMS does not have a fully integrated accounting system, each MSP debt is maintained only in the internal system of the specific contractor that holds the debt. CMS has no centralized database that includes all MSP debts held by contractors. As a result, the agency cannot effectively monitor the extent to which its various contractors are promptly identifying eligible MSP debts and referring them to PSC for collection. CMS is developing a system that is to include a database containing all MSP debts. However, the agency plans to phase the system in, and it is not scheduled to be fully implemented at all contractors until the end of fiscal year 2006. CMS has two debt-tracking systems for its non-MSP debts, one for Part A debts and one for Part B debts.
Medicare contractors are responsible for entering non-MSP debts into the systems and updating the debts' status (with respect to bankruptcy, appeals, etc.) as appropriate. According to CMS officials, the agency intends to use these systems to monitor contractors to ensure that they are promptly identifying and referring eligible debts to PSC. Accurate tracking information is critical for monitoring debt-referral practices. CMS found, however, that its non-MSP debt-tracking systems contain inaccurate information because a significant number of contractors have not been adequately updating information in the systems. CMS performed contractor performance evaluations for fiscal year 2000 on 25 contractors and found that 19 were not adequately updating information in the non-MSP debt-tracking systems. For 5 of the 19 contractors, CMS considered the problems to be significant enough to require the contractors to develop written performance improvement plans. Our work at the 2 selected contractors involved in the non-MSP pilot project corroborated CMS's own findings. CMS periodically sent non-MSP pilot contractors a list of eligible Part A debts from the agency's debt-tracking system for possible referral to PSC. For the 2 non-MSP contractors we reviewed, CMS selected $1.3 billion of debts from the Part A non-MSP debt-tracking system. The contractors determined that $289 million of the debts, or about 23 percent, were actually ineligible for referral because they were in bankruptcy, under appeal, or under investigation for fraud. In addition, we identified $21 million of debts that 1 of the 2 non-MSP pilot contractors had misclassified on the CMS debt-tracking system as bankruptcy debt and ineligible for referral. These debts had actually been dismissed from the bankruptcy proceedings and therefore should have been reported in the debt-tracking system as eligible for referral. In this case, the contractor had not updated its own internal system for $8 million of the debts and was therefore not pursuing postdismissal collection actions on them. For the remaining $13 million, the contractor had updated its internal system and was pursuing collection but had failed to properly update the CMS debt-tracking system. To effectively monitor contractor performance, CMS must have the ability to determine whether contractors are referring debts promptly. However, CMS's non-MSP debt-tracking systems lack the capacity to indicate whether contractors are promptly entering non-MSP debts into the debt-referral system after they mail DCIA intent letters because the systems do not track the date of status code changes (e.g., the date when the DCIA letter was issued). We found that CMS's non-MSP debt-tracking system for Part A debts did not identify $5.2 million of debts that had been pending referral for at least 9 months at 1 of the 2 non-MSP contractors that we reviewed. In response to our work, CMS officials stated that they are in the process of modifying the non-MSP debt-tracking systems to allow the agency to monitor how promptly contractors are referring debts in the future. CMS has not developed a comprehensive plan that covers all types of Medicare debt eligible for referral. The agency lacks information on the total dollar amount of eligible debts not covered by its current referral instructions to the Medicare contractors, and it has not developed a detailed plan or specific time frame for referring these debts.
Without a comprehensive plan in place, CMS faces significant challenges in achieving its goal of referring 100 percent of eligible debts in fiscal year 2002. Types of debt for which CMS has not yet established a referral plan include, but are not limited to, the following:

MSP liability. MSP liability debts arise when Medicare covers expenses related to accidents, malpractice, workers' compensation, or other items not associated with group health plans that are subsequently determined to be the responsibility of another payer.

Part A claims adjustments. Part A claims receivables are created when previously paid claims are adjusted. Reasons for claims adjustments include duplicate processing of charges or claims, payment for items or services not covered by Medicare, and incorrect billing. The CMS debt-tracking system does not track these debts. Debts resulting from claims adjustments are generally offset from subsequent Medicare payments and require no further collection action. Should subsequent Medicare payments be unavailable for offset, however, no requirements exist for Medicare contractors to perform any other collection actions, such as issuing a demand letter.

We found that as of September 30, 2000, the four contractors we reviewed held about $9.6 million of MSP liability debts and about $10.7 million of debts related to Part A claims adjustments. CMS officials stated that the agency intends to refer both types of debt to PSC in the future. The amounts of eligible debt CMS reported in the September 30, 2000, Medicare Trust Fund TROR were not reliable. CMS did not properly report the delinquency aging for certain debts, including debts previously transferred to regional offices for collection. CMS also did not properly report its exclusions from referral requirements. For example, the agency inappropriately reported as excluded $149 million of non-MSP debts that had been referred to CMS regional offices for collection. In addition, CMS did not report any exclusion amounts for MSP debts, even though we noted that certain MSP debts were involved in litigation, or for non-MSP debts under investigation for fraud. Finally, because of a data-entry error, CMS inadvertently overstated debt referrals by $67 million. It is imperative that CMS provide Treasury with reliable information on eligible Medicare debt. Treasury uses the information to monitor agencies' implementation of DCIA. In addition, the TROR is Treasury's only comprehensive means of periodically collecting data on the status and condition of the federal government's nontax debt portfolio, as required by the Debt Collection Act of 1982 and DCIA. CMS's delinquent Medicare debts represent a significant portion of delinquent debts governmentwide. Therefore, they must be reported accurately if governmentwide debt information is to be useful to the President, the Congress, and OMB in determining the direction of federal debt management and credit policy. According to CMS officials, the agency is revising its method for determining eligible debt amounts. For example, CMS officials stated that the agency no longer reports debts referred to regional offices as exclusions and is in the process of identifying and reporting exclusion amounts for MSP debts. Although CMS made progress in referring eligible Medicare debts to PSC in fiscal year 2001 and met its referral goal for the year, a substantial portion of Medicare debts—particularly MSP debts—is still not being promptly referred for collection action.
Inadequate contractor monitoring, resulting partly from CMS's debt system limitations, has contributed to the slow pace of MSP debt referrals. In addition, CMS has not begun referring certain types of eligible Medicare debts, such as MSP liability debts, and those debts will continue to age until CMS completes and implements a comprehensive referral plan. Since recovery rates decrease dramatically as debts age, CMS cannot accomplish DCIA's purpose of maximizing collection of federal nontax debt unless it refers eligible debts promptly. CMS's policy of closing out eligible MSP debts solely on the basis of their age, without performing a quantitative study to determine whether collection action would be cost-effective, has also reduced referrals and eliminated opportunities for potential collections on those debts. In addition, by not reporting closed-out debts to IRS, the federal government may be missing an opportunity to increase government receipts. Medicare debts are a significant share of delinquent debt governmentwide, and CMS's inaccurate reporting to Treasury on exclusion amounts, debt aging, and referrals may distort governmentwide debt information used to determine the direction of federal debt management and credit policy. CMS's inaccurate reporting of eligible debt amounts also impedes Treasury's ability to monitor the agency's compliance with DCIA. To help ensure that CMS promptly refers all eligible delinquent Medicare debts to PSC, as we recommended in September 2000, and that all benefits from closed-out debts are realized, we recommend that the administrator of CMS

establish and implement policies and procedures to monitor contractors' implementation of CMS's May 2001 instructions to ensure the prompt referral of eligible MSP debts;

implement changes to CMS's non-MSP debt-tracking systems so that CMS personnel will be better able to monitor contractors' referral of eligible non-MSP debts as required by CMS's April 2001 instructions to contractors;

develop and implement a comprehensive referral plan for all eligible delinquent Medicare debts that includes time frames for promptly referring all types of debts, including MSP liability and Part A claims adjustment debts;

perform an assessment of MSP debts being closed out because they are more than 6 years and 3 months delinquent to determine whether to pursue collection action on the debts, and document the results of the assessment;

establish and implement policies and procedures for reporting closed-out Medicare debts, when appropriate, to IRS; and

validate the accuracy of eligible debt amounts reported in the Medicare Trust Fund TROR by establishing a process that ensures, among other things, (1) accurate reporting of the aging of certain delinquent debts, (2) accurate and complete reporting of debts excluded from referral requirements, and (3) verification of data entry for referral amounts.

In written comments on a draft of this report, CMS agreed with five of our six recommendations and summarized actions taken or planned to address those five. CMS expressed confidence that it would attain its goal of referring all eligible debt to Treasury by year-end as part of its overall financial plan. Regarding our recommendation to assess closed-out MSP debts that were more than 6 years and 3 months delinquent to determine whether to pursue collection action on them, CMS stated that further collection efforts would not be cost-effective.
According to CMS, medical services at issue in these MSP debts are typically from the early 1990s and often involve Medicare services from the mid- to late 1980s. CMS indicated that the costs of validating the debts and the costs and fees associated with DCIA cross-servicing and TOP were too great to justify additional collection efforts. However, as we stated in the report, CMS could not provide any documentation to support its position that it is not cost-effective to attempt to collect older MSP debts, and CMS did not test this assumption in its MSP pilot project. CMS's efforts to collect this debt prior to close-out were not adequate. The Federal Claims Collection Standards require that before terminating collection activity, agencies are to pursue all appropriate means of collection and determine, based on the results of the collection activity, that the debt is uncollectible. According to discussions with Medicare contractor officials, the collection activity for many of these MSP debts was limited to issuance of demand letters, which does not satisfy the requirement that all appropriate means of collection action be pursued on debts. In addition, most of the closed-out MSP debts at the Medicare contractors we visited were less than 10 years delinquent and therefore could have been referred to PSC for collection action, including reporting to TOP. As such, we continue to believe that CMS should assess MSP debt to determine whether additional collection activity is appropriate in light of the minimal prior collection activity. As agreed with your office, unless you announce its contents earlier, we plan no further distribution of this report until 30 days after its issuance date. At that time, we will send copies to the chairmen and ranking minority members of the Senate Committee on Governmental Affairs and the House Committee on Government Reform and to the ranking minority member of your subcommittee. We will also provide copies to the secretary of health and human services, the inspector general of health and human services, the administrator of the Centers for Medicare & Medicaid Services, and the secretary of the treasury. We will then make copies available to others upon request. If you have any questions about this report, please contact me at (202) 512-3406 or Kenneth Rupar, assistant director, at (214) 777-5600. Additional key contributors to this assignment were Matthew Valenta and Tanisha Stewart.
| The Debt Collection Improvement Act (DCIA) of 1996 requires that agencies refer eligible debts delinquent more than 180 days that they have been unable to collect to the Department of the Treasury for offset against federal payments and to Treasury or a Treasury-designated debt collection center for cross-servicing. The Centers for Medicare and Medicaid Services (CMS) made progress in referring eligible delinquent debts for collection during fiscal year 2001. Much of the referral volume came late in the year, however, and substantial unreferred balances remained at the end of the fiscal year. Inadequate procedures and controls hampered prompt identification and referral of both eligible Medicare Secondary Payer (MSP) debts and non-MSP debts. The delayed referral of non-MSP debts resulted from problems with the CMS debt-referral system, and the low level of MSP debt referrals resulted primarily from limited contractor efforts and insufficient CMS monitoring of contractor performance. Although GAO did not test whether selected CMS debts had been reasonably excluded from referral and reached no overall conclusion about the appropriateness of CMS exclusions, GAO found that CMS did not report reliable Medicare debt information to the Treasury Department as of September 30, 2000.
You are an expert at summarizing long articles. Proceed to summarize the following text:
The company formation process is governed and executed at the state level. Formation documents are generally filed with a secretary of state's office and are commonly called articles of incorporation (for corporations) or articles of organization (for limited liability companies, or LLCs). These documents, which set out the basic terms governing the company's existence, are matters of public record. According to our survey results, in 2004, 869,693 corporations and 1,068,989 LLCs were formed in the United States. See appendix I for information on the numbers of corporations and LLCs formed in each state. Appendix II includes information on states' company formation processing times and fees. Although specific requirements vary, states require minimal information on formation documents. Generally, the formation documents, or articles, must give the company's name, an address where official notices can be sent, share information (for corporations), and the names and signatures of the persons incorporating. States may also ask for a statement on the purpose of the company and a principal office address on the articles. Most states also require companies to file periodic reports to remain active. These reports are generally filed either annually or biennially. Although individuals may submit their own company filing documents, third-party agents may also play a role in the process. Third-party agents include both company formation agents, who file the required documents with a state on behalf of individuals or their representatives, and agents for service of process, who receive legal and tax documents on behalf of a company. Agents can be individuals or companies operating in one state or nationally. They may have only a few clients or thousands of clients. As a result, the incorporator or organizer listed on a company's formation documents may be the agent who is forming the company on behalf of the owners or an individual affiliated with the company being formed. Businesses may be incorporated or unincorporated. A corporation is a legal entity that exists independently of its shareholders—that is, its owners or investors—and that limits their liability for business debts and obligations and protects their personal assets. Management may include officers—chief executive officers, secretaries, and treasurers—who help direct a corporation's day-to-day operations. LLCs are unincorporated businesses whose members are considered the owners, and either members acting as managers or outside managers hired by the company take responsibility for making decisions. Beneficial owners of corporations or LLCs are the individuals who ultimately own and control the business entity. Our survey revealed that most states do not collect information on company ownership (see fig. 1). No state collects ownership information on formation documents for corporations, and only four—Alabama, Arizona, Connecticut, and New Hampshire—request some ownership information on LLCs. Most states require corporations and LLCs to file periodic reports, but these reports generally do not include ownership information. Three states (Alaska, Arizona, and Maine) require, in certain cases, the name of at least one owner on periodic reports from corporations, and five states require companies to list at least one member on periodic reports from LLCs.
However, if an LLC has members that are acting as managers of the company (managing members), ownership information may be available on the formation documents or periodic reports in states that require manager information to be listed. States usually do not require information on company management in the formation documents, but most states require this information on periodic reports (see fig. 2). Less than half of the states require the names and addresses of company management on company formation documents. Two states require some information on officers on company formation documents, and 10 require some information on directors. However, individuals named as directors may be nominee directors who act only as instructed by the beneficial owner. For LLCs, 19 states require some information on the managers or managing members on formation documents. Most states require the names and addresses of corporate officers and directors and of managers of LLCs on periodic reports. For corporations, 47 states require some information about the corporate officers, and 38 states require some information on directors on periodic reports. For LLCs, 28 states require some information about managers or managing members on the periodic reports. In addition to states, third-party agents may also have an opportunity to collect ownership or management information when a company is formed. Third-party agents we spoke with generally said that beyond contact information for billing the company and for forwarding legal and tax documents, they collect only the information states require for company formation documents or periodic reports. Several agents told us that they rarely collected information on ownership because the states do not require it. Further, one agent said it was not necessary to do the job. In general, agents said that they also collected only the management information that states required. However, if they were serving as the incorporator, agents would need to collect the names of managers in order to officially pass on the authority to conduct business to the new company principals. A few agents said that even when they collected information on company ownership and management, they might not keep records of it, in part because company documents filed with the state are part of the public record. One agent said that he did not need to bear the additional cost of storing such information. According to our survey, states do not verify the identities of the individuals listed on the formation documents or screen names using federal criminal records or watch lists. Nearly all of the states reported that they review filings for the required information, fees, and availability of the proposed company name. Many states also reported that they review filings to ensure compliance with state laws, and a few states reported that they direct staff to look for suspicious activity or fraud in company filings. However, most states reported they did not have the investigative authority to take action if they identified suspicious information. For example, if something appeared especially unusual, two state officials said that they referred the issue to state or local law enforcement or the Department of Homeland Security. While states do not verify the identities of individuals listed on company formation documents, 10 states reported having the authority to assess penalties for providing false information on their company formation documents.
One state official provided an example of a case in which state law enforcement officials charged two individuals with, among other things, perjury for providing false information about an agent on articles of incorporation. In addition, our survey shows that states do not require agents to verify the information collected from their clients. Most states have basic requirements for agents for service of process, but overall states exercise limited oversight of agents. Most states indicated on our survey that agents for service of process must meet certain requirements, such as having a physical address in the state or being a state resident. A couple of states go further and have registration requirements for agents operating within their boundaries. Under a law that was enacted after some agents gave false addresses for their offices, Wyoming requires agents serving more than five corporations to register with the state annually. California law requires any corporation serving as an agent for service of process to file a certificate with the Secretary of State's office and to list the California address where process can be served and the name of each employee authorized to accept process. Delaware has a contractual relationship with approximately 40 agents that allows them, for a fee and under set guidelines, access to the state's database to enter or find company information. Agents we interviewed said that since states do not require them to do so, they generally do not verify the information provided, screen names against watch lists, or require picture identification of company officials. One agent said that his firm generally relied on the information it received and did not feel a need to question it. However, we found a few exceptions. One agent collected a federal tax identification number (TIN), company ownership information, and individual identification and citizenship status from clients from unfamiliar countries. Another agent we interviewed required detailed information on company principals, certified copies of their passports, proof of address, and a reference letter from a bank from certain international clients. A few agents said that they used the Office of Foreign Assets Control (OFAC) list to screen names on formation documents or on other documents required for other services provided by their company. The agents said they took these additional steps for different reasons. One agent wanted to protect the agency, while other agents said that the Delaware Secretary of State encouraged using the OFAC list to screen names. One agent felt the additional requirements were not burdensome. However, some agents found the OFAC list difficult to use and saw using it as a potentially costly endeavor. OFAC officials told us that they had also heard similar concerns from agents. Law enforcement officials and others have indicated that shell companies have become popular tools for facilitating criminal activity, particularly laundering money. A December 2005 report issued by several federal agencies, including the Departments of Homeland Security, Justice, and the Treasury, analyzed the role shell companies may play in laundering money in the United States. Shell companies can aid criminals in conducting illegal activities by providing an appearance of legitimacy and may provide access to the U.S. financial system through correspondent bank accounts.
For example, the Financial Crimes Enforcement Network (FinCEN) found in a December 2005 enforcement action that the New York branch of ABN AMRO, a banking institution, did not have an adequate anti-money laundering program and had failed to monitor approximately 20,000 funds transfers—with an aggregate value of approximately $3.2 billion—involving the accounts of U.S. shell companies and institutions in Russia or other former republics of the Soviet Union. But determining the extent of the criminal use of U.S. shell companies is difficult. Shell companies are not tracked by law enforcement agencies because simply forming them is not a crime. However, law enforcement officials told us that information they had seen suggested that U.S. shell companies were increasingly being used for illicit activities. For example, FinCEN officials told us they had seen many suspicious activity reports (SAR) filed by financial institutions that potentially implicated U.S. shell companies. One report cited hundreds of SARs filed between April 1996 and January 2006 that involved shell companies and resulted in almost $4 billion in activity. During investigations of suspicious activity, law enforcement officials may obtain some company information from agents or states, either from states' Internet sites or by requesting copies of filings. According to some law enforcement officials we spoke with, information on the forms, such as the names and addresses of officers and directors, might provide productive leads, even without explicit ownership information. Law enforcement officials also sometimes obtain additional company information, such as contact addresses and methods of payment, from agents, although one state law enforcement official said the agents might tell their clients about the investigation. In some cases, the actual owners may include their personal information on official documents. For example, in an IRS case a man in Texas used numerous identities and corporations formed in Delaware, Nevada, and Texas to sell or license a new software program to investment groups. He received about $12.5 million from investors but never delivered the product to any of the groups. The man used the corporations to hide his identity, provide a legitimate face to his fraudulent activities, and open bank accounts to launder the investors' money. IRS investigators found from state documents that he had incorporated the companies himself and often included his coconspirators as officers or directors. The man was sentenced to 40 years in prison. In other cases, law enforcement officials may have evidence of a crime but may not be able to connect an individual to the criminal action without ownership information. For example, an Arizona law enforcement official who was helping to investigate an environmental spill that caused $800,000 in damage said that investigators could not prove who was responsible for the damage because the suspect had created a complicated corporate structure involving multiple company formations. This case was not prosecuted because investigators could not identify critical ownership information. Most of the officials we interviewed said they had also worked on cases that reached dead ends because of the lack of ownership information. States and agents recognized the positive impacts of collecting ownership information when companies are formed. As previously noted, law enforcement investigations could benefit from knowing who owns and controls a company.
In addition, a few state officials said that they could be more responsive to consumer demands for this information if it were on file. One agent suggested that requiring agents to collect more ownership information could discourage dishonest individuals from using agents and could reduce the number of unscrupulous individuals in the industry. However, state officials and agents we surveyed and interviewed indicated that collecting and verifying ownership information could have negative effects. These could include the following:

Increased time, costs, and workloads for state offices and agents: Many states reported that the time needed to review and approve company formations would increase and said that states would incur costs for modifying forms and data systems. Further, officials said that states did not have the resources and staff did not have the skills to verify the information submitted on formation documents.

Derailed business dealings: A few state and some private sector officials noted that an increase in the time and costs involved in forming a company might reduce the number of companies formed, particularly small businesses. One state official commented that such requirements would create a burden for honest business people but would not deter criminals.

Lost state revenue: Some state officials and others we interviewed felt that if all state information requirements were not uniform, the states with the most stringent requirements could lose business to other states or even countries, reducing state revenues.

Lost business for agents: Individuals might be more likely to form their own companies and serve as their own agents.

Agents also indicated that it might be difficult to collect and verify information on company owners because they often were in contact only with law firms and not company officials during the formation process. In addition, some state officials noted that any change in requirements for obtaining or verifying information, or the fees charged for company formation, would require state legislatures to pass new legislation and grant company formation offices new authority. Further, state and private sector officials pointed out that ownership information collected at formation or on periodic reports might not be complete or up to date because it could change frequently. Finally, as noted, some states do not require periodic reports, and law enforcement officials noted that a shell company being used for illicit purposes might not file required periodic reports in any case. Law enforcement officials told us that many companies under investigation for suspected criminal activities had been dissolved by the states in which they were formed for failing to submit periodic reports. In addition, since a company can be owned by another company, the name provided may be that of another company rather than an individual. We also found that state officials, agents, and other industry experts felt that the need to access information on companies must be weighed against privacy issues. Company owners may want to maintain their privacy, which state statutes have traditionally permitted, in part to protect owners from lawsuits against them in their personal capacity. Some business owners may also seek to protect personal assets through corporations and LLCs. One state law enforcement official also noted that if more information were easily available, criminals and con artists could take advantage of it and target companies for scams.
Although business owners might be more willing to provide ownership information if it would not be disclosed in the public record, some state officials we interviewed said that since all information filed with their office is a matter of public record, keeping some information private would require new legislative authority. The officials added that storing new information would be a challenge because their data systems were not set up to maintain confidential information. However, a few states described procedures in which certain information could be redacted from the public record or from online databases. In our review, state officials, agents, and other experts in the field identified three other potential sources of company ownership information, but each of these sources also has drawbacks. First, company ownership information may be available in internal company documents. According to our review of state statutes, all states require corporations to maintain internal company documents, such as shareholder lists. Also, according to industry experts, LLCs usually prepare and maintain operating agreements. These documents are generally not public records, but law enforcement officials can subpoena them to obtain ownership information. However, accessing these lists may be problematic, and the documents themselves might not be accurate and might not reveal the true beneficial owners of a company. In some cases, the documents may not even exist. For example, law enforcement officials said that shell companies may not prepare these documents and that U.S. officials may not have access to them if the company is located in another country. In addition, the shareholder list could include nominee shareholders and may not reflect any changes in shareholders. In states that allow bearer shares, companies may not even list the names of the shareholders. Finally, law enforcement officials may not want to request these documents in order to avoid tipping off a company about an investigation. Second, we were told that financial institutions may have ownership information on some companies. The Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001 (USA PATRIOT Act) established minimum standards for financial institutions to follow when verifying the identity of their customers. For customers that are companies, this information includes the name of the company, its physical address (for instance, its principal place of business), and an identifying number such as the tax identification number. Financial institutions must also develop risk-based procedures for verifying the identity of each customer. However, according to financial services industry representatives, conducting due diligence on a company absorbs time and resources, could be an added burden to an industry that is already subject to numerous regulations, and may result in losing a customer. Industry representatives also noted that ownership information might change after the account was opened and that not all companies open bank or brokerage accounts. Finally, correspondent accounts could create opportunities to hide the identities of the account holders from the banks themselves. Third, the Internal Revenue Service was mentioned as another potential source of company ownership information for law enforcement, but IRS officials pointed to several limitations with their agency's data.
First, IRS may not have information on all companies formed. For example, not all companies are required to submit tax forms that include company ownership information. Second, IRS officials reported that the ownership information the agency collects might not be complete or up to date and the owner listed could be another company. Third, law enforcement officials could have difficulty accessing IRS taxpayer information, since access by federal and state law enforcement agencies outside of IRS investigations is restricted by law. IRS officials commented that collecting additional ownership and management information on IRS documents would provide IRS investigators with more detail, but their ability to collect and verify such information would depend on the availability of resources. In preparing our April 2006 report, we encountered a variety of legitimate concerns about the merits of collecting ownership information on companies formed in the United States. On the one hand, federal law enforcement agencies were concerned about the existing lack of information, because criminals can easily use shell companies to mask the identities of those engaged in illegal activities. From a law enforcement perspective, having more information on company ownership would make using shell companies for illicit activities harder, give investigators more information to use in pursuing the actual owners, and could improve the integrity of the company formation process in the United States. On the other hand, states and agents were concerned about increased costs, potential revenue losses, and owners' privacy if information requirements were increased. Collecting more information and approving applications would require more time and resources, possibly reducing the number of business startups, and could be considered a threat to the current system, which values the protection of privacy and individuals' personal assets. Any requirement that states, agents, or both collect more ownership information would need to balance these conflicting concerns and be uniformly applied in all U.S. jurisdictions. Otherwise, those wanting to set up shell companies for illicit activities could simply move to the jurisdiction that presented the fewest obstacles, undermining the intent of the requirement. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions that you or other members of the committee may have at this time. For further information regarding this testimony, please contact me at (202) 512-8678 or [email protected]. Individuals making contributions to this testimony include Kay Kuhlman, Assistant Director; Emily Chalmers; Jennifer DuBord; Marc Molino; Jill Naamane; and Linda Rego. Historically, the corporation has been the dominant business form, but recently the limited liability company (LLC) has become increasingly popular. According to our survey, 8,908,519 corporations and 3,781,875 LLCs were on file nationwide in 2004. That same year, a total of 869,693 corporations and 1,068,989 LLCs were formed. Figure 3 shows the number of corporations and LLCs formed in each state in 2004. Five states—California, Delaware, Florida, New York, and Texas—were responsible for 415,011 (47.7 percent) of the corporations and 310,904 (29.1 percent) of the LLCs. Florida was the top formation state for both corporations (170,207 formed) and LLCs (100,070) in 2004. New York had the largest number of corporations on file in 2004 (862,647) and Delaware the largest number of LLCs (273,252).
Data from the International Association of Commercial Administrators (IACA) show that from 2001 to 2004, the number of LLCs formed increased rapidly—by 92.3 percent—although the number of corporations formed increased only 3.6 percent. Company formation and reporting documents can be submitted in person or by mail, and many states also accept filings by fax. Review and approval times can depend on how documents are submitted. For example, a District of Columbia official told us that a formation document submitted in person could be approved in 15 minutes, but a document that was mailed might not be approved for 10 to 15 days. Most states reported that documents submitted in person or by mail were approved within 1 to 5 business days, although a few reported that the process took more than 10 days. Officials in Arizona, for example, told us that it typically took the office 60 days to approve formation documents because of the volume of filings the office received. In 36 states, company formation documents, reporting documents, or both can be submitted through electronic filing (fig. 4 shows the states that provide a Web site for filing formation documents or periodic reports). In addition, some officials indicated that they would like or were planning to offer electronic filing in the future. As shown in table 1, in many cases states charge the same or nearly the same fee for forming a corporation or an LLC. In others, such as Illinois, the fee is substantially different for the two business forms. We found that in two states, Nebraska and New Mexico, the fee for forming a corporation may fall into a range. In these cases, the actual fee charged depends on the number of shares the new corporation will have. The median company formation fee is $95, and fees for filing periodic reports range from $5 to $500. Thirty states reported offering expedited service for an additional fee. Of those, most responded that with expedited service, filings were approved either the same day or the day after an application was filed. Two states reported having several expedited service options. Nevada offers 24-hour expedited service for an additional $125 above the normal filing fees, 2-hour service for an extra $500, and 1-hour, or "while you wait," service for an extra $1,000. Delaware offers same-day service for $100, next-day service for $50, 2-hour service for $500, and 1-hour service for $1,000. | Companies, which are the basis of most commercial activities in market-based economies, may be used for illicit as well as legitimate purposes. Because companies can be used to hide activities such as money laundering, some states have been criticized for requiring too little information about companies when they are formed, especially concerning owners.
This testimony draws on GAO's April 2006 report Company Formations: Minimal Ownership Information Is Collected and Available (GAO-06-376), which addressed (1) the information states and other parties collect on companies, (2) law enforcement concerns about the role of companies in illicit activities and the information available on owners, and (3) the implications of collecting more ownership information. GAO surveyed all 50 states and the District of Columbia, reviewed state laws, and interviewed a variety of industry, law enforcement, and other government officials. Most states do not require ownership information at the time a company is formed or on the annual and biennial reports most corporations and limited liability companies (LLC) must file. Four of the 50 states and the District of Columbia require some information on members (owners) of LLCs. Some states require companies to list information on directors, officers, or managers, but these persons are not always owners. Nearly all states screen company filings for statutorily required information such as the company's name and an address where official notices can be sent, but no states verify the identities of company officials. Third-party agents may submit formation documents for a company but usually collect only billing and statutorily required information and rarely verify it. Federal law enforcement officials are concerned that criminals are increasingly using U.S. "shell" companies—companies with generally no operations—to conceal their identities and illicit activities. Though the magnitude of the problem is hard to measure, officials said that such companies are increasingly involved in criminal investigations at home and abroad. The information states collect on companies has been helpful in some cases, as names on the documents can generate additional leads. But some officials said that available information was limited and that they had closed cases because the owners of a company under investigation could not be identified. State officials and agents said that collecting company ownership information could be problematic. Some noted that collecting such information could increase the cost and time involved in approving company formations. A few states and agents said that they might lose business to other states, countries, or agents that had less stringent requirements. Finally, officials and agents were concerned about compromising individuals' privacy, as information on company filings that had historically been protected would become part of the public record. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
CMS’s method of adjusting payments to MA plans to reflect beneficiary health status has changed over time. Prior to 2000, CMS adjusted MA payments based only on beneficiary demographic data. From 2000 to 2003, CMS adjusted MA payments using a model that was based on a beneficiary’s demographic characteristics and principal inpatient diagnosis. In 2004, CMS began adjusting payments to MA plans based on the CMS-HCC model.conditions, are groups of medical diagnoses where related groups of diagnoses are ranked based on disease severity and cost. The CMS- HCC model adjusts MA payments more accurately than previous models HCCs, which represent major medical because it includes more comprehensive information on beneficiaries’ health status. The CMS-HCC risk adjustment model uses enrollment and claims data from Medicare FFS. The model uses beneficiary characteristic and diagnostic data from a base year to calculate each beneficiary’s risk For example, CMS used MA beneficiary scores for the following year.demographic and diagnostic data for 2007 to determine the risk scores used to adjust payments to MA plans in 2008. CMS estimated that 3.41 percent of 2010 MA beneficiary risk scores was attributable to differences in diagnostic coding between MA and Medicare FFS since 2007. To calculate this percentage, CMS estimated the annual difference in disease score growth between MA and Medicare FFS beneficiaries for three different groups of beneficiaries who were either enrolled in the same MA plan or in Medicare FFS from 2004 to 2005, 2005 to 2006, and 2006 to 2007. CMS accounted for differences in age and mortality when estimating the difference in disease score growth between MA and Medicare FFS beneficiaries for each period. Then, CMS calculated the average of the three estimates.estimate to 2010 MA beneficiaries, CMS multiplied the average annual difference in risk score growth by its estimate of the average length of time that 2010 MA beneficiaries had been continuously enrolled in MA plans over the previous 3 years, and CMS multiplied this result by 81.8 percent, its estimate of the percentage of 2010 MA beneficiaries who were enrolled in an MA plan in 2009 and therefore were exposed to MA coding practices. CMS implemented this same adjustment of 3.41 percent in 2011 and has announced it will implement this same adjustment in 2012. We found that diagnostic coding differences exist between MA plans and Medicare FFS and that these differences had a substantial effect on payment to MA plans. We estimated that risk score growth due to coding differences over the previous 3 years was equivalent to $3.9 billion to $5.8 billion in payments to MA plans in 2010 before CMS’s adjustment for coding differences. Before CMS reduced 2010 MA beneficiary risk scores, we found that these scores were at least 4.8 percent, and perhaps as much as 7.1 percent, higher than the risk scores likely would have been as a result of diagnostic coding differences, that is, if the same beneficiaries had been continuously enrolled in FFS (see fig. 1). Our estimates suggest that, after accounting for CMS’s 3.4 percent reduction to MA risk scores in 2010, MA risk scores were too high by at least 1.4 percent, and perhaps as much as 3.7 percent, equivalent to $1.2 billion and $3.1 billion in payments to MA plans. Our two estimates were based on different assumptions of the impact of coding differences over time. We found that the annual impact of coding differences for our study population increased from 2005 to 2008. 
Based on this trend, we projected risk score growth for the period 2008 to 2010 and obtained the higher estimate, 7.1 percent, of the cumulative impact of differences in diagnostic coding between MA and FFS. However, coding differences may reach an upper bound when MA plans code diagnoses as comprehensively as possible, so we produced the lower estimate of 4.8 percent by assuming that the impact of coding differences on risk scores remained constant and was the same from 2008 to 2010 as it was from 2007 to 2008. Plans with networks may have greater potential to influence the diagnostic coding of their providers, relative to plans without networks. Specifically, when we restricted our analysis to MA beneficiaries in plans with provider networks (HMOs, PPOs, and plans offered by PSOs), our estimates of the cumulative effect of differences in diagnostic coding between MA and FFS increased to an average of 5.5 or 7.8 percent of MA beneficiary risk scores in 2010, depending on the projection assumption for 2008 to 2010. Altering the year by which MA coding patterns had "caught up" to FFS coding patterns, from our original assumption of 2007 to 2005, had little effect on our results. Specifically, we estimated the cumulative impact of coding differences from 2005 to 2010 and found that our estimates for all MA plans increased slightly to 5.3 or 7.6 percent, depending on the projection assumption from 2008 to 2010. Our analysis estimating the cumulative impact of coding differences on 2010 MA risk scores suggests that this cumulative impact is increasing. Specifically, we found that from 2005 to 2008, the impact of coding differences on MA risk scores increased over time (see app. 1, table 1). Furthermore, CMS also found that the impact of coding differences increased from 2004 to 2008. While we did not have more recent data, the trend of coding differences through 2008 suggests that the impact of coding differences in 2011 and 2012 could be larger than in 2010. CMS analysis provided to us showed annual risk score growth due to coding differences to be 0.015 from 2004 to 2005, 0.015 from 2005 to 2006, 0.026 from 2006 to 2007, and 0.038 from 2007 to 2008. CMS's estimate of the impact of coding differences on 2010 MA risk scores was smaller than our estimate due to the collective impact of three methodological differences described below. For its 2011 and 2012 adjustments, the agency continued to use the same estimate of the impact of coding differences it used in 2010, which likely resulted in excess payments to MA plans. Three major differences between our and CMS's methodology account for the differences in our 2010 estimates. First, CMS did not include data from 2008. CMS initially announced the adjustment for coding differences in its advance notice for 2010 payment before 2008 data were available. While 2008 data became available prior to the final announcement of the coding adjustment, CMS decided not to incorporate 2008 data into its final adjustment. In its announcement for 2010 payment, CMS explains that it took a conservative approach for the first year that it implemented the MA coding adjustment. Incorporating 2008 data would have increased the size of CMS's final adjustment. Second, CMS did not take into account the increasing impact of coding differences over time.
However, without 2008 data, the increasing trend of the annual impact of coding differences is less apparent, which supports the agency's decision to use the average annual impact from 2004 to 2007 as a proxy for the annual impact from 2007 to 2010. Third, CMS only accounted for differences in age and mortality between the MA and FFS study populations. We found that accounting for additional beneficiary characteristics explained more variation in disease score growth, and consequently improved the accuracy of our risk score growth estimate. CMS did not update its estimate in 2011 and 2012 with more current data, even though data were available. CMS did not include 2008 data in its 2010 estimate due to its desire to take a conservative approach for the first year it implemented a coding adjustment, and the agency did not update its estimate for 2011 or 2012 due to concerns about the many MA payment changes taking place. While keeping the same level of adjustment for 2011 and 2012 maintains stability and predictability in MA payment rates, it also allows the accuracy of the adjustment to diminish in each year. Including more recent data would have improved the accuracy of CMS's 2011 and 2012 estimates because more recent data are likely to be more representative of the year in which an adjustment was made. By not updating its estimate with more current data, CMS also did not account for the additional years of cumulative coding differences in its estimate: 4 years for 2011 (2007 to 2011) and 5 years for 2012 (2007 to 2012). While CMS stated in its announcement for 2011 payment that it would consider accounting for additional years of coding differences, CMS officials told us they were concerned about incorporating additional years using a linear methodology because it would ignore the possibility that MA plans may reach a limit at which they could no longer code diagnoses more comprehensively. We think it is unlikely that this limit has been reached. Given the financial incentives that MA plans have to ensure that all relevant diagnoses are coded, the fact that CMS's 3.41 percent estimate is below our low estimate of 4.8 percent, and considering the increasing use of electronic health records to capture and maintain diagnostic information, the upper limit is likely to be greater than the 3 years CMS accounted for in its 2011 and 2012 estimates. In addition, less than 1.4 percent of MA enrollees in 2011 were enrolled in a plan without a network, suggesting that our slightly larger results based on only MA plans with a network are more accurate estimates of the impact of coding differences in 2011 and 2012. By continuing to implement the same 3.41 percent adjustment for coding differences in 2011 and 2012, we believe CMS likely substantially underestimated the impact of coding differences in 2011 and 2012, resulting in excess payments to MA plans.
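To make the arithmetic behind CMS's 3.41 percent figure concrete, the following sketch walks through the calculation described above. The three annual growth differences and the 81.8 percent exposure share come from the text; the average continuous-enrollment length is not reported here, so the value used is an assumption backed out to reproduce CMS's published result.

```python
# Illustrative reconstruction of CMS's 2010 coding-intensity adjustment,
# following the steps described in the text above. The three annual disease
# score growth differences (2004-05, 2005-06, 2006-07) and the 81.8 percent
# exposure share are from the report; the enrollment length is an ASSUMED
# value chosen to reproduce the published 3.41 percent.

annual_growth_diffs = [0.015, 0.015, 0.026]   # MA minus FFS disease score growth
avg_annual_diff = sum(annual_growth_diffs) / len(annual_growth_diffs)  # ~0.0187

avg_years_enrolled = 2.23  # ASSUMPTION: mean continuous MA enrollment, prior 3 years
share_exposed = 0.818      # share of 2010 MA enrollees also enrolled in MA in 2009

adjustment = avg_annual_diff * avg_years_enrolled * share_exposed
print(f"coding adjustment ~ {adjustment:.4f}")  # ~0.0341, i.e., about 3.41 percent
```

Strictly, this product is in disease score units and would be divided by the average 2010 MA risk score to express it as a percentage; omitting that division amounts to assuming an average risk score near 1.0.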
Risk adjustment is important to ensure that payments to MA plans adequately account for differences in beneficiaries' health status and to maintain plans' financial incentive to enroll and care for beneficiaries regardless of their health status or the resources they are likely to consume. For CMS's risk adjustment model to adjust payments to MA plans appropriately, diagnostic coding patterns must be similar between MA plans and Medicare FFS. We confirmed CMS's finding that differences in diagnostic coding caused risk scores for MA beneficiaries to be higher than those for comparable Medicare FFS beneficiaries in 2010. This finding underscores the importance of continuing to adjust MA risk scores to account for coding differences and ensuring that these adjustments are as accurate as possible. If an adjustment for coding differences is too low, CMS would pay MA plans more than it would pay providers in Medicare FFS to provide health care for the same beneficiaries. We found that CMS's 3.41 percent adjustment for coding differences in 2010 was too low, resulting in $1.2 billion to $3.1 billion in excess payments to MA plans due to coding differences. By not updating its methodology in 2011 or in 2012, CMS likely underestimated the impact of coding differences on MA risk scores to a greater extent in these years, resulting in excess payments to MA plans. If CMS does not update its methodology, excess payments due to differences in coding practices are likely to increase. To help ensure appropriate payments to MA plans, the Administrator of CMS should take steps to improve the accuracy of the adjustment made for differences in diagnostic coding practices between MA and Medicare FFS. Such steps could include, for example, accounting for additional beneficiary characteristics, including the most current data available, identifying and accounting for all years of coding differences that could affect the payment year for which an adjustment is made, and incorporating the trend of the impact of coding differences on risk scores. CMS provided written comments on a draft of this report, which are reprinted in appendix II. In its comments, CMS stated that it found our methodological approach and findings informative and suggested that we provide some additional information about how the coding differences between MA and FFS were calculated. In response, we added additional details to appendix I about the regression models used, the calculations used to generate our cumulative impact estimates, and the trend line used to generate our high estimate. CMS did not comment on our recommendation for executive action. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of HHS, interested congressional committees, and others. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
This appendix explains the scope and methodology that we used to address our objective of determining the extent to which differences, if any, in diagnostic coding between Medicare Advantage (MA) plans and Medicare fee-for-service (FFS) affected risk scores and payments to MA plans in 2010. To determine the extent to which differences, if any, in diagnostic coding between MA plans and Medicare FFS affected MA risk scores in 2010, we used Centers for Medicare & Medicaid Services (CMS) enrollment and risk score data from 2004 to 2008, the most current data available at the time of our analysis, and projected the estimated impact to 2010. For three periods (2005 to 2006, 2006 to 2007, and 2007 to 2008), we compared actual risk score growth for beneficiaries in our MA study population with the estimated risk score growth the beneficiaries would have had if they were enrolled in Medicare FFS. Risk scores for a given calendar year are based on beneficiaries' diagnoses in the previous year, so we identified our study population based on enrollment data for 2004 through 2007 and analyzed risk scores for that population for 2005 through 2008. Our MA study population consisted of a retrospective cohort of MA beneficiaries. We included MA beneficiaries who were enrolled in health maintenance organization (HMO), preferred provider organization (PPO), and private fee-for-service (PFFS) plans as well as plans offered by provider-sponsored organizations (PSO). Specifically, we identified the cohort of MA beneficiaries who were enrolled in MA for all of 2007 and followed them back for the length of their continuous enrollment to 2004. In addition, for beneficiaries who were enrolled in Medicare FFS and switched to MA in 2005, 2006, or 2007, we included data for 1 year of Medicare FFS enrollment immediately preceding their MA enrollment. Our MA study population included three types of beneficiaries, each of which we analyzed separately for each period: MA joiners—beneficiaries enrolled in Medicare FFS for the entire first year of each period and then enrolled in MA for all of the following year; MA plan stayers—beneficiaries enrolled in the same MA plan for the first and second year of the period; and MA plan switchers—beneficiaries enrolled in one MA plan for the first year of the period and a second MA plan in the following year. Our control population consisted of a retrospective cohort of FFS beneficiaries who were enrolled in FFS for all of 2007 and 2006. We followed these beneficiaries back to 2004 and included data for all years of continuous FFS enrollment. For both the study and control populations, we excluded data for years during which a beneficiary (1) was diagnosed with end-stage renal disease (ESRD) during the study year; (2) resided in a long-term care facility for more than 90 consecutive days; (3) died prior to July 1, 2008; (4) resided outside the 50 United States; Washington, D.C.; and Puerto Rico; or (5) moved to a new state or changed urban/rural status. We calculated the actual change in disease score—the portion of the risk score that is based on a beneficiary's coded diagnoses—for the MA study population for the following three time periods (in payment years): 2005 to 2006, 2006 to 2007, and 2007 to 2008. To estimate the change in disease scores that would have occurred if those MA beneficiaries were enrolled continuously in FFS, we used our control population to estimate a regression model that described how beneficiary characteristics influenced change in disease score.
In the regression model we used change in disease score (year 2 - year 1) as our dependent variable and included age, sex, hierarchical condition categories (HCC), HCC interaction variables, Medicaid status, and an indicator for whether the original reason for Medicare entitlement was disability as independent variables, as they are specified in the CMS-HCC model. We also included one urban and one rural variable for each of the 50 United States; Washington, D.C.; and Puerto Rico as independent variables to identify beneficiary residential location. Then we used these regression models and data on beneficiary characteristics for our MA study population to estimate the change in disease scores that would have occurred if those MA beneficiaries had been continuously enrolled in FFS. We identified the difference between the actual and estimated change in disease scores as attributable to coding differences between MA and FFS because the regression model accounted for other relevant factors affecting disease score growth (see table 1). To convert these estimates of disease score growth due to coding differences into estimates of the impact of coding differences on 2010 MA risk scores, we divided the disease score growth estimates by the average MA risk score in 2010. Because 2010 risk scores were not available at the time we conducted our analysis, we calculated the average MA community risk score for the most recent data available (risk score years 2005 through 2008) and projected the trend to 2010 to estimate the average 2010 MA risk score. We projected these estimates of the annual impact of coding differences on risk scores through 2010 using two different assumptions. One projection assumed that the annual impact of coding differences on risk scores was the same from 2008 to 2010 as it was from 2007 to 2008. The other projection assumed that the trend of increasing coding difference impact over 2005 to 2008 continued through 2010 (see fig. 2). To calculate the cumulative impact of coding differences on MA risk scores for 2007 through 2010, we summed the annual impact estimates for that period and adjusted each impact estimate to account for beneficiaries who disenrolled from the MA program before 2010. The result is the cumulative impact of coding differences from 2007 to 2010 on MA risk scores in 2010. We separately estimated the cumulative impact of coding differences from 2007 to 2010 on MA risk scores in 2010 for beneficiaries in MA plans with provider networks (HMOs, PPOs, and PSOs) because such plans may have a greater ability to affect provider coding patterns. We also performed an additional analysis to determine how sensitive our results were to our assumption that coding patterns for MA and FFS were similar in 2007. CMS believes that MA coding patterns may have been less comprehensive than FFS when the CMS-HCC model was implemented, and that coding pattern differences caused MA risk scores to grow faster than FFS; therefore, there may have been a period of "catch-up" before MA coding patterns became more comprehensive than FFS coding patterns. While the length of the "catch-up" period is not known, we evaluated the impact of assuming the actual "catch-up" period was shorter, and that MA and FFS coding patterns were similar in 2005. Specifically, we evaluated the impact of analyzing two additional years of coding differences by estimating the impact of coding differences from 2005 to 2010.
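The counterfactual estimation described in this appendix can be summarized in a short sketch. This is a minimal illustration, not the actual implementation: it assumes pandas DataFrames for the FFS control cohort and the MA study cohort, and the column names (and the abbreviated predictor list) are hypothetical.

```python
# Minimal sketch of the regression-based counterfactual described above.
# Assumes DataFrames `ffs` (control cohort) and `ma` (study cohort), each
# with a `score_change` column (year 2 disease score minus year 1) and
# beneficiary characteristics. Column names are hypothetical; the actual
# model also includes HCC indicators, HCC interactions, and state
# urban/rural variables.
import pandas as pd
from sklearn.linear_model import LinearRegression

PREDICTORS = ["age", "sex", "medicaid", "orig_disabled"]  # abbreviated list

def annual_coding_impact(ffs: pd.DataFrame, ma: pd.DataFrame,
                         avg_ma_risk_2010: float) -> float:
    """Disease score growth attributed to MA coding, as a share of the
    projected average 2010 MA risk score."""
    # Fit on the FFS control cohort: characteristics -> FFS score growth.
    model = LinearRegression().fit(ffs[PREDICTORS], ffs["score_change"])
    # Predict the FFS-counterfactual growth for the MA cohort.
    expected_ffs_growth = model.predict(ma[PREDICTORS])
    # The residual growth is attributed to coding differences, then scaled.
    return (ma["score_change"] - expected_ffs_growth).mean() / avg_ma_risk_2010
```

The annual impacts for the three periods would then be projected to 2010 under the flat and trend assumptions, adjusted for disenrollment, and summed to obtain the cumulative 2010 estimates.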
To quantify the impact of both our and CMS's estimates of coding differences on payments to MA plans in 2010, we used data on MA plan bids—plans' proposed reimbursement rates for the average beneficiary—which are used to determine payments to MA plans. We used these data to calculate total risk-adjusted payments for each MA plan before and after applying a coding adjustment, and then used the differences between these payment levels to estimate the percentage reduction in total projected payments to MA plans in 2010 resulting from adjustments for coding differences. Then we applied the percentage reduction in payments associated with each adjustment to the estimated total payments to MA plans in 2010 of $112.8 billion and accounted for reduced Medicare Part B premium payments received by CMS, which offset the reduction in MA payments (see table 2). The CMS data we analyzed on Medicare beneficiaries are collected from Medicare providers and MA plans. We assessed the reliability of the CMS data we used by interviewing officials responsible for using these data to determine MA payments, reviewing relevant documentation, and examining the data for obvious errors. We determined that the data were sufficiently reliable for the purposes of our study. In addition to the contact named above, Christine Brudevold, Assistant Director; Alison Binkowski; William Black; Andrew Johnson; Richard Lipinski; Elizabeth Morrison; and Merrile Sing made key contributions to this report. | The Centers for Medicare & Medicaid Services (CMS) pays plans in Medicare Advantage (MA)—the private plan alternative to Medicare fee-for-service (FFS)—a predetermined amount per beneficiary adjusted for health status. To make this adjustment, CMS calculates a risk score, a relative measure of expected health care costs, for each beneficiary. Risk scores should be the same among all beneficiaries with the same health conditions and demographic characteristics. Policymakers raised concerns that differences in diagnostic coding between MA plans and Medicare FFS could lead to inappropriately high MA risk scores and payments to MA plans. CMS began adjusting for coding differences in 2010. GAO (1) estimated the impact of any coding differences on MA risk scores and payments to plans in 2010 and (2) evaluated CMS's methodology for estimating the impact of these differences in 2010, 2011, and 2012. To do this, GAO compared risk score growth for MA beneficiaries with an estimate of what risk score growth would have been for those beneficiaries if they were in Medicare FFS, and evaluated CMS's methodology by assessing the data, study populations, study design, and beneficiary characteristics analyzed. GAO found that diagnostic coding differences exist between MA plans and Medicare FFS. Using data on beneficiary characteristics and regression analysis, GAO estimated that before CMS's adjustment, 2010 MA beneficiary risk scores were at least 4.8 percent, and perhaps as much as 7.1 percent, higher than they likely would have been if the same beneficiaries had been continuously enrolled in FFS. The higher risk scores were equivalent to $3.9 billion to $5.8 billion in payments to MA plans. Both GAO and CMS found that the impact of coding differences increased over time. This trend suggests that the cumulative impact of coding differences in 2011 and 2012 could be larger than in 2010. In contrast to GAO, CMS estimated that 3.4 percent of 2010 MA beneficiary risk scores were attributable to coding differences between MA plans and Medicare FFS.
CMS's adjustment for this difference avoided $2.7 billion in excess payments to MA plans. CMS's 2010 estimate differs from GAO's in that CMS's methodology did not include more current data, did not incorporate the trend of the impact of coding differences over time, and did not account for beneficiary characteristics other than age and mortality, such as sex, health status, Medicaid enrollment status, beneficiary residential location, and whether the original reason for Medicare entitlement was disability. CMS did not update its coding adjustment estimate in 2011 and 2012 to include more current data, to account for additional years of coding differences, or to incorporate the trend of the impact of coding differences. By continuing to implement the same 3.4 percent adjustment for coding differences in 2011 and 2012, CMS likely underestimated the impact of coding differences in 2011 and 2012, resulting in excess payments to MA plans. GAO's findings underscore the importance of both CMS continuing to adjust risk scores to account for coding differences and ensuring that those adjustments are as complete and accurate as possible. In its comments, CMS stated that it found GAO's findings informative. CMS did not comment on GAO's recommendation. GAO recommends that CMS improve the accuracy of its MA risk score adjustments by taking steps such as incorporating adjustments for additional beneficiary characteristics, using the most current data available, accounting for all relevant years of coding differences, and incorporating the effect of coding difference trends. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Internal Revenue Manual (IRM) describes the desired outcome of an income tax audit as the determination of the correct taxable income and tax liability of the person or entity under audit. In making these determinations, the auditor has a responsibility to both the audited taxpayer and all other taxpayers to conduct a quality audit. IRS uses nine audit standards, which have evolved since the 1960s, to define audit quality. These standards address several issues, such as the scope, techniques, technical conclusions, reports, and time management of an audit, as well as workpaper preparation. Each standard has one or more key elements. (See table I.1 in app. I for a list of these standards and their associated key elements.) Workpapers provide documentation on the scope of the audit and the diligence with which it was completed. According to the IRM, audit workpapers (1) assist in planning the audit; (2) record the procedures applied, tests performed, and evidence gathered; (3) provide support for technical conclusions; and (4) provide the basis for review by management. Audit workpapers also provide the principal support for the auditor's report, which is to be provided to the audited taxpayer, on findings and conclusions about the taxpayer's correct tax liability. The primary tool used by IRS to control quality under the nine standards is the review of ongoing audit work. This review is the responsibility of IRS' first-line supervisors, called group managers, who are responsible for the quality of audits done by the auditors they manage. By reviewing audit workpapers during the audit, group managers attempt to identify problems with audit quality and ensure that the problems are corrected. After an audit closes, IRS uses its Examination Quality Measurement System (EQMS) to collect information about the audit process, changes to the process, level of audit quality, and success of any efforts to improve the process and quality. EQMS staff are to review audit workpapers and assess the degree to which the auditor complied with the audit standards. To pass a standard, the audit must pass all of the key elements. Our observations about the adequacy of the audit workpapers and supervisory review during audits are based on our work during 1996 and 1997 on IRS' use of financial status audit techniques. Among other things, this work relied on a random sample of individual tax returns that IRS had audited. This sample excluded audits that were unlikely to use financial status audit techniques because the audit did not look at individual taxpayers' books and records. Such excluded audits involved those done at service centers and those that only passed through various types of tax adjustments from other activities (e.g., partnership audits and refund claims). This random sample included 354 audits from a population of about 421,000 audits that were opened from October 1994 through October 1995 and closed in fiscal years 1995 or 1996. Each audit covered one or more individual income tax returns. The sample of audits from our previous work focused on the frequency with which IRS auditors used financial status audit techniques, rather than on the adequacy of audit workpapers. Consequently, we did not do the work necessary to estimate the extent to which workpapers met IRS' workpaper standard for the general population of audits. However, our work did identify several cases in which audit workpapers in our sample did not meet IRS' workpaper standard.
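As a toy illustration of the EQMS scoring rule noted above, where an audit passes a standard only if it passes every key element, the sketch below contrasts the standard-level success rate with the per-element pass rate that the report discusses later. The element names and sample results are hypothetical.

```python
# Toy illustration of EQMS scoring: an audit passes a standard only if
# every key element passes. Element names and results are hypothetical.
sample_audits = [
    {"audit_trail": True,  "legibility": True, "adjustments_agree": False},
    {"audit_trail": True,  "legibility": True, "adjustments_agree": True},
    {"audit_trail": False, "legibility": True, "adjustments_agree": True},
]

def passes_standard(audit: dict) -> bool:
    return all(audit.values())  # one failed key element fails the standard

# Standard-level success rate: share of audits passing every element.
success_rate = sum(passes_standard(a) for a in sample_audits) / len(sample_audits)

# Per-element pass rate: share of audits passing each element individually.
element_rates = {k: sum(a[k] for a in sample_audits) / len(sample_audits)
                 for k in sample_audits[0]}

print(success_rate)   # 0.33: only one audit passes all three elements
print(element_rates)  # each element individually passes at 0.67 or 1.0
```

Because a single failed element fails the whole standard, the standard-level rate can sit well below every individual element's pass rate, which is why IRS tracks both measures.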
We held follow-up discussions about the workpaper and supervisory review requirements, as well as about our observations, with IRS Examination Division officials. On the basis of these discussions, we agreed to check for documentation of group manager involvement by examining employee performance files for nine of our sample audits conducted out of IRS' Northern California District Office to get a better idea of how the group managers handle their audit inventories and ensure quality. According to IRS officials, these files may contain documentation on case reviews by group managers even though such documentation may not be in the workpapers. We requested comments on a draft of this report from the Commissioner of Internal Revenue. On March 27, 1998, we received written comments from IRS, which are summarized at the end of this letter and are reproduced in appendix II. These comments have been incorporated into the report where appropriate. We did our work at IRS headquarters in Washington, D.C., and at district offices and service centers in Fresno and Oakland, CA; Baltimore, MD; Philadelphia, PA; and Richmond, VA. Our work was done between January and March 1998, in accordance with generally accepted government auditing standards. One of IRS' audit standards covers audit workpapers. In general, IRS requires the audit workpapers to support the conclusions that the auditor reached during an audit. On the basis of our review of IRS' audit workpapers, we found that IRS auditors did not always meet the requirements laid out under this workpaper standard. IRS' workpaper standard requires that workpapers provide the principal support for the auditor's report and document the procedures applied, tests performed, information obtained, and conclusions reached. The five key elements for this workpaper standard involve (1) fully disclosing the audit trail and techniques used; (2) being clear, concise, legible, and organized and ensuring that workpaper documents have been initialed, labeled, dated, and indexed; (3) ensuring that tax adjustments recorded in the workpapers agree with IRS Forms 4318 or 4700 and the audit report; (4) adequately documenting the audit activity records; and (5) appropriately protecting taxpayers' rights to privacy and confidentiality. The following are examples of some of the problems we found during our review of IRS audit workpapers: Tax adjustments shown in the workpapers, summaries, and reports did not agree. For example, in one audit, the report sent to the taxpayer showed adjustments for dependent exemptions and Schedule A deductions. However, neither the workpaper summary nor the workpapers included these adjustments. In another audit, the workpaper summary showed adjustments of about $25,000 in unreported wages, but the report sent to the taxpayer showed adjustments of only about $9,000 to Schedule C expenses. Required documents or summaries were not always in the workpaper bundle. For example, we found instances of missing or incomplete activity records and missing workpaper summaries. Workpapers that were in the bundle were not always legible or complete. The required information that was missing included the workpaper number, tax year being audited, date of the workpaper, and auditor's name or initials. Although we are unable to develop estimates of the overall quality of audit workpapers, IRS has historically found problems with the quality of its workpapers.
This observation is supported by evaluations conducted as part of IRS' EQMS, which during the past 6 years (1992-97) indicated that IRS auditors met all of the key elements of the workpaper standard in no more than 72 percent of the audits. Table 1 shows the percentage of audits reviewed under EQMS that met all the key elements of the workpaper standard. The success rate, as depicted in table 1, indicates whether all of the key elements within the standard were met. That is, if any one element is not met, the standard is not met. Another indicator of the quality of the audit workpapers is how often each element within a standard meets the criteria of that element. Table I.2 in appendix I shows this rate, which IRS calls the pass rate, for the key elements of the workpaper standard. Workpapers are an important part of the audit effort. They are a tool to use in formulating and documenting the auditor's findings, conclusions, and recommended adjustments, if any. Workpapers are also used by third-party reviewers as quality control and measurement instruments. Documentation of the auditor's methodology and support for the recommended tax adjustments are especially important when the taxpayer does not agree with the recommendations. In these cases, the workpapers are to be used to make decisions about how much additional tax is owed by the taxpayer. Inadequate workpapers may result in having the auditor do more work or even in having the recommended adjustment overturned. IRS' primary quality control mechanism is supervisory review of the audit workpapers to ensure adherence to the audit standards. However, our review of the workpapers in the sampled audits uncovered limited documentation of supervisory review. As a result, the files lacked documentation that IRS group managers reviewed workpapers during the audits to help ensure that the recommended tax adjustments were supported and verified, and that the audits did not unnecessarily burden the audited taxpayers. The IRM requires that group managers review the audit work to assess quality and ensure that audit standards are being met, but it does not indicate how or when such reviews should be conducted. Moreover, the IRM does not require that documentation of this review be maintained in the audit files. We found little documentation in the workpapers that group managers reviewed workpapers before sharing the audit results with the taxpayer. In analyzing the sampled audits, we recorded whether the workpapers contained documentation that a supervisor had reviewed the workpapers during the audit. We counted an audit as having documentation of being reviewed if the group manager made notations in the workpapers on the audit findings or results; we also counted audits in which the workpapers made some reference to a discussion with the group manager about the audit findings. On the basis of our analysis of the sampled audits closed during fiscal years 1995 and 1996, we estimated that about 6 percent of the workpapers in the sample population contained documentation of group manager review during the audits. In discussions about our estimate with IRS Examination Division officials, they noted that all unagreed audits (i.e., those audits in which the taxpayers do not agree with the tax adjustments) are to be reviewed by the group managers, and they pointed to the manager's initials on the notice of deficiency as documentation of this review.
We did not count reviews of these notices in our analysis because they occurred after IRS sent the original audit report to the taxpayer. If we assume that workpapers for all unagreed audits were reviewed, our estimate of the percentage of workpapers with documentation of being reviewed increases from 6 percent to about 26 percent. Further, we analyzed all unagreed audits in our sample to see how many had documentation of group manager review during the audit, rather than after the audit results were sent to the taxpayer; this would be the point at which the taxpayer either would agree or disagree with the results. We found documentation of such a review in 12 percent of the unagreed audits. The Examination Division officials also said that a group manager may review the workpapers without documentation of that review being recorded in the workpapers. Further, they said that group managers had limited time to review workpapers due to many other responsibilities. The officials also told us that group managers can be involved with audits through means other than review of the workpapers. They explained that these managers monitor their caseload through various processes, such as evaluations of auditors' performance during an audit or after it closes, monthly discussions with auditors about their inventory of audits, reviews of auditors' time charges, reviews of audits that have been open the longest, and visits to auditors located outside of the district office. The Examination Division officials also noted that any time the audit is expanded, such as by selecting another of the taxpayer's returns or adding a related taxpayer or return, this action must be approved by the group manager. According to these officials, these other processes may involve a review of audit workpapers, but not necessarily during the audit. We agreed that we would check for documentation of these other processes in our nine sample audits from IRS' District Office located in Oakland. We found documentation of workload reviews for one of these nine sample audits. In these monthly workload reviews, supervisors are to monitor time charges to an audit. In one other audit, documentation showed that a special unit within the Examination Division reviewed and made changes to the form used to record data for input into IRS' closed audits database. However, none of this documentation showed supervisory review of the audit workpapers. If any other forms of supervisory involvement with these audits had occurred, the documentation either had been removed from the employee performance file as part of IRS' standard procedure or was not maintained in a way that we could relate it back to a specific taxpayer. As a result, we do not know how frequently these other processes for supervisory involvement occurred and whether substantive reviews of the audits were part of these processes. IRS is currently drafting changes to the IRM relating to workpapers. In the draft instructions, managers are required to document managerial involvement. This documentation may include signatures, notations in the activity record, or summaries of discussions in the workpapers. When completed, this section is to become part of the IRM's section on examination of returns. According to an IRS official, comments from IRS' field offices on the draft changes are not due at headquarters until May 1998. IRS audits tax returns to ensure that taxpayers pay the correct amount of tax.
If auditors do quality work, IRS is more likely to meet this goal while minimizing the burden on taxpayers. Quality audits should also encourage taxpayers to comply voluntarily. Supervisory review during the audits is a primary tool in IRS' efforts to control quality. IRS requires group managers to ensure the quality of the audits, leaving much discretion over the frequency and nature of their reviews during an audit. IRS officials noted that group managers are to review workpapers if taxpayers disagree with the auditor's report on any recommended taxes. The IRM does not specifically require that all of these supervisory reviews be documented in the workpapers, even though generally accepted government auditing standards do require such documentation. However, recent draft changes to the IRM may address this issue by requiring such documentation. We found little documentation of such supervisory reviews, even though these reviews can help avoid various problems. For example, supervisory review could identify areas that contribute to IRS' continuing problems in creating audit workpapers that meet its standard for quality. Since fiscal year 1992, the quality of workpapers has been found wanting by IRS' EQMS. Inadequately documented workpapers raise questions about whether supervisory review is controlling audit quality as intended. These questions cannot be answered conclusively, however, because the amount of supervisory review cannot be determined. The lack of documentation on workpaper review raises questions about the extent of supervisory involvement with the audits. Proposed changes to the IRM's sections on examination of returns require documentation of management involvement in the audit process. We recommend that the IRS Commissioner require audit supervisors to document their review of audit workpapers as a control over the quality of audits and the associated workpapers. On March 25, 1998, we met with IRS officials to obtain comments on a draft of this report. These officials included the Acting Deputy Chief Compliance Officer, the Assistant Commissioner for Examination and members of his staff, and a representative from IRS' Office of Legislative Affairs. IRS documented its comments in a March 27, 1998, letter from the IRS Commissioner, which we have reprinted in appendix II. In this letter, IRS agreed to make revisions to the IRM instructions for the purpose of implementing our recommendation by October 1998. The letter included an appendix outlining adoption plans. The IRS letter also expressed two concerns with our draft report. First, IRS said our conclusion about the lack of evidence of supervisory review of audit workpapers was somewhat misleading and pointed to examples of other managerial practices, such as on-the-job visitations, that provide oversight and involvement in cases. We do not believe our draft report was misleading. As IRS acknowledges in its letter, when discussing the lack of documentation of supervisory review, we also described these other managerial practices. Second, IRS was concerned that our draft report appeared to consider these other managerial practices insufficient. Our draft report did not discuss the sufficiency of these practices but focused on the lack of documentation of supervisory review, including these other managerial practices. We continue to believe that documentation of supervisory review of workpapers is needed to help ensure quality control over the workpapers and audits.
At the March 25, 1998, meeting, IRS provided technical comments to clarify specific sections of the draft report that described IRS processes. IRS officials also discussed the distinction between supervisory review and documentation of that review. We have incorporated these comments into this report where appropriate. We are sending copies of this report to the Subcommittee's Ranking Minority Member, the Chairmen and Ranking Minority Members of the House Ways and Means Committee and the Senate Committee on Finance, various other congressional committees, the Director of the Office of Management and Budget, the Secretary of the Treasury, the IRS Commissioner, and other interested parties. We will also make copies available to others upon request. Major contributors to this report are listed in appendix III. If you have any questions concerning this report, please contact me at (202) 512-9110. The Office of Compliance Specialization, within the Internal Revenue Service's (IRS) Examination Division, has responsibility for Quality Measurement Staff operations and the Examination Quality Measurement System (EQMS). Among other uses, IRS uses EQMS to measure the quality of closed audits against nine IRS audit standards. The standards address the scope, audit techniques, technical conclusions, workpaper preparation, reports, and time management of an audit. Each standard includes additional key elements describing specific components of a quality audit. Table I.1 summarizes the standards and the associated key elements. Table I.1: Summary of IRS' Examination Quality Measurement System Auditing Standards (as of Oct. 1996) Measures whether consideration was given to the large, unusual, or questionable items in both the precontact stage and during the course of the examination. This standard encompasses, but is not limited to, the following fundamental considerations: absolute dollar value, relative dollar value, multiyear comparisons, intent to mislead, industry/business practices, compliance impact, and so forth. Measures whether the steps taken verified that the proper amount of income was reported. Gross receipts were probed during the course of examination, regardless of whether the taxpayer maintained a double entry set of books. Consideration was given to responses to interview questions, the financial status analysis, tax return information, and the books and records in probing for unreported income. Measures whether consideration was given to filing and examination potential of all returns required by the taxpayer including those entities in taxpayer's sphere of influence/responsibility. Required filing checks consist of the analysis of return information and, when warranted, the pick-up of related, prior and subsequent year returns. In accordance with Internal Revenue Manual 4034, examinations should include checks for filing information returns. Measures whether the issues examined were completed to the extent necessary to provide sufficient information to determine substantially correct tax. The depth of the examination was determined through inspection, inquiry, interviews, observation, and analysis of appropriate documents, ledgers, journals, oral testimony, third-party records, etc., to ensure full development of relevant facts concerning the issues of merit.
Interviews provided information not available from documents to obtain an understanding of the taxpayer's financial history, business operations, and accounting records in order to evaluate the accuracy of books/records. Specialists provided expertise to ensure proper development of unique or complex issues. Measures whether the conclusions reached were based on a correct application of tax law. This standard includes consideration of applicable law, regulations, court cases, revenue rulings, etc. to support technical/factual conclusions. Measures whether applicable penalties were considered and applied correctly. Consideration of the application of appropriate penalties during all examinations is required. Measures the documentation of the examination's audit trail and techniques used. Workpapers provided the principal support for the examiner's report and documented the procedures applied, tests performed, information obtained, and the conclusions reached in the examination. Measures the presentation of the audit findings in terms of content, format, and accuracy. Addresses the written presentation of audit findings in terms of content, format, and accuracy. All necessary information is contained in the report, so that there is a clear understanding of the adjustments made and the reasons for those adjustments. Measures the utilization of time as it relates to the complete audit process. Time is an essential element of the Auditing Standards and is a proper consideration in analyses of the examination process. The process is considered as a whole and at examination initiation, examination activities, and case closing stages. IRS uses the key element pass rate as one measure of audit quality. This measure computes the percentage of audits demonstrating the characteristics defined by the key element. According to IRS, the key element pass rate is the most sensitive measurement and is useful when describing how an audit is flawed, establishing a baseline for improvement, and identifying systemic changes. Table I.2 shows the pass rates for the key elements of the workpaper standard for fiscal years 1992 through 1997 for office and field audits. Table I.2: Key Element Pass Rate for EQMS Workpaper Standard for District Audits From Fiscal Years 1992-97 (fiscal year 1995 is reported in two halves, 10/94-3/95 and 4/95-9/95; n/a = not applicable; the key element "Disclosure" was added in the middle of fiscal year 1995). Kathleen E. Seymour, Evaluator-in-Charge; Louis G. Roberts, Senior Evaluator; Samuel H. Scrutchins, Senior Data Analyst.
| GAO reviewed the condition of the Internal Revenue Service's (IRS) audit workpapers, including the documentation of supervisory review. GAO noted that: (1) during its review of IRS' financial status audits, the workpapers did not always meet the requirements under IRS' workpaper standards; (2) standards not met in some audit workpapers included the expectation that: (a) the amount of tax adjustments recorded in the workpapers would be the same as the adjustment amounts shown in the auditor's workpaper summary and on the report sent to the taxpayer; and (b) the workpaper files would contain all required documents to support conclusions about tax liability that an auditor reached and reported to the taxpayer; (3) these shortcomings with the workpapers are not new; (4) GAO found documentation on supervisory review of workpapers prepared during the audits in an estimated 6 percent of the audits in GAO's sample; (5) in the remaining audits, GAO found no documentation that the group managers reviewed either the support for the tax adjustments or the report communicating such adjustments to the taxpayer; (6) IRS officials indicated that all audits in which the taxpayer does not agree with the recommended adjustments are to be reviewed by the group managers; (7) if done, this review would occur after the report on audit results was sent to the taxpayer; (8) even when GAO counts all such unagreed audits, those with documentation of supervisory review would be an estimated 26 percent of the audits in GAO's sample population; (9) GAO believes that supervisory reviews and documentation of such reviews are important because they are IRS' primary quality control process; (10) proper reviews done during the audit can help ensure that audits minimize burden on taxpayers and that any adjustments to taxpayers' liabilities are supported; (11) although Examination Division officials recognized the need for proper reviews, they said IRS group managers cannot review workpapers for all audits because of competing priorities; (12) these officials also said that group managers get involved in the audit process in ways that may not be documented in the workpapers; (13) they stated that these group managers monitor auditors' activities through other processes, such as by reviewing the time that auditors spent on an audit, conducting on-the-job visits, and talking to auditors about their cases and audit inventory; and (14) in these processes, however, the officials said that group managers usually were not reviewing workpapers or validating the calculations used to recommend adjustments before sending the audit results to the taxpayer. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
DOD’s health system, TRICARE, currently offers health care coverage to approximately 6.6 million active duty and retired military personnel under age 65 and their dependents and survivors. An additional 1.5 million retirees aged 65 and over can obtain care when space is available. TRICARE offers three health plans: TRICARE Standard, a fee-for-service plan; TRICARE Extra, a preferred provider plan; and TRICARE Prime, a managed care plan. In addition, TRICARE offers prescription drugs at no cost from MTF pharmacies and, with co-payments, from retail pharmacies and DOD’s National Mail Order Pharmacy. Retirees have access to all of TRICARE’s health plans and benefits until they turn 65 and become eligible for Medicare. Subsequently, they can only use military health care on a space-available basis, that is, when MTFs have unused capacity after caring for higher priority beneficiaries. However, MTF capacity varies from a full range of services at major medical centers to limited outpatient care at small clinics. Moreover, the amount of space available in the military health system has decreased during the last decade with the end of the Cold War and subsequent downsizing of military bases and MTFs. Recent moves to contain costs by relying more on military care and less on civilian providers under contract to DOD have also contributed to the decrease in space-available care. Although some retirees age 65 and over rely heavily on military facilities for their health care, most do not, and over 60 percent do not use military health care facilities at all. In addition to using DOD resources, retirees may receive care paid for by Medicare and other public or private insurance for which they are eligible. However, they cannot use their Medicare benefits at MTFs, and Medicare is generally prohibited by law from paying DOD for health care. Medicare is a federally financed health insurance program for persons age 65 and over, some people with disabilities, and people with end-stage kidney disease. Eligible beneficiaries are automatically covered by part A, which covers inpatient hospital, skilled nursing facility, and hospice care, as well as home health care that follows a stay in a hospital or skilled nursing facility. They also can pay a monthly premium to join part B, which covers physician and outpatient services as well as those home health services not covered under part A. Traditional Medicare allows beneficiaries to choose any provider that accepts Medicare payment and requires beneficiaries to pay for part of their care. Most beneficiaries have supplemental coverage that reimburses them for many costs not covered by Medicare. Major sources of this coverage include employer-sponsored health insurance; “Medigap” policies, sold by private insurers to individuals; and Medicaid, a joint federal-state program that finances health care for low-income people. The alternative to traditional Medicare, Medicare+Choice, offers beneficiaries the option of enrolling in managed care or other private health plans. All Medicare+Choice plans cover basic Medicare benefits, and many also cover additional benefits such as prescription drugs. Typically, these plans have limited cost sharing but restrict members’ choice of providers and may require an additional monthly premium. Under the Medicare subvention demonstration, DOD established and operated Medicare+Choice managed care plans, called TRICARE Senior Prime, at six sites. 
Enrollment in Senior Prime was open to military retirees enrolled in Medicare part A and part B who resided within the plan’s service area. About 125,000 dual eligibles (military retirees who were also eligible for Medicare) lived in the 40-mile service areas of the six sites—about one-fifth of all dual eligibles nationwide living within an MTF’s service area. DOD capped enrollment at about 28,000 for the demonstration as a whole. Over 26,000 enrolled—about 94 percent of the cap. In addition, retirees enrolled in TRICARE Prime could “age in” to Senior Prime upon reaching age 65, even if the cap had been reached, and about 6,800 did so. Beneficiaries enrolled in the program paid the Medicare part B premium, but no additional premium to DOD. Under Senior Prime, all primary care was provided at MTFs, although DOD purchased some hospital and specialty care from its network of civilian providers. Senior Prime enrollees received the same priority for care at the MTFs as younger retirees enrolled in TRICARE Prime. Care at the MTFs was free of charge for enrollees, but they had to pay any applicable cost-sharing amounts for care in the civilian network (for example, $12 for an office visit). The demonstration authorized Medicare to pay DOD for Medicare-covered health care services provided to retirees at an MTF or through private providers under contract to DOD. As established in the Balanced Budget Act of 1997 (BBA), capitation rates—fixed monthly payments for each enrollee—for the demonstration were discounted from what Medicare would pay private managed care plans in the same areas. However, to receive payment, DOD had to spend at least as much of its own funds in serving this dual-eligible population as it had in the recent past. The six demonstration sites are each in a different TRICARE region and include 10 MTFs that vary in size and types of services offered. (See table 1.) The five MTFs that are medical centers offer a wide range of inpatient services and specialty care as well as primary care. They accounted for over 75 percent of all enrollees in the demonstration, and the two San Antonio medical centers had 38 percent of all enrollees. MTFs that are community hospitals are smaller, have more limited capabilities, and could accommodate fewer Senior Prime enrollees. At these smaller facilities, the civilian network provides much of the specialty care. At Dover, the MTF is a clinic that offers only outpatient services, thus requiring all inpatient and specialty care to be obtained at another MTF or purchased from the civilian network. Compared with their access to care before the demonstration, many enrollees reported that their access to care overall—their ability to get care when they needed it—had improved. They reported better access to MTFs as well as to doctors. Although at the start of the demonstration enrollees had reported poorer access to care than nonenrollees, by the end of the demonstration about 90 percent of both groups said that they could get care when they needed it. Enrollees’ own views are supported by administrative data: they got more care than they had received from Medicare and DOD combined before the demonstration. However, most nonenrollees who had relied on MTFs before the demonstration were no longer able to rely on military health care. Most enrollees reported that their ability to get care when they needed it was not changed by the demonstration, but those who did report a change were more likely to say that their access to care—whether at MTFs or from the civilian network—had improved. 
(See table 2.) When asked specifically about their access to MTF care, those who had not used MTFs in the past reported the greatest improvement. (See figure 1.) About one-third of all enrollees said that their access to physicians had improved, and a significantly smaller fraction said that it had declined. For example, 32 percent of enrollees said that, under the demonstration, their primary care doctor’s office hours were more convenient, while 20 percent said they were less so. Similarly, enrollees said that they did not have to wait too long to get an appointment with a doctor and, once they reached the office, their doctor saw them more promptly. (See table 3.) For two aspects of access, however, Senior Prime enrollees’ experience was mixed. TRICARE has established standards for the maximum amount of time that should elapse in different situations between making an appointment and seeing a doctor: 1 month for a well-patient visit, 1 day for an urgent care visit, and 1 week for routine visits. According to TRICARE policy, MTFs should meet these standards 90 percent of the time. While Senior Prime met the standards for the time it took to get an appointment and see a doctor for well-patient visits (like a physical), it fell slightly short of the standard for urgent care visits (such as for an acute injury or illness like a broken arm or shortness of breath) and, more markedly, for routine visits (such as for minor injuries or illnesses like a cold or sore throat). (See table 4.) When asked about their ability to choose their own primary care doctors, enrollees were somewhat more likely to say that it was more difficult than before the demonstration. This is not surprising, in view of the fact that Senior Prime assigned a primary care doctor (or nurse) to each enrollee. However, regarding specialists, enrollees said that their choice of doctors had improved. Enrollees reported fewer financial barriers to access under Senior Prime. They said that their out-of-pocket spending decreased and was more reasonable than before. By the demonstration’s end, nearly two-thirds said that they had no out-of-pocket costs. Even at the smaller demonstration sites, where care from the civilian network, which required co-payments, was more common, about half of enrollees said they had no out-of-pocket costs. These enrollee reports of better access under Senior Prime are largely supported by DOD and Medicare administrative data. Enrollees received more services from Senior Prime than they had obtained before the demonstration from MTFs and Medicare combined. Specifically, their use of physicians increased from an average 12 physician visits per year before enrolling in Senior Prime to 16 visits per year after enrollment, and the number of hospital stays per person also increased by 19 percent. Enrollees’ use of services not only increased under Senior Prime—as did other measures of access to care—but exceeded the average level in the broader community. Enrollees used significantly more care than their Medicare fee-for-service counterparts. These differences cannot be explained by either age or health—enrollees were generally younger and healthier. Adjusted for demographics and health conditions, physician visits were 58 percent more frequent for Senior Prime enrollees than for their Medicare counterparts, and hospital stays were 41 percent more frequent. Nonetheless, enrollees’ hospital stays—adjusted for demographics and health conditions—were about 4 percent shorter. 
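The risk-adjusted comparisons reported above can be made concrete with a short sketch. This is an illustration of the general technique, not GAO's actual model (which is summarized in a later appendix): fit a count model on the fee-for-service comparison group, predict what people with the enrollees' characteristics would have used under fee-for-service, and compare that prediction with enrollees' actual use. All data, variable names, and coefficients below are hypothetical.

```python
# Minimal sketch of a risk-adjusted utilization comparison, using simulated
# data. All values here are hypothetical stand-ins, not GAO's analysis files.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical fee-for-service comparison sample: age, an HCC-style health
# score, and observed annual physician visits (all simulated).
n = 2000
age = rng.uniform(65, 90, n)
health = rng.gamma(2.0, 0.5, n)
ffs_X = sm.add_constant(np.column_stack([age, health]))
ffs_visits = rng.poisson(np.exp(-1.0 + 0.02 * age + 0.8 * health))

# Fit a Poisson count model on the fee-for-service group.
model = sm.GLM(ffs_visits, ffs_X, family=sm.families.Poisson()).fit()

# Hypothetical enrollee sample: predict the fee-for-service visits expected
# for people with the enrollees' characteristics, then compare with actual use.
m = 500
enr_X = sm.add_constant(np.column_stack([rng.uniform(65, 90, m),
                                         rng.gamma(2.0, 0.4, m)]))
expected = model.predict(enr_X)
enr_actual = rng.poisson(1.5 * expected)  # stand-in for observed enrollee use

# A ratio above 1 means heavier use than comparable fee-for-service
# beneficiaries would be expected to have.
print("actual / risk-adjusted expected:", enr_actual.sum() / expected.sum())
```

The ratio plays the same role as the adjusted percentages reported above: a value of roughly 1.6, for instance, would correspond to visits 58 percent more frequent than for comparable fee-for-service beneficiaries.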
We found three probable explanations for enrollees’ greater use of hospital and outpatient care: Lower cost-sharing. Research confirms the commonsense view that patients use more care if it is free. Whereas in traditional Medicare the beneficiary must pay part of the cost of care—for example, 20 percent of the cost of an outpatient visit—in Senior Prime all primary care and most specialty care is free. Lack of strong incentives to limit utilization. Although MTFs generally tried to restrain inappropriate utilization, they did not have strong financial incentives to do so. MTFs cannot spend more than their budget, but space-available care acts as a safety valve: that is, when costs appear likely to exceed funding, space-available care can be reduced while care to Senior Prime enrollees remains unaffected. MTFs also had no direct incentive to limit the use of purchased care, which is funded centrally, and the managed care contractors likewise lacked an incentive, since they were not at financial risk for Senior Prime. Practice styles. Military physicians’ training and experience, as well as the practice styles of their colleagues, also affect their readiness to hospitalize patients as well as their recommendations to patients about follow-up visits and referrals to specialists. Studies have shown that the military health system has higher utilization than the private sector. Given that military physicians tend to spend their careers in the military with relatively little exposure to civilian health care’s incentives and practices, it is not surprising that these patterns of high use would persist. Although nonenrollees generally were not affected by the demonstration, the minority who had been using space-available MTF care were affected because space-available care declined. This decline is shown in our survey results, and is confirmed by DOD’s estimate of the cost of space-available care, which decreased from $183 million in 1996 to $72 million in 1999, the first full year of the demonstration. However, for most nonenrollees, this decline was not an issue, because they did not use MTFs either before or during the demonstration. Furthermore, of those who depended on MTFs for all or most of their care before the demonstration, most enrolled in Senior Prime, thereby assuring their continued access to care. (See figure 2.) Since there was less space-available care than in the past, many of those who had previously used MTFs and did not enroll in Senior Prime were “crowded out.” Crowd-out varied considerably, depending both on the types of services that nonenrollees needed and the types of physicians and space available at MTFs. Nonenrollees who required certain services were crowded out while others at the same MTF continued to receive care. We focus on nonenrollees who experienced a sharp decline in MTF care: those who said they had received most or all of their care at MTFs before the demonstration but got no care or only some care at MTFs during the demonstration (a simple sketch of this classification follows below). Of those nonenrollees who had previously depended on MTFs for their care, over 60 percent (about 4,600 people) were crowded out. (See figure 3.) The small number of nonenrollees—10 percent of the total—who had depended on MTFs for their care before the demonstration limited crowd-out. (See figure 4.) Consequently, only a small proportion of all nonenrollees—about 6 percent—was crowded out. Somewhat surprisingly, a small number of nonenrollees who had not previously used MTFs began obtaining all or most of their care at MTFs. 
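The crowd-out classification just described reduces to a simple rule over paired survey answers. A minimal sketch follows; the response categories and records are hypothetical stand-ins for the actual survey coding.

```python
# Illustrative classification of "crowd-out" from paired survey answers.
# The category labels and records below are hypothetical; the rule mirrors
# the definition used in this report: all or most care at an MTF before the
# demonstration, but none or only some MTF care after it started.
BEFORE_DEPENDENT = {"all", "most"}
AFTER_REDUCED = {"none", "some"}

def crowded_out(mtf_use_before, mtf_use_after):
    """Return True if a nonenrollee fits the report's crowd-out definition."""
    return mtf_use_before in BEFORE_DEPENDENT and mtf_use_after in AFTER_REDUCED

# Hypothetical nonenrollee responses: (MTF use before, MTF use after).
nonenrollees = [("all", "none"), ("most", "some"), ("some", "some"), ("none", "none")]
n_crowded = sum(crowded_out(before, after) for before, after in nonenrollees)
print(f"{n_crowded} of {len(nonenrollees)} nonenrollees classified as crowded out")
```

Narrower and broader definitions, discussed in a later appendix, amount to tightening or loosening the two category sets.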
Although Medicare fee-for-service care increased for those who were crowded out of MTF care, the increase in Medicare outpatient care was not nearly large enough to compensate for the loss of MTF care. (See figure 5.) Retirees who were crowded out had somewhat lower incomes than other nonenrollees and were also less likely to have supplemental insurance, suggesting that some of them may have found it difficult to cover Medicare out-of-pocket costs. By the end of the initial demonstration period, less than half of all nonenrollees said they were able to get care at MTFs when they needed it, a modest decline from before the demonstration. Enrollees’ improved access to care had both positive and negative consequences. Many enrollees in Senior Prime reported that they were more satisfied with nearly all aspects of their care. Some results were neutral: enrollees’ self-reported health status did not change and health outcomes, such as mortality and preventable hospitalizations, were no better than those achieved by nonenrolled military retirees. However, enrollees’ heavy use of health services resulted in high per-person costs for DOD compared to costs of other Medicare beneficiaries. Satisfaction with almost all aspects of care increased for enrollees. Moreover, by the end of the demonstration, their satisfaction was generally as high as that of nonenrollees. Patients’ sense of satisfaction or dissatisfaction with their physicians reflects in part their perceptions of their physicians’ clinical and communication skills. Under Senior Prime, many enrollees reported greater satisfaction with both their primary care physicians and specialists. Specifically, enrollees reported greater satisfaction with their physicians’ competence and ability to communicate—to listen, explain, and answer questions, and to coordinate with other physicians about patients’ care. (See table 5.) Senior Prime did not appear to influence three key measures of health outcomes—the mortality rate, self-reported health status, and preventable hospitalizations. Mortality rate. Although there were slightly more deaths among nonenrollees, the difference between enrollees and nonenrollees disappears when we adjust for retirees’ age and their health conditions at the start of the demonstration. Health status. We also found that Senior Prime did not produce any improvement in enrollees’ self-reported health status. We base this on enrollees’ answers to our questions about different aspects of their health, including their ratings of their health in general and of specific areas, such as their ability to climb several flights of stairs. This finding is not surprising, given the relatively short time interval—an average of 19 months—between our two surveys. We also found that, like enrollees, nonenrollees did not experience a significant change in health status. Preventable hospitalizations. The demonstration did not have a clear effect on preventable hospitalizations—those hospitalizations that experts say can often be avoided by appropriate outpatient care. Among patients who had been hospitalized for any reason, the rate of preventable hospitalizations was slightly higher for Senior Prime enrollees than for their Medicare fee-for-service counterparts. However, when all those with chronic diseases—whether hospitalized or not—were examined, the rate among Senior Prime enrollees was lower. A less desirable consequence of enrollees’ access to care was its high cost for DOD. 
Under Senior Prime, DOD’s costs were significantly higher than Medicare fee-for-service costs for comparable patients and comparable benefits. These higher costs did not result from Senior Prime enrollees being sicker or older than Medicare beneficiaries. Instead, they resulted from heavier use of hospitals and, especially, greater use of doctors and other outpatient services. In other words, the increased ability of Senior Prime enrollees to see physicians and receive care translated directly into high DOD costs for the demonstration. From the perspective of enrollees, Senior Prime was highly successful. Their satisfaction with nearly all aspects of their care increased, and by the end of the demonstration enrollees were in general as satisfied as nonenrollees, who largely used civilian care. However, enrollees’ utilization and the cost of their care to DOD were both higher. Although subvention is not expected to continue, the demonstration raises a larger issue for DOD: can it achieve the same high levels of patient satisfaction that it reached in Senior Prime while bringing its utilization and costs closer to the private sector’s? We provided DOD and CMS an opportunity to comment on a draft of this report, and both agencies provided written comments. DOD said that the report was accurate. It noted that the report did not compare Senior Prime enrollees’ utilization rates with those of Medicare+Choice plans and suggested that our comparison with fee-for-service might be misleading, because it did not take account of the richer benefit package offered by Senior Prime. DOD further stated that the utilization data should cover the full 3 years of the demonstration experience and that utilization might be higher during the initial phase of a new plan. Finally, DOD stated that access and satisfaction for TRICARE Prime enrollees were adversely affected by the demonstration. CMS agreed with the report’s findings and suggested that higher quality of care might be an explanation for Senior Prime enrollees’ higher use of services. (DOD and CMS comments appear in appendixes VI and VII.) In comparing utilization rates with Medicare fee-for-service in the same areas, we chose a comparison group that would be expected to have higher utilization than Senior Prime or any other managed care plan. Fee-for-service beneficiaries can obtain care from any provider without restriction, whereas Medicare+Choice plans typically have some limitations on access. Consequently, the fact that Senior Prime utilization was substantially higher than fee-for-service utilization is striking. As mandated by law, our evaluation covers the initial demonstration period (through December 2000). We therefore did not attempt to obtain information on utilization during 2001 and, in any case, the lag in data reporting would have prevented our doing so. However, during the first 2 full years of the demonstration, utilization declined slightly: outpatient visits in 2000 were 2 percent lower than in 1999. As we have reported elsewhere, site officials found little evidence that the demonstration affected TRICARE Prime enrollees’ satisfaction or access to care. Regarding the possible impact of quality of care on use of services, we examined several health outcome indicators and found no evidence of such an effect. We are sending copies of this report to the Secretary of Defense and the Administrator of the Centers for Medicare and Medicaid Services. We will make copies available to others upon request. 
If you or your staffs have questions about this report, please contact me at (202) 512-7114. Other GAO contacts and staff acknowledgments are listed in appendix VIII. To address the questions Congress asked about Medicare subvention, we fielded a mail survey of military retirees and their family members who were eligible for the subvention demonstration. The survey had two interlocking components: a panel of enrollees and nonenrollees, who were surveyed both at the beginning and the end of the demonstration, and two cross sections or snapshots of enrollees and nonenrollees—one taken at the beginning of the demonstration and the other at the end. To assess those questions that involved change over time, we sampled and surveyed by mail enrollees and nonenrollees, stratified by site, at the beginning of the demonstration. These same respondents were resurveyed from September through December 2000, shortly before the demonstration’s initial period ended. Because a prior report describes our initial survey, this appendix focuses on our second survey. To conduct the second round of data collection, we began with 15,223 respondents from the first round of surveys. To be included in the panel, three criteria had to be met: (1) the person must still be alive, (2) the person must still reside in an official demonstration area, and (3) the person must have maintained the same enrollment status, that is, enrolled or not enrolled. Based on these criteria, we mailed 13,332 surveys to our panel sample of enrollees and nonenrollees. Starting with a sample of 13,332 retirees and their family members, we obtained usable questionnaires from 11,986 people, an overall response rate of 91 percent. (See table 6, which also shows the adjustments to the initial sample and to the estimated population size. See table 7 for the reasons for nonresponse.) To enable comparisons between enrollees and nonenrollees at the end of the demonstration, the second survey was augmented to include persons who had enrolled since the first survey as well as additional nonenrollees. The overall composition of the Senior Prime enrollee population had changed from the time of our first survey. When we drew our second sample in July 2000, 36 percent of all enrollees were new—that is, they had enrolled since our first survey—and over two-fifths of them were age-ins who had turned 65 since the demonstration started. From the time of our first survey to the time of our second survey, only 861 people had disenrolled from Senior Prime. Therefore, we surveyed all voluntary disenrollees. Data from all respondents—those we surveyed for the first time as well as those in the panel—were weighted, to yield a representative sample of the demonstration population at the end of the program. The sample for the cross section study included the panel sample as well as the augmented populations. We defined our population as all Medicare-eligible military retirees living in the demonstration sites and eligible for Senior Prime. The sample of new enrollees was drawn from all those enrolled in the demonstration according to the Iowa Foundation’s enrollment files. The supplemental sample of nonenrollees was drawn from all retirees age 65 and over in the Defense Enrollment Eligibility Reporting System who (1) had both Medicare part A and part B coverage, (2) lived within the official demonstration zip codes, (3) were not enrolled in Senior Prime, and (4) were not part of our first sample. 
We stratified our sample of new enrollees and new nonenrollees by site and by whether they aged in. We oversampled each stratum to have a large enough number to conduct analyses of subpopulations. The total sample for all sites was 23,967, drawn from a population of 117,618. Starting with a sample of 23,967 retirees and their family members, we obtained complete and usable questionnaires from 20,870 people, an overall response rate of 88 percent. (See table 8, which also shows the adjustments to the initial sample and to the estimated population size. See table 9, which shows the reasons for nonresponse.) Response rates varied across sites and subpopulations. Rates ranged from 95.3 percent among aged-in new enrollees to 66.7 percent among disenrollees. The original questionnaire that was sent to our panel sample was created based on a review of the literature and five existing survey instruments. In addition, we pretested the instrument with several retiree groups. For the second round of data collection, we created four different versions of the questionnaire, based on the original questionnaire. The four versions were nearly the same, with some differences in the sections on Senior Prime and health insurance coverage. (See table 10 for a complete list of all the survey questions used in our analyses.) For the panel sample, our objective was to collect the same data at two points in time. Therefore, in constructing the questionnaires for the panel enrollees and panel nonenrollees we essentially used the same instrument as the original survey to answer questions about the effect of the demonstration on access to care, quality of care, health care use, and out-of-pocket costs. However, we modified our questions about plan satisfaction and health insurance coverage. In constructing the questionnaires for the new enrollees, we generally adopted the same questions from the panel enrollee instrument to measure access to care, quality of care, health care use, and out-of-pocket costs. However, we also asked the new enrollees about their health care experiences in the 12 months before they joined Senior Prime. For new nonenrollees, we were able to use the same instrument as we had used for the panel nonenrollees, because their health care experiences were not related to tenure in Senior Prime. Finally, the disenrollee questionnaire, like the other versions, did not change from the original instrument in the measures on access to care, quality of care, health care use, and out-of-pocket costs. However, we added questions on the reasons for disenrollment. To detect the effects the demonstration had on both enrollees’ and nonenrollees’ access to care and satisfaction with care, we compared the differences between survey responses at both points in time and among each demonstration site. For most questions, retirees were asked both before the demonstration and at the end of the demonstration how much they agreed or disagreed with each statement. They were given five possible answers: strongly agree, agree, neither agree nor disagree, disagree, and strongly disagree. To calculate change, responses were assigned a numeric value on a five-point scale, with five being the highest and one being the lowest. To properly quantify the response, some scales had to be reversed. Where necessary, questions were rescaled so that “agree” represents a positive answer and “disagree” a negative answer. 
To obtain a measure of change, the value of the response from the first survey was subtracted from the value of the response from the second survey. A positive value indicates improvement; a negative value indicates decline. The net improvement is calculated as the difference between the proportion of respondents within each sample population who improved and the proportion of those who declined. Four separate significance tests were performed. (See table 11.) The first test was for net improvement (the difference between improved and declined) among enrollees. The second test was for net improvement among nonenrollees. The third test was for the difference of net improvement between enrollees and nonenrollees. Finally, we tested whether the net improvement for each site is significantly different from the net improvement of the other sites. (See tables 11 and 12.) In addition to the change of access and quality among enrollees and nonenrollees, we also examined the level of access and quality at the time of the second survey among the cross section sample. (See table 12.) Three separate significance tests were performed. The first test of significance was between enrollees and nonenrollees who said they strongly agreed with each statement. The second test of significance was between enrollees and nonenrollees who said they either strongly agreed or agreed with each statement. The final test was whether the site percentage differs significantly from the overall percentage. In this appendix, we describe the DOD and Medicare data that we used to analyze utilization. We also summarize the models that we developed to risk adjust acute inpatient care and outpatient care and give results both demonstration wide and by site. For these analyses, we defined the Senior Prime enrollee population as those who had enrolled as of December 31, 1999. We used DOD data for 1999 as the source of our counts of hospital stays and outpatient visits to both MTF and civilian network providers. We limited our analysis to hospital stays of 1 day or more to eliminate inconsistencies between Medicare and TRICARE in the use of same-day discharges. Our counts of outpatient utilization include (1) visits and ambulatory surgeries in MTF outpatient clinics and (2) visits to network providers—doctors’ offices, ambulatory surgeries, hospital emergency rooms, and hospital outpatient clinics. To identify our comparison group of fee-for-service beneficiaries in the demonstration areas, we used CMS’ 20-percent Medicare sample, and extracted those beneficiaries residing in the subvention areas. We excluded anyone who had been in a Medicare+Choice plan for any part of the year. To make the comparison fair, we also excluded certain groups not represented or only minimally represented in Senior Prime: persons with end-stage renal disease (ESRD), Medicaid beneficiaries, persons with disabilities (under age 65), and people who lost Medicare part A or part B entitlement for reasons other than death. We derived our counts of Medicare fee-for-service utilization for the sample from Medicare claims files. For those who were in either Senior Prime or fee-for-service for less than a full year, we estimated full-year utilization counts. We identified a separate comparison group of persons eligible for the demonstration who did not enroll. We collected both Medicare fee-for-service claims and DOD encounter data for the sample of enrollees and nonenrollees who answered both our first and second surveys. 
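The net improvement measure on which the survey significance tests above operate is easy to state precisely. The following is a minimal sketch of that computation with hypothetical five-point-scale responses; it is an illustration, not the actual analysis code.

```python
# Minimal sketch of the change-scoring computation described above.
# Responses use the five-point scale (5 = strongly agree ... 1 = strongly
# disagree), with scales reversed where needed so that agreement is always
# the positive answer. The data below are hypothetical, not survey records.

def net_improvement(first_wave, second_wave):
    """Proportion of respondents whose score rose between the two surveys
    minus the proportion whose score fell; a positive result means net
    improvement."""
    changes = [after - before for before, after in zip(first_wave, second_wave)]
    improved = sum(1 for c in changes if c > 0) / len(changes)
    declined = sum(1 for c in changes if c < 0) / len(changes)
    return improved - declined

before = [3, 2, 4, 5, 1]  # hypothetical first-survey scores
after = [4, 4, 3, 5, 2]   # the same respondents at the second survey
print(f"net improvement: {net_improvement(before, after):+.0%}")  # +40%
```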
To compare the utilization of Senior Prime enrollees to Medicare fee-for-service beneficiaries in the demonstration areas, we developed several models of fee-for-service utilization (for hospitalization, length of stay, and outpatient care). We then applied each model to Senior Prime enrollees—taking account of their demographic characteristics and health status—to predict what their utilization would have been in Medicare fee-for-service. The ratio of their predicted utilization to their actual Senior Prime utilization gives a measure of the amount by which Senior Prime utilization exceeded or fell short of fee-for-service utilization for people with the enrollees’ characteristics. Table 13 compares the characteristics of Senior Prime enrollees with Medicare fee-for-service beneficiaries in the demonstration area. Acute hospitalization is a relatively rare event: only one out of five Medicare beneficiaries (in the counterpart 20-percent fee-for-service sample) is hospitalized during the year, and about half of those who are hospitalized are admitted again during the same year. We therefore used Poisson regression, which is designed to predict the number of occurrences (counts) of a rare event during a fixed time frame, to estimate the number of acute hospitalizations. Positive coefficients are interpreted as reflecting factors that increase the hospitalization rate, while negative coefficients indicate a decrease in that rate. The strongest factor affecting the number of hospitalizations is the HCC score, which measures how ill and how costly a person is. Its effect is not linear—both squared and cubed terms enter the model. (See table 14.) Diagnostic groupings are based on the International Classification of Diseases, 9th Revision, Clinical Modification (ICD-9-CM); they include endocrine, nutritional, and metabolic diseases and immunity disorders; diseases of the nervous system and sense organs; diseases of the musculoskeletal system and connective tissue; and a supplementary classification (V01-V82). Using the same approach and models, we examined utilization at each site. (See table 16.) Adjusting for risk, both hospital stays and outpatient visits were substantially greater in Senior Prime than in fee-for-service at all sites. However, the differences in length of stay were small, with lengths of stay generally higher in fee-for-service. “Crowd-outs” were nonenrollees who had used MTF care before the demonstration but were unable to do so after the demonstration started. In this report, we define crowd-outs as those 4,594 nonenrollees (6 percent of all nonenrollees) who had, according to their survey answers, received all or most of their care at an MTF before the demonstration but received none or only some of their care at an MTF after the demonstration started. However, as table 17 shows, crowd-out can be defined either more narrowly or more broadly. By the narrowest definition of crowd-out—those nonenrollees who received all of their care at an MTF before the demonstration but none of their care at an MTF after the demonstration started—only 1,498 persons (2 percent of all nonenrollees) were crowded out. However, if we count all those who received less care than before, 12,133 nonenrollees (16 percent) were crowded out. As expected, many of the 4,594 nonenrollees whom we characterized as crowd-outs changed their attitudes toward military care during the demonstration. 
As shown in table 18, they reported a decline in access to MTF care as well as lower satisfaction with care in MTFs. However, they did not report significant changes in satisfaction on issues not explicitly connected to MTFs. DOD’s MTF encounter data and network claims data confirmed the self-reports of crowd-outs. The crowd-outs’ MTF outpatient care dropped dramatically during the demonstration, and the increase in fee-for-service (FFS) outpatient visits was not sufficient to offset this decline. However, as shown in table 19, there was no decline in acute hospitalizations. In this appendix, we describe our methods for analyzing the effects of the subvention demonstration on three indicators of health outcomes—mortality, health status, and preventable hospitalization. Using our first survey, we calculated the mortality rate from the date of the survey response to January 31, 2001. The source of death information was the Medicare Enrollment Database. We excluded Medicare+Choice members because we could not obtain their diagnoses, which we needed to calculate risk factors. The unadjusted 2-year mortality rate was 0.06 for Senior Prime enrollees and 0.08 for nonenrollees. Although the difference is significant, it disappears when we adjust for individual risk. The adjusted 2-year mortality rate is 0.06 for both enrollees and nonenrollees. (See table 20.) We used the Cox proportional hazard model to calculate individuals’ risk-adjusted mortality rate. A hazard ratio greater than 1 indicates a higher risk of death while a hazard ratio less than 1 indicates a lower risk. For example, a hazard ratio for males of 1.5 means that males are 50 percent more likely to die than females, holding other factors constant. Similarly, a hazard ratio of 0.5 for retirees with HCC scores in the lowest quartile means that they are 50 percent less likely to die than those with HCC scores in the middle two quartiles, holding other factors constant. Enrollment in Senior Prime did not have a significant effect on mortality. (See table 21 for a description of the factors that entered our model and of their estimated effects.) We scored self-reported health status using the SF-12 physical and mental health summary scales (see Ware, J. E., Kosinski, M., and Keller, S. D., SF-12: How to Score the SF-12 Physical and Mental Health Summary Scales, The Health Institute, New England Medical Center, Second Edition, pp. 12-13). The change in the score between the two surveys was also insignificant. We examined both the unadjusted score and the adjusted score, using a linear regression model (see table 23), but neither was significant, and enrollment in Senior Prime was not a significant factor in the model. We analyzed preventable hospitalizations—hospital stays that can often be avoided by appropriate outpatient care—using several alternate models. Specifically, we estimated the effect of Senior Prime enrollment on the likelihood of having a preventable hospitalization, adjusting for age, sex, and health conditions. Measures of a person’s health conditions included the HCC score, an index of comorbidities, and the number of recent hospitalizations. In addition, we controlled for the number of outpatient clinic and physician visits, since outpatient care is considered a means of preventing hospitalization. We analyzed data on Senior Prime enrollees and on Medicare fee-for-service beneficiaries who were not military retirees and who lived in the demonstration areas. 
Within this combined group of enrollees and fee-for-service beneficiaries, we modeled preventable hospitalizations for two populations: (1) those who had been hospitalized in 1999 and (2) those who had at least one chronic disease in 1999—whether they had been hospitalized or not. Our analysis of the demonstration’s effect on preventable hospitalizations yielded inconsistent results. For the first population (hospitalizations), we found that Senior Prime enrollment was associated with more preventable hospitalizations. By contrast, for the second population (the chronically ill), Senior Prime enrollment was associated with fewer preventable hospitalizations. Other GAO staff who made significant contributions to this work included Jessica Farb, Maria Kronenburg, and Dae Park. Robin Burke provided technical advice and Martha Wood provided technical advice and assistance. Medicare Subvention Demonstration: DOD Costs and Medicare Spending (GAO-02-67, Oct. 31, 2001). Medicare Subvention Demonstration: DOD’s Pilot Appealed to Seniors, Underscored Management Complexities (GAO-01-671, June 14, 2001). Medicare Subvention Demonstration: Enrollment in DOD Pilot Reflects Retiree Experiences and Local Markets (GAO/HEHS-00-35, Jan. 31, 2000). Defense Health Care: Appointment Timeliness Goals Not Met; Measurement Tools Need Improvement (GAO/HEHS-99-168, Sept. 30, 1999). Medicare Subvention Demonstration: DOD Start-up Overcame Obstacles, Yields Lessons, and Raises Issues (GAO/GGD/HEHS-99-161, Sept. 28, 1999). Medicare Subvention Demonstration: DOD Data Limitations May Require Adjustments and Raise Broader Concerns (GAO/HEHS-99-39, May 28, 1999). | In the Balanced Budget Act of 1997, Congress established a three-year demonstration, called Medicare subvention, to improve the access of Medicare-eligible military retirees to care at military treatment facilities (MTF). 
The demonstration allowed Medicare-eligible retirees to get their health care largely at MTFs by enrolling in a Department of Defense (DOD) Medicare managed care organization known as TRICARE Senior Prime. During the subvention demonstration, access to health care for many retirees who enrolled in Senior Prime improved, while access to MTF care for some of those who did not enroll declined. Many enrollees in Senior Prime said they were better able to get care when they needed it. They also reported better access to doctors in general as well as to care at MTFs. Enrollees generally were more satisfied with their care than before the demonstration. However, the demonstration did not improve enrollees' self-reported health status. In addition, compared to nonenrollees, enrollees did not have better health outcomes, as measured by their mortality rates and rates of "preventable" hospitalizations. Moreover, DOD's costs were high, reflecting enrollees' heavy use of hospitals and doctors. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Department of Homeland Security Appropriations Act for Fiscal Year 2007 states that “none of the funds appropriated…shall be obligated for full scale procurement of monitors until the Secretary of Homeland Security has certified…that a significant increase in operational effectiveness will be achieved.” DNDO noted that certification would meet DHS guidelines for the review and approval of complex acquisitions. Specifically, DNDO stated that the Secretary’s decision would be made in the context of DHS “Key Decision Point 3,” which details the review and approval necessary for DHS acquisition programs to move from the “Capability Development and Demonstration” phase to the “Production and Deployment” phase. To meet the statutory requirement to certify that the ASPs will provide a “significant increase in operational effectiveness,” and requirements outlined in DHS Management Directive 1400, DNDO, with input from subject matter experts, developed a series of tests intended to demonstrate, among other things, ASP performance and deployment readiness. The tests were conducted at several venues, including the Nevada Test Site, the New York Container Terminal, the Pacific Northwest National Laboratory, and five ports of entry. DNDO stated that its request for full-scale production approval would be based upon completed and documented results of these tests. To meet the Secretary’s goal of deploying 225 ASPs by the end of calendar year 2008, Secretarial Certification was scheduled for June 26, 2007. To guide the test operations, DNDO defined a set of Critical Operational Issues that outlined the tests’ technical objectives and provided the baseline to measure demonstrated effectiveness. The purpose of Critical Operational Issue 1 is to “verify operational effectiveness” of ASPs and determine whether “ASP systems significantly increase operational effectiveness relative to the current generation detection and identification system.” DNDO conducted a series of tests at the Nevada Test Site, the single focus of which, according to DNDO, was to resolve Critical Operational Issue 1. According to DNDO, these tests began in February 2007 and concluded in March 2007. DNDO’s Nevada Test Site test plan, dated January 12, 2007, identified three primary test objectives comparing the operational effectiveness of the ASP systems with existing detection and identification systems at current high-volume operational thresholds. Specifically, DNDO sought to determine the ASPs’ probability to (1) detect and identify nuclear and radiological threats, (2) discriminate threat and non-threat radionuclides in primary inspections, and (3) detect and identify threat radionuclides in the presence of non-threat radionuclides. The Nevada Test Site test plan had two key components. First, DNDO developed guidelines for basic test operations and procedures, including test goals and expectations, test tasks and requirements, and roles and responsibilities of personnel involved in the testing, including the ASP contractors. The second component involved the National Institute of Standards and Technology developing test protocols that defined, among other things, how many times a container carrying test materials would need to be driven through portal monitors in order to obtain statistically relevant results. 
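The testimony does not reproduce NIST's protocols, but a standard sample-size calculation illustrates the kind of question they answer: how many runs are needed before an estimated detection probability is statistically meaningful. The detection probability and tolerance below are assumptions chosen purely for illustration, not NIST's figures.

```python
# Illustrative only: two standard ways to size the number of test runs.
# The probabilities and tolerances are assumed for illustration; they are
# not taken from NIST's actual test protocols.
import math

def runs_for_ci(p, half_width, z=1.96):
    """Runs needed so a normal-approximation 95% confidence interval for an
    estimated detection probability p has the given half-width."""
    return math.ceil(z * z * p * (1 - p) / (half_width * half_width))

def runs_for_zero_failures(p_target, alpha=0.05):
    """Runs with no missed detections needed to conclude, at confidence
    1 - alpha, that the true detection probability exceeds p_target."""
    return math.ceil(math.log(alpha) / math.log(p_target))

print(runs_for_ci(0.90, 0.05))        # ~139 runs for a +/-5-point interval
print(runs_for_zero_failures(0.90))   # ~29 consecutive detections
```

The second calculation is the familiar zero-failure rule: under these assumed numbers, roughly 29 consecutive detections would be needed before one could claim, at 95 percent confidence, that a system detects a given source more than 90 percent of the time.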
DNDO’s tests at the Nevada Test Site were designed to compare the current system—using PVTs in primary inspections and a PVT and RIID combination in secondary inspections—to other configurations, including PVTs in primary and ASPs in secondary, and ASPs in both primary and secondary inspection positions. DNDO tested three ASPs and four PVTs. The ASP vendors included Thermo, Raytheon, and Canberra. The PVT vendors included SAIC, TSA, and Ludlum. According to the test plan, to the greatest extent possible, PVT, ASP, and RIID handheld devices would be operated consistent with approved CBP standard operating procedures. Prior to “formal” collection of the data that would be used to support the resolution of Critical Operational Issue 1, DNDO conducted a series of tests it referred to as “dry runs” and “dress rehearsals.” The purpose of the dry runs was to, among other things, verify ASP systems’ software performance against representative test materials and allow test teams and system contractors to identify and implement software and hardware improvements to ASP systems. The purpose of the dress rehearsals was to observe the ASPs in operation against representative test scenarios and allow the test team to, among other things, develop confidence in the reliability of the ASP system so that operators and data analysts would know what to expect and what data to collect during the formal test, collect sample test data, and determine what errors were likely to occur in the data collection process and eliminate opportunities for error. In addition to improving ASP performance through dry runs and dress rehearsals conducted prior to formal data collection, ASP contractors were also significantly involved in the Nevada Test Site test processes. Specifically, the test plan stated that “contractor involvement was an integral part of the NTS test events to ensure the systems performed as designed for the duration of the test.” Furthermore, ASP contractors were available on site to repair their systems at the request of the test director and to provide quality control support of the test data through real-time monitoring of available data. DNDO stated that Pacific Northwest National Laboratory representatives were also on site to provide the same services for the PVT systems. DNDO conducted its formal tests in two phases. The first, called Phase 1, was designed to support resolution of Critical Operational Issue 1 with high statistical confidence. DNDO told us on multiple occasions and in a written response that only data collected during Phase 1 would be included in the final report presented to the Secretary to request ASP certification. According to DNDO, the second, called Phase 3, provided data for algorithm development, which targeted specific and known areas in need of work, and data to aid in the development of secondary screening operations and procedures. According to DNDO documentation, Phase 3 testing was not in support of the full-scale production decision. Further, DNDO stated that Phase 3 testing consisted of relatively small sample sizes, since the data would not support estimating the probability of detection with a high confidence level. On May 30, 2007, following the formal tests and the scoring of their results, DNDO told GAO that it had conducted additional tests that DNDO termed “Special Testing.” The details of these tests were not outlined in the Nevada Test Site test plan. 
On June 20, 2007, DNDO provided GAO with a test plan document entitled “ASP Special Testing,” which described the test sources used to conduct the tests but did not say when the tests took place. According to DNDO, special testing was conducted throughout the formal Phase 1 testing process and included 12 combinations of threat, masking, and shielding materials that differed from the “dry run,” “dress rehearsal,” and formal tests. DNDO also stated that the tests were “blind,” meaning that neither DNDO testing officials nor the ASP vendors knew what sources would be included in the tests. According to DNDO, these special tests were recommended by subject matter experts outside the ASP program to address the limitations of the original NTS test plan, including available time and funding resources, special nuclear material sources, and the number of test configurations that could be incorporated in the test plan, such as source isotope and activity, shielding materials and thicknesses, masking materials, vehicle types, and measurement conditions. Unlike in the formal tests, National Institute of Standards and Technology officials were not involved in determining the number of test runs necessary to obtain statistically relevant results for the special tests. Based on our analysis of DNDO’s test plan, the test results, and discussions with experts from four national laboratories, we are concerned that DNDO used biased test methods that enhanced the performance of the ASPs. In the dry runs and dress rehearsals, DNDO conducted many preliminary runs of radiological, nuclear, masking, and shielding materials so that ASP contractors could collect data on the radiation being emitted and modify their software accordingly. Specifically, we are concerned because almost all of the materials, and most combinations of materials, DNDO used in the formal tests were identical to those that the ASP contractors had specifically set their ASPs to identify during the dry runs and dress rehearsals. It is highly unlikely that such favorable circumstances would present themselves under real-world conditions. A key component of the NTS tests was to assess the ASPs’ ability to detect and identify dangerous materials, specifically when that material was masked or “hidden” by benign radioactive materials. Based on our analysis, the masking materials DNDO used at NTS did not sufficiently test the performance limits of the ASPs. DOE national laboratory officials raised similar concerns with DNDO after reviewing a draft of the test plan in November 2006. These officials stated that the masking materials DNDO planned to use in its tests did not emit enough radiation to mask the presence of nuclear materials in a shipping container and noted that many of the materials that DOE program officials regularly observe passing through international ports emit significantly higher levels of radiation than the masking materials DNDO used for its tests. DNDO officials told us that the masking materials used at the Nevada Test Site represented the average emissions seen in the stream of commerce at the New York Container Terminal. However, according to data accumulated as part of DOE’s program to secure international ports (the Megaports program), a significant percentage of cargo passing through one European port potentially on its way to the United States has emission levels greater than the average radiation level for cargo that typically sets off radiation detection alarms. 
Importantly, DNDO officials told us that the masking materials used at the Nevada Test Site were not intended to provide insight into the limits of ASP detection capabilities. Yet, DNDO’s own test plan for “ASP Special Testing” states, “The DNDO ASP NTS Test Plan was designed to…measure capabilities and limitations in current ASP systems.” In addition, the NTS tests did not objectively test the ASPs against the currently deployed radiation detection system. DNDO’s test plan stated that, to the greatest extent possible, PVT, ASP, and RIID handheld devices would be operated consistent with approved CBP standard operating procedures. However, after analyzing test results and procedures used at the Nevada Test Site, CBP officials determined that DNDO had, in fact, not followed a key CBP procedure. In particular, if a threat is identified during a secondary screening, or if the result of the RIID screening is not definitive, CBP procedures require officers to send the data to CBP’s Laboratories and Scientific Services for further guidance. DNDO did not include this critical step in its formal tests. CBP officials also expressed concern with DNDO’s preliminary test results when we met with them in May 2007. Regarding the special tests DNDO conducted, based on what DNDO has told us and our own evaluation of the special test plan, we note two problems. First, because DNDO did not consult NIST on the design of the blind tests, we do not know the statistical significance of the results. Second, the tests were not entirely blind, because some of the nuclear materials used in the blind tests were also used to calibrate the ASPs on a daily basis. During the course of our work, CBP, DOE, and national laboratory officials we spoke to voiced concern about their lack of involvement in the planning and execution of the Nevada Test Site tests. We brought our concerns about this issue, and those of DOE and CBP, to DNDO’s attention on multiple occasions. In response to these concerns, specifically those posed by DOE, DNDO convened a conference of technical experts on June 27, 2007, to discuss the Nevada test results and the methods DNDO used to test the effects of masking materials on what the ASPs are able to detect. As a result of discussions held during that meeting, subject matter experts agreed that computer-simulated injection studies could help determine the ASPs’ ability to detect threats in the presence of highly radioactive masking material. According to a Pacific Northwest National Laboratory report submitted to DNDO in December 2006, injection studies are particularly useful for measuring the relative performance of algorithms, but their results should not be construed as a measure of system vulnerability. To assess the limits of portal monitors’ capabilities, the Pacific Northwest National Laboratory report states that actual testing should be conducted using threat objects immersed in containers with various masking agents, shielding, and cargo. DNDO officials stated at the meeting that further testing could be scheduled, if necessary, to fully satisfy DOE concerns. On July 20, 2007, DHS Secretary Chertoff notified certain members of the Congress that he planned to convene an independent expert panel to review DNDO’s test procedures, test results, associated technology assessments, and cost-benefit analyses to support the final decision to deploy ASPs. 
In making this announcement, Secretary Chertoff noted the national importance of developing highly effective radiation detection and identification capabilities as one of the main reasons for seeking an independent review of DNDO’s actions. On August 30, 2007, the DHS Undersecretary for Management recommended that the Secretary of Homeland Security delay Secretarial Certification of ASPs for an additional two months. According to DHS, the delay is intended to provide CBP more time to field ASP systems, a concern CBP had raised early in our review. Effectively detecting and identifying radiological or nuclear threats at U.S. borders and ports of entry is a vital matter of national security, and developing new and advanced technology is critical to U.S. efforts to prevent a potential attack. However, it is also critical to fully understand the strengths and weaknesses of any next-generation radiation detection technology before it is deployed in the field and to know, to the greatest extent possible, when or how that equipment may fail. In our view, the tests conducted by DNDO at the Nevada Test Site between February and March 2007 used biased test methods and were not an objective assessment of the ASPs’ performance capabilities. We believe that DNDO’s test methods—specifically, conducting dry runs and dress rehearsals with contractors prior to formal testing—enhanced the performance of the ASPs beyond what they are likely to achieve in actual use. Furthermore, the tests were not a rigorous evaluation of the ASPs’ capabilities, but rather a developmental demonstration of ASP performance under controlled conditions that did not test the limitations of the ASP systems. As a result of DNDO’s test methods and the limits of the tests—including a need to meet a Secretarial Certification deadline and the limited configurations of special nuclear material sources, masking, and shielding materials used—we believe that the results of the tests conducted at the Nevada Test Site do not demonstrate a “significant increase in operational effectiveness” relative to the current detection system and cannot be relied upon to make a full-scale production decision. We recommend that the Secretary of Homeland Security take the following actions: Delay Secretarial Certification and full-scale production decisions of the ASPs until all relevant tests and studies have been completed and limitations to these tests and studies have been identified and addressed. Furthermore, results of these tests and studies should be validated and made fully transparent to DOE, CBP, and other relevant parties. Once the tests and studies have been completed, evaluated, and validated, DHS should determine, in cooperation with CBP, DOE, and other stakeholders, including independent reviewers, whether additional testing is needed. If additional testing is needed, the Secretary should appoint an independent group within DHS, not aligned with the ASP acquisition process, to conduct objective, comprehensive, and transparent testing that realistically demonstrates the capabilities and limitations of the ASP system. This independent group would be separate from the recently appointed independent review panel. Finally, the results of the tests and analyses should be reported to the appropriate congressional committees before large-scale purchases of ASPs are made. Mr. Chairman, this concludes our prepared statement. We would be happy to respond to any questions you or other members of the subcommittee may have. 
For further information about this testimony, please contact me, Gene Aloise, at (202) 512-3841 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Erika D. Carter, Alison O’Neill, Jim Shafer, Daren Sweeney, and Eugene Wisnoski made key contributions to this statement. Combating Nuclear Smuggling: DHS’s Decision to Procure and Deploy the Next Generation of Radiation Detection Equipment Is Not Supported by Its Cost-Benefit Analysis. GAO-07-581T. Washington, D.C.: March 14, 2007. Nuclear Nonproliferation: Focusing on the Highest Priority Radiological Sources Could Improve DOE’s Efforts to Secure Sources in Foreign Countries. GAO-07-580T. Washington, D.C.: March 13, 2007. Combating Nuclear Smuggling: DNDO Has Not Yet Collected Most of the National Laboratories’ Test Results on Radiation Portal Monitors in Support of DNDO’s Testing and Development Program. GAO-07-347R. Washington, D.C.: March 9, 2007. Technology Assessment: Securing the Transport of Cargo Containers. GAO-06-68SU. Washington, D.C.: January 25, 2006. Combating Nuclear Smuggling: DHS’s Cost-Benefit Analysis to Support the Purchase of New Radiation Detection Portal Monitors Was Not Based on Available Performance Data and Did Not Fully Evaluate All the Monitors’ Costs and Benefits. GAO-07-133R. Washington, D.C.: October 17, 2006. Combating Nuclear Terrorism: Federal Efforts to Respond to Nuclear and Radiological Threats and to Protect Emergency Response Capabilities Could Be Strengthened. GAO-06-1015. Washington, D.C.: September 21, 2006. Border Security: Investigators Transported Radioactive Sources Across Our Nation’s Borders at Two Locations. GAO-06-940T. Washington, D.C.: July 7, 2006. Combating Nuclear Smuggling: Challenges Facing U.S. Efforts to Deploy Radiation Detection Equipment in Other Countries and in the United States. GAO-06-558T. Washington, D.C.: March 28, 2006. Combating Nuclear Smuggling: DHS Has Made Progress Deploying Radiation Detection Equipment at U.S. Ports-of-Entry, but Concerns Remain. GAO-06-389. Washington, D.C.: March 22, 2006. Combating Nuclear Smuggling: Corruption, Maintenance, and Coordination Problems Challenge U.S. Efforts to Provide Radiation Detection Equipment to Other Countries. GAO-06-311. Washington, D.C.: March 14, 2006. Combating Nuclear Smuggling: Efforts to Deploy Radiation Detection Equipment in the United States and in Other Countries. GAO-05-840T. Washington, D.C.: June 21, 2005. Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 31, 2005. Homeland Security: DHS Needs a Strategy to Use DOE’s Laboratories for Research on Nuclear, Biological, and Chemical Detection and Response Technologies. GAO-04-653. Washington, D.C.: May 24, 2004. Homeland Security: Summary of Challenges Faced in Targeting Oceangoing Cargo Containers for Inspection. GAO-04-557T. Washington, D.C.: March 31, 2004. Homeland Security: Preliminary Observations on Efforts to Target Security Inspections of Cargo Containers. GAO-04-325T. Washington, D.C.: December 16, 2003. Homeland Security: Radiation Detection Equipment at U.S. Ports of Entry. GAO-03-1153TNI. Washington, D.C.: September 30, 2003. Homeland Security: Limited Progress in Deploying Radiation Detection Equipment at U.S. Ports of Entry. GAO-03-963. Washington, D.C.: September 4, 2003.
Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges. GAO-03-297T. Washington, D.C.: November 18, 2002. Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T. Washington, D.C.: October 17, 2002. Nuclear Nonproliferation: U.S. Efforts to Combat Nuclear Smuggling. GAO-02-989T. Washington, D.C.: July 30, 2002. Nuclear Nonproliferation: U.S. Efforts to Help Other Countries Combat Nuclear Smuggling Need Strengthened Coordination and Planning. GAO-02-426. Washington, D.C.: May 16, 2002. | The Department of Homeland Security's (DHS) Domestic Nuclear Detection Office (DNDO) is responsible for addressing the threat of nuclear smuggling. Radiation detection portal monitors are key elements in our national defenses against such threats. DHS has sponsored testing to develop new monitors, known as advanced spectroscopic portal (ASP) monitors. In March 2006, GAO recommended that DNDO conduct a cost-benefit analysis to determine whether the new portal monitors were worth the additional cost. In June 2006, DNDO issued its analysis. In October 2006, GAO concluded that DNDO did not provide a sound analytical basis for its decision to purchase and deploy ASP technology and recommended further testing of ASPs. DNDO conducted this ASP testing at the Nevada Test Site (NTS) between February and March 2007. GAO's statement addresses the test methods DNDO used to demonstrate the performance capabilities of the ASPs and whether the NTS test results should be relied upon to make a full-scale production decision. Based on our analysis of DNDO's test plan, the test results, and discussions with experts from four national laboratories, we are concerned that DNDO's tests were not an objective and rigorous assessment of the ASPs' capabilities. Our concerns with DNDO's test methods include the following: (1) DNDO used biased test methods that enhanced the performance of the ASPs. Specifically, DNDO conducted numerous preliminary runs of almost all of the materials, and combinations of materials, that were used in the formal tests and then allowed ASP contractors to collect test data and adjust their systems to identify these materials. It is highly unlikely that such favorable circumstances would present themselves under real-world conditions. (2) DNDO's NTS tests were not designed to test the limitations of the ASPs' detection capabilities--a critical oversight in DNDO's original test plan. DNDO did not use a sufficient amount of the type of materials that would mask or hide dangerous sources and that ASPs would likely encounter at ports of entry. DOE and national laboratory officials raised these concerns to DNDO in November 2006. However, DNDO officials rejected their suggestion of including additional and more challenging masking materials because, according to DNDO, there would not be sufficient time to obtain them given the deadline of obtaining Secretarial Certification by June 26, 2007.
By not collaborating with DOE until late in the test planning process, DNDO missed an important opportunity to procure a broader, more representative set of well-vetted and characterized masking materials. (3) DNDO did not objectively test the performance of handheld detectors because it did not follow a critical CBP standard operating procedure that is fundamental to this equipment's performance in the field. Because of concerns that DNDO did not sufficiently test the limitations of ASPs, DNDO is attempting to compensate for weaknesses in the original test plan by conducting additional studies--essentially computer simulations. While DNDO, CBP, and DOE have now agreed to wait and see whether the results of these studies will provide useful data on the ASPs' capabilities, in our view and that of other experts, computer simulations are not as good as actual testing with nuclear and masking materials. |
The Small Business Innovation Development Act of 1982 provided for a three-phase program. Phase I is intended to determine the scientific and technical merit and feasibility of a proposed research idea. Work in phase II further develops the idea, taking into consideration such things as the commercialization potential. Phase III generally involves the use of nonfederal funds for the commercial application of a technology or non-SBIR federal funds for continued R&D under government contracts. The Small Business Research and Development Enhancement Act of 1992 reauthorized the SBIR program through fiscal year 2000. The act emphasized the program’s goal of increasing private sector commercialization and provided for incremental increases in SBIR funding, reaching not less than 2.5 percent of agencies’ extramural R&D budgets by fiscal year 1997. Moreover, the act directed SBA to modify its policy directive to reflect increased award amounts for eligible small businesses, that is, businesses with 500 or fewer employees: awards rose from $50,000 to $100,000 for phase I and from $500,000 to $750,000 for phase II, with adjustments once every 5 years for inflation and changes in the program. The agencies’ SBIR officials reported that they have adhered to the act’s requirement not to use SBIR funds to pay for the program’s administrative costs, such as salaries and support services used in processing awards. However, they added that the funding restriction has limited their ability to provide some needed administrative support. The program officials also believe that they are adhering to the statutory requirement to fund the program at 2.5 percent of their agencies’ extramural research budgets. Some officials expressed concern that agencies are interpreting the definition of “extramural budget” differently, which may lead to incorrect calculations of their extramural research budgets. For example, according to DOD’s SBIR program manager, all eight of DOD’s participating military departments and defense agencies that make up the SBIR program have differing views on what each considers an extramural activity and on the appropriate method for tracking extramural R&D obligations. As a result, the program and budget staff have not always agreed on the dollar amount designated as the extramural budget. Of the five agencies we reviewed, only two, NSF and NASA, have recently audited their extramural R&D budgets; both did so in fiscal year 1997. DOD, DOE, and NIH have not conducted any audits of their extramural R&D budgets, nor do they plan to conduct any in the near future. NSF’s audit, which was performed by its Inspector General, concluded that NSF was overestimating the size of its extramural R&D budget by including unallowable costs, such as education, training, and overhead. NSF estimated that these unallowable costs totaled over $100 million. The Inspector General’s audit report concluded that by excluding these “unallowables,” NSF will have reduced the funds available for the SBIR program by approximately $13 million over a 5-year period. Likewise, NASA has completed a survey of fiscal year 1995 budget data and is currently reviewing fiscal year 1996 data at its various field centers. NASA officials said the effort is intended to (1) determine the amount spent on R&D and (2) categorize the R&D as either intramural or extramural activities.
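As a rough consistency check of the NSF figures above (our arithmetic, not the Inspector General's), applying the 2.5 percent set-aside to roughly $100 million per year of excluded costs over 5 years yields about $12.5 million, in line with the reported $13 million reduction:

```python
# Back-of-envelope check (ours): effect of excluding ~$100M/year of
# unallowable costs from the extramural R&D budget on the SBIR set-aside.
unallowable_per_year = 100_000_000  # NSF IG's estimate of unallowable costs
sbir_set_aside = 0.025              # statutory minimum: 2.5% of extramural R&D
years = 5

reduction = unallowable_per_year * sbir_set_aside * years
print(f"${reduction:,.0f}")  # $12,500,000 -- close to the reported ~$13 million
```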
Most of the SBIR officials we interviewed believed that neither the application review process nor current funding cycles were having an adverse effect on award recipients’ financial status or their ability to commercialize their projects. Specifically, officials at DOD, DOE, NSF, and NASA stated that their respective review processes and funding cycles have little to no adverse effect on recipients’ financial status or on the small companies’ ability to commercialize their technologies. Furthermore, NIH believes that having three funding cycles each year has benefited applicants. SBIR officials did say that some recipients reported that any interruption in funding awards, for whatever reason, affects them negatively. One SBIR program manager stated that at DOD, many award recipients have no way of paying their research teams during a funding gap. As a result, ongoing research may be delayed, and the “time-to-market”—the length of time from the completion of research to the commercialization of its results—may be severely impaired, limiting a company’s commercial potential. Consequently, most of the participating SBIR agencies have established special programs or processes to mitigate the adverse effects of funding gaps. One such effort is DOD’s Fast Track Program, under which phase I award recipients who are able to attract third-party funding are given the highest priority in the processing of phase II awards. At DOE and NIH, phase I award recipients are allowed to submit phase II applications before completing phase I. NASA has established an electronic SBIR management system to reduce the total processing time for awards and is exploring the possibility of instituting a fast track program similar to DOD’s. The third phase of SBIR projects is expected to result in commercialization or a continuation of the project’s R&D. In 1991, we surveyed 2,090 phase II awards made from 1984 through 1987 regarding their phase III activity. In 1996, DOD conducted its own survey, which closely followed our format. DOD’s survey included all 2,828 of DOD’s SBIR projects that received a phase II award from 1984 through 1992. In analyzing the responses to our 1991 survey, we found that approximately half of the phase II awards reported phase III activity (e.g., sales and additional funding), while the other half reported none. (See table 1.) Overall, 515 responses, or 35 percent, indicated that their projects had resulted in sales of products or processes, while 691, or 47 percent, had received additional developmental funding. Our analysis of DOD’s 1996 survey responses showed that phase III activity was occurring at rates similar to those in our survey: 765 projects, or 53 percent, reported that they were active in phase III at the time of the survey, while the remainder reported no phase III activity. The DOD respondents indicated that 442 awards, or 32 percent, had resulted in actual sales, while 588 reported that the awards had resulted in additional developmental funding. Agencies are currently using various techniques to foster commercialization, although there is little or no empirical evidence suggesting how successful particular techniques have been.
For example, in an attempt to get those companies with the greatest potential for commercial success to the marketplace sooner, DOD has instituted a Fast Track Program, whereby companies that are able to attract outside commitments/capital for their research during phase I are given higher priority in receiving a phase II award. The Fast Track Program not only helps speed these companies to market but also helps them attract outside capital early and on better terms by allowing the companies to leverage SBIR funds. In 1996, for example, DOD’s Fast Track participants attracted $25 million in outside investment. Additionally, DOD, in conjunction with NSF and SBA, sponsors three national SBIR conferences annually. These conferences introduce small businesses to SBIR and assist SBIR participants in the preparation of SBIR proposals, business planning, strategic partnering, market research, the protection of intellectual property, and other skills needed for the successful development and commercialization of SBIR technologies. DOE’s Commercialization Assistance Program provides phase II award recipients with individualized assistance in preparing business plans and presentation materials for potential partners or investors. This program culminates in a Commercialization Opportunity Forum, which helps link SBIR phase II award recipients with potential partners and investors. NSF provides (1) its phase I award recipients with in-depth training on how to market to government agencies and (2) its phase I and II award recipients with instructional guides on how to commercialize their research. Similarly, NASA assists its SBIR participants through numerous workshops and forums that provide companies with information on how to expand their business. NASA also provides opportunities for SBIR companies to showcase their technologies to larger governmental and commercial audiences. Moreover, NASA has established an SBIR homepage on the Internet to help promote its SBIR technologies and SBIR firms and has used several of its publications to make SBIR companies’ technologies known to broader audiences. Using SBA’s data, we identified phase I award recipients that had received 15 or more phase II awards in the preceding 5 years. Using survey data from both GAO’s and DOD’s surveys, we compared the rates of commercialization, as well as the rates at which projects received additional developmental funding, for these multiple-award recipients and the non-multiple-award recipients. This comparison of phase III activity is summarized in table 2. The analysis shows that the multiple-award recipients and the non-multiple-award recipients are commercializing at comparable rates. According to both surveys, multiple-award recipients receive additional developmental funding at higher rates than the non-multiple-award recipients. However, the average levels of sales and additional developmental funding for the multiple-award recipients are lower than those for non-multiple-award recipients. When an agency funds research for a given solicitation topic for which only one proposal was received, it may appear that there was a lack of competition. The majority of the SBIR officials we interviewed indicated that receiving a single proposal for a given solicitation topic is extremely rare.
DOD reported that, from 1992 through 1996, there were only three instances in which a single proposal was submitted for a given solicitation topic, out of the 30,000 proposals received for various solicitations. DOD’s SBIR official stated, however, that none of these cases resulted in an award. Both DOE’s and NASA’s SBIR officials reported receiving no single-proposal topics during this period. Moreover, NASA’s SBIR officials stated that their policy is to revise a solicitation topic/subtopic that receives fewer than 10 proposals or to drop it from the solicitation. One of the purposes of the 1992 act was to improve the federal government’s dissemination of information concerning the SBIR program, particularly with regard to program participation by women-owned small businesses and by socially and economically disadvantaged small businesses. All of the agencies we reviewed reported participating in activities targeted at women-owned or socially and economically disadvantaged small businesses. All SBIR program managers participate each year in a number of regional small business conferences and workshops that are specifically designed to foster increased participation in the SBIR program by women-owned and socially and economically disadvantaged small businesses. The SBIR managers also participate in national SBIR conferences that feature sessions on R&D and procurement opportunities in the federal government that are available to socially and economically disadvantaged companies. Most of the SBIR agency officials we interviewed stated that they use the two listings of critical technologies, as identified by DOD and the National Critical Technologies Panel, in developing their respective research topics; the other agencies believed that the research being conducted falls within one of the two lists. At DOE, for example, research topics are developed by the DOE technical programs that contribute to SBIR. In DOE’s annual call for topics, SBIR offices are instructed to give special consideration to topics that further one or more of the national critical technologies. DOE’s analysis of the topics that appeared in its fiscal year 1995 solicitation revealed that 75 percent of the subtopics listed contributed to one or more of the national critical technologies. Likewise, NASA’s research topics, developed by its SBIR offices, reflect the agency’s priorities, which are originally developed in accordance with the nationally identified critical technologies. At DOD, SBIR topics that do not support one of the critical technologies identified by DOD will not be included in DOD’s solicitation. Both NIH and NSF believe that their solicitation topics naturally fall within one of the lists. According to NIH’s SBIR official, although research topics are not developed with these critical technologies in mind, the agency’s mission usually fits within them. For example, research involving biomedical and behavioral issues is very broad and can be applied to similar technologies defined by the National Critical Technologies Panel. NSF’s SBIR official echoed the sentiments of NIH: although NSF has not attempted to match topics with the listing of critical technologies, it believes that the topics, by their very nature, fall within the two lists. According to our 1991 survey and DOD’s 1996 survey, SBIR projects result in little business-related activity with foreign firms.
For example, our 1991 survey found that 4.6 percent of the respondents reported licensing agreements with foreign firms and that 6 percent reported marketing agreements with foreign firms. Notably, both types of agreements refer to activities in which the U.S. firm receives benefits from the SBIR technology while retaining rights to it. Sales of the technology or rights to the technology occurred at a much lower rate, 1.5 percent, according to our survey. The DOD survey showed similar results: less than 2 percent of the respondents had finalized licensing agreements with foreign firms, and approximately 2.5 percent had finalized marketing agreements with foreign firms. Sales of the technology, or of the rights to the technology developed with SBIR funds, occurred only 0.4 percent of the time. A recent SBA study stated that one-third of the states received 85 percent of all SBIR awards and SBIR funds. In fiscal year 1996, California and Massachusetts had the highest concentrations of awards: 904 awards for a total of $207 million and 628 awards for a total of $148 million, respectively. However, each state has received at least two awards, and in 1996 the total SBIR amounts received by states ranged from $120,000 to $207 million. The SBA study points out that 17 states receive the bulk of U.S. R&D expenditures, venture capital investments, and academic research funds. Hence, the study observes that the number of small high-tech firms in a state, its R&D resources, and venture capital are important factors in the distribution and success of SBIR awards. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions you or the members of the Committee may have.
| GAO discussed the Small Business Innovation Research (SBIR) program, focusing on: (1) agencies' adherence to statutory funding requirements; (2) agencies' audits of extramural (external) research and development (R&D) budgets; (3) the effect of the application review process and funding cycles on award recipients; (4) the extent of companies' project activity after receiving SBIR funding and agencies' techniques to foster commercialization; (5) the number of multiple-award recipients and the extent of project activity after receiving SBIR funding; (6) the occurrence of funding for single-proposal awards; (7) participation by women-owned businesses and socially and economically disadvantaged businesses; (8) SBIR's promotion of the critical technologies; (9) the extent to which foreign firms benefit from SBIR results; and (10) the geographical distribution of SBIR awards. GAO noted that: (1) agencies have adhered to the Small Business Research and Development Enhancement Act's funding requirements; (2) agency program officials reported that they are not using SBIR funds to pay for administrative costs of the program; (3) program officials believe that they are adhering to the statutory requirement to fund the program at 2.5 percent of agencies' extramural budget; (4) some officials believe that agencies are using different interpretations of the extramural budget definition, which may lead to incorrect calculations; (5) of the five agencies reviewed, only two have conducted audits of their extramural budgets; (6) while most SBIR officials interviewed said that neither the application review process nor current funding cycles have had an adverse effect on award recipients' financial status or ability to commercialize, some recipients have said that any interruption in funding awards affects them negatively; (7) most participating SBIR agencies have established programs to minimize funding gaps; (8) companies reported that approximately 50 percent of their projects had sales of products or services related to research or received additional developmental funding after receiving SBIR funding; (9) the agencies identified various techniques to foster the commercialization of SBIR-funded technologies; (10) the number of companies receiving multiple awards grew from 10 companies in 1989 to 17 in 1996; (11) multiple-award recipients and non-multiple-award recipients commercialized at almost identical rates; (12) agencies rarely fund research for a given solicitation topic where only one proposal was received; (13) of the five agencies examined, all reported engaging in activities to foster the participation of women-owned or socially and economically disadvantaged small businesses; (14) all agencies' SBIR officials believed that the listings of critical technologies are used in developing their respective research topics or that the research being conducted falls within one of the two lists; (15) there was little evidence of foreign firms benefiting from technology or products developed as a direct result of SBIR-funded research; (16) a Small Business Administration study reported that one-third of the states received 85 percent of all SBIR awards and funds; and (17) previous studies of SBIR have linked the concentration of awards to local characteristics. |
Beginning in the mid 1990s, more than 40 states and some localities sued tobacco companies, alleging that the industry violated antitrust and consumer protection laws, withheld information about the adverse health effects of tobacco, manipulated nicotine levels to keep smokers addicted, and conspired to hold back less risky and less addictive tobacco products from the market. In 1997 and 1998, four states—Florida, Minnesota, Mississippi, and Texas—settled their lawsuits by negotiating independent agreements with the tobacco industry. In November 1998, four of the nation’s largest tobacco companies—Philip Morris Incorporated, R.J. Reynolds Tobacco Company, Brown & Williamson Tobacco Corporation, and Lorillard Tobacco Company (referred to as the “original participating manufacturers”)— negotiated an agreement with the attorneys general of the remaining 46 states thereby settling a number of lawsuits brought by these states against these tobacco companies. The terms of this agreement, known as the Master Settlement Agreement (MSA), apply only to those tobacco companies and states that are parties to the agreement. Under the MSA, the tobacco companies are required to provide monetary relief to states in the form of annual payments and reimbursement for attorney fees. The MSA also imposes restrictions on the tobacco companies’ marketing and advertising practices. Furthermore, the MSA established a national foundation to support study and programs to (1) reduce youth tobacco use and substance abuse and (2) prevent diseases associated with tobacco use. Tobacco companies are required to provide funding for this foundation, as well as funding for the National Association of Attorneys General (NAAG), which is responsible for assisting states in the implementation and enforcement of the MSA. After the MSA was signed, each state had to take action to receive approval of the agreement from its respective state court in order to make the terms of the agreement legally binding within that state. Under the MSA, once state court approval was final, the state achieved “state-specific finality” status, thereby permitting that state to receive payments under the MSA. No state payments were to be released to any of the states, however, until the agreement reached final approval. This occurred in November 1999 when 80 percent of the states whose shares equaled 80 percent of the total settlement payments had achieved state-specific finality. In addition, to receive its full share of the settlement payments, each state was required by the MSA to enact a statute addressing the potential competitive advantage that tobacco companies not party to the MSA may experience. Under the MSA, if the aggregate market share of the tobacco companies that are party to the agreement (“participating manufacturers”) falls more than two percent below their base level of 1997 and the loss is caused in significant part by provisions of the MSA, the MSA payments may be reduced based on a formula that corrects for this market share loss. The MSA provided that individual states can avoid this downward adjustment—known as the “non-participating manufacturers” (NPM) adjustment—to their payments by enacting and enforcing a statute that is intended to prevent a competitive disadvantage for the participating manufacturers. 
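To make the trigger just described concrete, here is a minimal sketch in Python; the function name and inputs are hypothetical, we read "two percent" as two percentage points for illustration, and the MSA's actual reduction formula is more involved than this yes/no condition:

```python
# Minimal sketch (our construction) of the NPM adjustment trigger described
# above; the MSA's actual payment-reduction formula is more involved.
def npm_adjustment_may_apply(current_share: float,
                             base_share_1997: float,
                             loss_caused_by_msa: bool) -> bool:
    """Payments may be reduced only if the participating manufacturers'
    aggregate market share falls more than 2 points below the 1997 base
    AND the loss is caused in significant part by the MSA's provisions."""
    return (base_share_1997 - current_share) > 0.02 and loss_caused_by_msa

# A state that enacts and enforces a qualifying statute (e.g., the model
# statute) avoids the downward adjustment even when this trigger is met.
print(npm_adjustment_may_apply(0.92, 0.97, True))   # True: 5-point drop
print(npm_adjustment_may_apply(0.96, 0.97, True))   # False: within 2 points
```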
The MSA provided a model statute that, if enacted and enforced by a state, would protect that state from any adjustment for market share loss, although states are permitted to enact and enforce any statute that achieves the same desired result. The MSA also placed restrictions on the tobacco companies’ business practices, primarily in marketing targeted to youth, advertising, and lobbying. For example, the MSA banned all outdoor advertising by the tobacco companies, such as billboards and signs in arenas and stadiums, as well as sponsorship of sporting events with a significant youth audience. Moreover, the tobacco companies are prohibited from lobbying the state or any political subdivision against efforts to enact certain kinds of state laws and regulations intended to reduce underage tobacco access and use. Tobacco companies are not prohibited from lobbying against legislation that would raise excise taxes or restrict smoking in public places. The MSA also required the tobacco companies to pay a total of $50 million for enforcement activities, including state enforcement of the terms of the agreement and investigation of suspected violations of antitrust or consumer protection laws related to tobacco products. In addition, the MSA required the tobacco companies to fund a national foundation, the American Legacy Foundation, dedicated to discouraging youth tobacco use and to preventing disease associated with tobacco use by supporting study and education. The participating tobacco companies are required to pay a total of $1.45 billion over 5 years for the advertising and education programs (performed directly or through grant-making) aimed at countering youth tobacco use and informing consumers about prevention of tobacco-related diseases, and an additional $250 million over 10 years for other activities of the foundation. The MSA was preceded by a proposed national settlement between the states and the tobacco industry reached in June 1997. This earlier, more far-reaching proposal included payments to states and was a blueprint for a comprehensive national tobacco-control policy, including federal regulation and oversight. The June 1997 proposal could take effect only after federal legislation was enacted. Several comprehensive tobacco policy bills, including legislation to implement the June 1997 proposal, were introduced in the 105th Congress. However, only the National Tobacco Policy and Youth Smoking Reduction Act (S. 1415), introduced by Senator McCain, saw legislative action. The bill debated on the Senate floor provided new authority for the Food and Drug Administration to regulate tobacco products, measures to restrict tobacco industry marketing and advertising, and measures to reduce underage tobacco use. The bill also required up-front and annual payments by the tobacco companies to provide for settlement of relevant state lawsuits. These and other payments would be deposited into a fund for the benefit of states that settled their lawsuits against the tobacco companies and for the benefit of the federal government. When S. 1415 did not pass in the summer of 1998, states resumed negotiations with the tobacco industry, eventually resulting in the November 1998 Master Settlement Agreement. The MSA was a scaled-down version of the June 1997 proposal and did not require federal action to be implemented. This agreement did not resolve states’ uncertainty over whether the federal government might lay claim to a portion of the payments to the states.
In May 1999, Congress moved to resolve that uncertainty by enacting legislation that prohibited treating states’ MSA payments as federal overpayments for purposes of Medicaid. As of April 2001, 45 of the 46 states that signed the Master Settlement Agreement had received nearly $13.5 billion in payments from the tobacco companies. MSA payments to the states, some of which the states will receive in perpetuity, were originally estimated to total nearly $205 billion through 2025. There are different types of payments, the largest two of which are “initial” payments, made in five installments through 2003, and “annual” payments, which continue in perpetuity. Both of these types of payments are distributed based on “allocation percentages” for each state agreed to by the 46 state attorneys general when they negotiated the MSA. (See appendix IV for the types of MSA payments.) The final agreement resulted from negotiations that began with a formula. However, unlike many other legal settlements with a fixed level of compensation, the MSA payments, while based on set payment amounts, are adjusted for several factors, most notably the future sales of the tobacco industry. Each state’s payments are adjusted annually based on the participating manufacturers’ cigarette sales and market share, as well as inflation. Taken together, these adjustments reduced payments by about $1.6 billion between 1999 and 2001. The formula that provided the basis for determining the allocation percentages for the MSA payments was composed of two variables, each weighted equally: smoking-related Medicaid expenditures and smoking-related non-Medicaid health care costs of each state. The smoking-related health care cost variable included factors for each state’s population and smoking prevalence. After this initial formula was developed, negotiations resulted in some adjustments for state-specific concerns. For example, some smaller states argued that they should receive a larger percentage to enable them to fund smoking cessation programs because they did not have the same economies of scale as larger states. The negotiations resulted in the allocation percentages that are applied to each initial and annual MSA payment. In general, larger states receive a higher percentage of each payment and smaller states receive a lower percentage; however, because the allocation percentages were determined by negotiation, the payments are not strictly proportional to population. Table 1 shows the final state allocation percentages as explicitly agreed to in the MSA. Prior to the MSA, some counties in California and New York had independently filed lawsuits against the tobacco industry. In these states, the counties bear financial responsibility for a share of Medicaid costs, and the lawsuits sought compensation for the counties’ costs of treating smoking-related illnesses. In both of these states, under different arrangements, counties receive a share of MSA payments. The state of California had entered into a memorandum of understanding (MOU) with its counties and four major cities in August 1998—prior to the MSA—to coordinate their lawsuits with the state’s suit and provide for the allocation of any settlement. The terms of the MOU included an even 50/50 split of the financial recovery between the state and local governments, with the local share further split between the counties and four major cities. In California’s case, all MSA payments are made to the state, and the state distributes payments to the 58 counties and four cities.
(See appendix II for the counties’ and cities’ share of payments in California.) In the case of New York, the state’s consent decree provides for allocation of a portion of its MSA payments to the counties and New York City based on the county share of Medicaid costs and population as well as some specific considerations for individual counties. In New York’s case, each of the state’s 57 counties and New York City receive payment directly from the escrow account established by the MSA rather than the state receiving all payments and then distributing them to the localities. (See appendix III for the counties’ and New York City’s share of payments in New York.) As explained in the introduction to this report, this study focuses on how states are using their MSA payments, and we did not track the counties’ use of MSA payments. Currently, states receive two types of payments as a result of the MSA— annual payments and initial payments. Although there are several types of potential adjustments to the annual payments received by each state, the two most significant adjustments are a “volume adjustment” and an “inflation adjustment.” The volume adjustment is based on increases or decreases in the number of cigarettes shipped by the original participating manufacturers, and the inflation adjustment is set at the actual percentage increase in the Consumer Price Index (CPI) or 3 percent, whichever is greater. The terms of the MSA also call for states to receive five initial payments between 1998 and 2003. These initial payments are also subject to annual volume adjustments, but they are not adjusted for inflation after the first payment. (See appendix IV for a summary of payment types and amounts.) A third type of payment, known as the Strategic Contribution Fund (SCF) payment, will begin in 2008 and continue through 2017. The base amount of each year’s SCF payment is $861 million and will be adjusted for volume and inflation. SCF payments are intended to reflect the level of the contribution each state made toward final resolution of the state lawsuits against the tobacco companies and will be allocated to the states based on a separate formula developed by a panel of former state attorneys general. (See appendix V for estimated Strategic Contribution Fund payments to states.) Finally, tobacco growers and producers in states that grow cigarette tobacco also receive a fourth type of payment through a separate agreement, the National Tobacco Grower Settlement Trust Agreement, known as “Phase II.” The MSA required the tobacco companies to meet with the political leadership of states with grower communities to address the economic concerns of these communities. The Phase II agreement resulted from that requirement. (See appendix VI for information on the Phase II agreement.) This agreement is intended to provide compensation for financial losses due to the anticipated decline in cigarette consumption and payments to the trust fund are expected to total $5.15 billion over 12 years. This report does not track Phase II payments to states or the allocation of these payments. Table 2 summarizes the types of payments that states will receive as a result of the MSA and Phase II. (See appendix IV for estimated payment amounts for the first 25 years of the MSA.) States received their first MSA payments at different points in time based on the date the agreement became final in their state (referred to as having achieved “state-specific finality”). 
Forty-three states received their first payment in state fiscal year 2000. Arkansas and Tennessee received their first payments in fiscal year 2001. Because Missouri did not achieve state-specific finality until late April 2001, its payments were not included in the total payments received through April 2001. The first MSA payments were made in December 1999, and as of April 2001 all initial and annual payments combined totaled nearly $13.5 billion. States are not scheduled to receive any more payments until January 2002. California and New York have received the largest amounts so far—nearly $1.8 billion each. Together, six states received more than 50 percent of all MSA payments from 1999 through 2001: California, Illinois, Michigan, New York, Ohio, and Pennsylvania. Table 3 shows the breakdown of expected payments by state, as originally estimated at the time of the Master Settlement Agreement, as well as the actual payments received as of April 2001. As noted above, payments are adjusted for a number of factors, such as fluctuations in the volume of cigarette sales, inflation, and changes in participating manufacturers’ market share. The combined effect of all adjustments has been to lower payments by about $1.6 billion, or nearly 11 percent below the original estimate. The 45 states that had reached state-specific finality and received payments were originally estimated to receive $15.1 billion through April 2001 but actually received nearly $13.5 billion during this period—an overall reduction of about $1.6 billion. The adjustments varied by state, from a high of 26.6 percent in Pennsylvania to a low of 6.7 percent in Arkansas. Consumption has declined since the Master Settlement Agreement was signed in November 1998—by about 6.5 percent in 1999 alone—mostly because of one-time increases in cigarette prices that the tobacco companies implemented after the MSA took effect. Analysts project that total cigarette consumption will decline by an average of nearly 2 percent per year in the future; as a result, cigarette consumption is estimated to decline by 33 percent between 1999 and 2020. Declining consumption will result in lower MSA payments than originally expected. Offsetting the sales volume decline is the adjustment for inflation. The inflation adjustment equals the actual percentage increase in the CPI for the preceding year or 3 percent, whichever is greater. The effect of compounding, especially given that the payments are made in perpetuity, is significant. Assuming a 3-percent inflation adjustment and no decline in base payments, settlement amounts received by states would double every 24 years. Some analysts estimate that the positive inflation adjustments will be greater than any negative adjustments for consumption. Adjustments were also made for losses in participating manufacturers’ market share. The NPM adjustment encourages states to enact a model statute in order to receive their full share of MSA payments. Because they had not enacted a model statute by the end of 2000, 16 states had amounts withheld from their January 2001 payments. An independent auditor initially determines how much, if any, market share has been lost and reduces the MSA payments for this loss. However, amounts withheld from the payments are held in escrow pending a final determination by an independent team of economists as to whether the market share loss was a result of the MSA. As of April 2001, all states had enacted model statutes, so the NPM adjustment will not affect future payments.
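The volume and inflation adjustments described above can be sketched in a few lines; this is our illustrative construction, not the MSA's actual payment mechanics, which apply each year's own adjustments and additional factors. The 3 percent floor also explains the 24-year doubling figure, consistent with the rule of 72:

```python
# Illustrative model (ours) of the two main annual adjustments: a volume
# factor tied to cigarette shipments and an inflation factor floored at 3%.
import math

def adjusted_payment(base: float, volume_ratio: float,
                     cpi_growth: float, years_elapsed: int) -> float:
    inflation = max(cpi_growth, 0.03)            # 3 percent inflation floor
    return base * volume_ratio * (1 + inflation) ** years_elapsed

# With flat volume and the 3% floor, payments double in ln(2)/ln(1.03) years.
print(math.log(2) / math.log(1.03))              # ~23.4, i.e., about 24 years
print(adjusted_payment(1.0, 1.0, 0.02, 24))      # ~2.03x the base payment
```

Similarly, the starting point of the allocation formula discussed earlier, two equally weighted variables before the state-specific negotiating adjustments, can be written compactly; the symbols are our shorthand, not notation from the MSA:

```latex
% Sketch of the pre-negotiation allocation weighting (our notation).
% M_s: state s's smoking-related Medicaid expenditures
% H_s: state s's smoking-related non-Medicaid health care costs
\[
  \text{share}_s \;=\; \frac{1}{2}\,\frac{M_s}{\sum_j M_j}
                 \;+\; \frac{1}{2}\,\frac{H_s}{\sum_j H_j}
\]
```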
MSA payments are not the only source of tobacco-related revenue. State excise taxes on tobacco products represent a state-controlled source of tobacco-related revenue for all 50 states, although cigarette tax rates vary widely—from a low of 2.5 cents a pack in Virginia to a high of $1.11 in New York. The 46 MSA states collected nearly $7 billion in revenues in 2000 from excise taxes on cigarettes, which were not directly affected by the MSA. Between January 1999 and January 2001, four of the 46 MSA states—Louisiana, Maryland, New Hampshire, and New York—increased their tax rates on cigarettes. These increases drove the average cigarette tax rate in the 46 states up by about 5 percent over 2 years, from 39.8 cents in January 1999 to 41.8 cents in January 2001. Most state legislatures viewed the MSA payments as a discrete funding stream and engaged in a structured decision-making process to determine long-term uses for these revenues. Although most states will continue to appropriate MSA payments through an annual or a biennial budget process, those appropriations will be guided by long-term legislation earmarking the use of the funding stream for specific purposes. As part of the decision-making process, some states established planning commissions and working groups to develop recommendations that resulted in a strategic plan for the state’s use of the funds. In six states, voter-approved initiatives restricted the use of the funds. Forty-two of the 46 states have made decisions about the allocation of MSA payments, and in 30 of these states the legislature enacted laws to ensure that these payments are restricted or used for specific purposes. Of the states with these legislative goals, almost all established dedicated funds that separate the MSA payments from other state funding sources. New York did not establish dedicated funds but enacted restrictions on the use of the payments, which are deposited directly into the state’s general fund. Six states (Alaska, California, Georgia, Illinois, New Jersey, and Rhode Island) had not earmarked the payments deposited into the state’s general fund; in these states, decisions on uses of the MSA payments were made as part of the annual appropriations process. The states have engaged in a decision-making process involving considerable deliberation over the long-term use of MSA payments. In some states, a permanently established board or a special committee makes recommendations and oversees the use of a portion of the payments. Other states, including Maryland and Ohio, engaged in a comprehensive planning process to develop initial recommendations for use of the MSA payments. In Maryland, the Governor convened three task forces, each focused on one of three areas—smoking cessation, health, and agricultural initiatives. Composed of legislators, experts in the field, and community and business representatives in each of these areas, these groups developed recommendations for each program area. Each task force prepared an implementation plan and presented a final report to the Governor that was used to develop a 10-year budget proposal. In Ohio, a bipartisan task force composed of representatives from the legislature and the Governor’s administration developed recommendations that resulted in legislation creating a long-term plan for allocation of the MSA payments. The plan allocates the payments for specific purposes through fiscal year 2012 and establishes three new commissions and foundations.
The plan also requires the state’s Tobacco Oversight Accountability Panel to develop benchmarks for each of the seven dedicated funds that were created. In seven states, ballot initiatives were proposed by the legislature to restrict the use of some portion of the MSA payments, and in six of these states the proposals were approved by voter referendum. In Arizona and Arkansas, laws were enacted, and in Louisiana, Montana, Oklahoma, and Utah, constitutional amendments proposed by the legislature were approved. All of the ballot initiatives proposed the creation of dedicated funds to restrict at least a portion of the MSA payments. In some of the states, the ballot initiatives were supported by local health advocacy organizations. In four of these states (Arkansas, Louisiana, Montana, and Oklahoma), portions of the endowment funds are earmarked for tobacco control and health care programs. In Arizona, the ballot initiative dedicated the full amount of MSA payments to expanding eligibility for the state’s health insurance program. In Utah, an endowment fund was established, but the fund was not dedicated to any particular purpose. These initiatives become effective between fiscal years 2000 and 2002, and in some states the proportion of MSA payments allocated for specific purposes increases over the first few years of the agreement in order to reach a specified level of funding. In Oregon, two ballot initiatives proposed by the legislature were defeated by the voters in the November 2000 election. Both proposals would have dedicated all MSA payments to funds allowing only the earnings on the principal to be spent. One of the initiatives would have earmarked the MSA payments for the state’s health insurance program, maximizing funding for the State Children’s Health Insurance Program (SCHIP) in particular, and the other proposed allocating funds for health care and tobacco control as well as other social services. As both proposals were defeated, the decision over allocation of MSA payments was referred back to the legislature. Oregon, Pennsylvania, Tennessee, and Missouri had not reached decisions about the use of the payments as of April 2001. In Oregon, after the defeat of the two ballot initiatives, the Governor’s budget recommended earmarking the MSA payments for health care and tobacco control programs and establishing a dedicated fund for the majority of the payments. In Pennsylvania, the Governor submitted budget recommendations for the use of the payments, but the legislature had not acted on these proposals. The Governor’s proposed “Health Investment Plan” for the MSA payments presented principles developed with public input to guide use of the MSA payments and recommended dedicating the payments to health care and tobacco control programs. In Tennessee, the legislature earmarked the payments for two purposes—agriculture and health—and established two ad hoc committees to develop recommendations on the specific uses of the funds. The committees held public hearings, developed proposals for program oversight and funding, and presented their final reports in February 2001 for consideration by the General Assembly. Oregon, Pennsylvania, and Tennessee have placed their payments in holding accounts until final decisions are made. Missouri achieved state-specific finality in late April 2001 and had not received any MSA payments during the period of our study.
Thirty-six of the 46 MSA states established dedicated funds to separate at least a portion of the MSA payments from other state funds and dedicate their use to specific purposes. In many cases, both the principal and the investment earnings of these funds are available for expenditure, while in other cases only the earnings may be used. For simplicity, in this report we refer to the former as special funds and the latter as endowment funds. Endowment funds are intended to ensure a long-term source of funding for programs. In many cases, boards and/or commissions oversee these funds; some of these bodies make recommendations for the use of the funds, while others have the authority to make decisions and distribute the funds in keeping with the funds’ dedicated uses. Although over three-fourths of the states established dedicated funds for their MSA payments, only about 35 percent of the total payments were allocated to these funds during fiscal year 2001. Of this 35 percent, about 28 percent were in special funds and the remaining 7 percent in endowment funds. Table 4 shows the funds established by each of the states and how the fiscal year 2001 MSA payments in each state were allocated among fund types. In establishing dedicated funds, several state legislatures opted to delegate decision-making authority over use of the funds to boards and/or commissions. For example, Virginia created the Tobacco Indemnification and Community Revitalization Commission (TICR), composed of state legislators, agency heads, representatives of the agricultural community, and other citizens. While the MSA payments to the TICR fund may be used only for payments to tobacco farmers and economic development in tobacco communities, the Commission determines the specific allocations from the fund. In Oklahoma, voters approved the creation of the Tobacco Settlement Endowment Trust Fund, and a Board of Directors distributes the earnings of the fund among specified programs. Ohio created two new foundations that receive MSA payments, the Tobacco Cessation and Control Foundation and the Southern Ohio Agricultural and Community Development Foundation, each governed by a separate board of trustees. The Master Settlement Agreement does not require states to use the payments for any particular purpose, and states had varying views of the settlement payments. Because claims for compensation for past health care costs, including Medicaid, were the basis for many of the initial lawsuits filed by the states, many states gave high priority to the use of MSA payments for health-related funding and tobacco control programs. Some states also told us that they viewed the settlement payments as an opportunity to fund needs that they had previously been unable to fund because of the costs of health care. States’ other priorities and mandates included education, infrastructure projects, and funding budget reserves to be saved for future needs. As a result, the states’ total allocations fund a variety of programs. Figure 1 shows the major categories of states’ use of MSA payments. Our analysis of states’ use of MSA payments shows that during fiscal years 2000 and 2001, states allocated 7 percent of their payments to tobacco control efforts and another 6 percent to tobacco growers and economic development projects. The single largest category of funding was for health-related purposes.
Other major areas of funding included education and social services, infrastructure, and general purposes, including budget reserves. Finally, a substantial amount of the MSA payments was not allocated during the two fiscal years. States reported on a total of $11.6 billion in estimated MSA payments for fiscal years 2000 and 2001. (See appendix I for definitions of the allocation categories and a description of our methodology.) Table 5 shows the percentage of each state’s individual allocation to these categories. According to our analysis, in fiscal years 2000 and 2001, 36 states allocated $790 million of their MSA payments to tobacco control programs. The goal of these programs is to reduce tobacco use through various intervention strategies, including promoting smoking cessation and preventing youth from starting to smoke. The amounts of the state allocations to these programs varied widely. Approximately one-third of the states now require the development of a strategic plan for tobacco control. In allocating MSA payments for tobacco control programs, most states applied, to some extent, guidelines established by the Centers for Disease Control and Prevention (CDC). Tobacco control is one area where looking only at MSA payments can be misleading. While all of the 42 states that have decided on the use of the payments now provide state funding for tobacco control programs, two of these states, Arizona and California, fund these programs through state cigarette excise taxes rather than through their MSA payments. Sixteen other states reported that they provided state funding for tobacco control prior to the MSA. Further, although over one-quarter of the states with decisions on MSA payments allocated at least 10 percent of their MSA payments to tobacco control, most of these states had spent little or nothing on tobacco control programs prior to the settlement. Some states allocated payments for tobacco control but did not specify the amount for these programs. Table 6 summarizes the percentage of the states’ allocation of their MSA payments to tobacco control programs. For the most part, the states that dedicated larger percentages of their MSA payments to tobacco control were states that had spent little or nothing on such programs before the settlement. The MSA gave the 24 states that reported providing no state funds for tobacco control before the agreement an opportunity to initiate funding for these programs. Fourteen states said that the MSA payments have allowed them to develop and implement more comprehensive tobacco control programs. (See table 6 for these 14 states.) Ten states dedicated over 10 percent of their MSA payments to tobacco control. Of these states, only Washington had dedicated state funds to tobacco control prior to the MSA. Three of these states, Hawaii, Ohio, and Virginia, have established foundations to develop new tobacco control programs. Wyoming has dedicated its settlement payments to an endowment fund, and all of the interest in fiscal years 2000 and 2001 was allocated to tobacco control. Washington, New York, and Maryland are examples of states that used tobacco settlement payments to significantly expand existing programs. Washington allocated over 33 percent of its MSA payments to create a new $100 million trust fund dedicated to preventing and reducing tobacco use by youth; it had previously provided less than $1 million for enforcement activities.
Maryland provided $18.1 million, or 5.5 percent, of its settlement payments in fiscal year 2001 to fund a comprehensive tobacco control program and plans to meet the CDC guidelines in the future. Prior to the settlement, Maryland had allocated approximately $1.8 million in state funds for its tobacco control program. New York allocated $30 million, or 4.5 percent, of its MSA payments in fiscal year 2001 to expand its tobacco control program, which was previously funded with $2.5 million in state funds. New York also nearly doubled its cigarette excise tax to $1.11 from 56 cents a pack, with the proceeds of the tax increase designated for expansion of the state's health insurance and tobacco control programs. A CDC report entitled Best Practices for Comprehensive Tobacco Control Programs sets out nine essential elements for a comprehensive program and provides CDC's recommendations for an appropriate level of funding for each component based on specific characteristics of each state. Budget officials in 35 of the 46 MSA states told us that their state considered the CDC guidelines in determining how to allocate settlement funds. In four other states (Arizona, California, Massachusetts, and Oregon), budget officials said that their state did not apply the guidelines; however, these states' pre-existing tobacco control programs have been cited as model programs by the CDC. The CDC reports that six states in our study (Arizona, Indiana, Maine, Massachusetts, Ohio, and Vermont) are meeting or exceeding the lower estimate of their recommended funding range by combining state and federal resources and private grants. In addition, Hawaii (at 98 percent) came close to meeting the Best Practices lower funding recommendations. Of the states with model tobacco control programs, Arizona, California, and Oregon did not supplement their programs with allocations from the MSA payments. Officials in Arizona said that the state already spent $37.3 million from tobacco excise tax revenues in fiscal year 2001. California was the first state to establish a comprehensive tobacco control program funded by tobacco excise taxes, in 1989, and the excise tax provided $114.6 million for tobacco control in fiscal year 2001. Oregon spends approximately $8.5 million annually for tobacco control, and the governor has also proposed that part of the settlement be used to expand tobacco control programs. Massachusetts did provide additional funding for tobacco control and allocated a total of $31 million in MSA payments in fiscal years 2000 and 2001 to supplement its program, bringing the total annual allocation to $63.3 million in fiscal year 2001. Thirty-five states allocated a portion of their MSA payments for health-related purposes not specifically related to tobacco control, for a total of nearly $4.8 billion in fiscal years 2000 and 2001. These allocations include funding for Medicaid and SCHIP, mental health, substance abuse, public health, medical research, medical technology, and long-term care. The extent to which these states allocated MSA payments for health purposes varied considerably: 16 states allocated more than half of their payments to health care, and in several of these states health allocations composed more than 90 percent of the state's total MSA payments. California was the only state that allocated all of its payments to health programs in fiscal years 2000 and 2001. (See Table 5 for the share of each state's allocation to health care.)
Eighteen states reported that they have used MSA payments to increase enrollment in existing health insurance programs for low-income individuals, usually through Medicaid or SCHIP. In addition, several of these states have allocated their MSA payments to implement SCHIP for the first time (e.g., Hawaii, Montana, and Utah). Other states allocated payments for Medicaid and SCHIP but used these amounts for purposes other than expanding health insurance coverage, such as increasing services for existing beneficiaries, increasing reimbursement rates to providers, and providing prescription drug coverage for senior citizens. Arizona, California, and New York are examples of states that used MSA payments to significantly expand state health care programs. In all three states the health care expansion is expected to cost more than the state's total MSA payments, and the state plans to use other funding sources to fully fund the programs. In Arizona, a voter referendum dedicated all of the state's MSA payments to a large expansion of the Arizona Health Care Cost Containment System (AHCCCS)—Arizona's Medicaid program. Beginning in fiscal year 2002, eligibility for AHCCCS will be expanded to all people with incomes below 100 percent of the federal poverty level, increasing access for as many as 380,000 people. This expansion is expected eventually to cost as much as $140 million per year. California used all of its MSA payments, a total of $900 million, to expand the state's public health insurance programs. This expansion will encompass several programs and include services for all individuals eligible for SCHIP, enhanced Medicaid coverage for working families, and increased payment rates for providers who participate in the state's public health insurance programs, including Medicaid. Similarly, New York enacted a new Health Care Reform Act (HCRA 2000) and dedicated $388 million to create a new, comprehensive program for the uninsured. This program, called "Healthy New York," which will eventually receive 70 percent of the state's annual MSA payments, encompasses several initiatives, including expansion of SCHIP to include parents of children already covered by the program; increases in Medicaid eligibility to include families with incomes below 150 percent of the federal poverty level and individuals with incomes below 100 percent of the federal poverty level; and health insurance subsidies for certain individuals, families, and small businesses. Seven of the 13 tobacco states allocated $651 million of their MSA payments for assistance to tobacco growers and/or economic development projects. Because most tobacco farming and manufacturing jobs are concentrated in regions in just a few states, declines in tobacco consumption could result in job losses in all sectors of the economy of these areas. To help mitigate these economic consequences, these states allocated a total of 14 percent of their MSA payments to fund projects aimed at stabilizing the economy of the tobacco regions within the state and 7 percent for direct payments to tobacco growers. North Carolina, Kentucky, and Virginia, which produce 74 percent of the country's tobacco crop, allocated MSA payments for both of these purposes.
Of the six tobacco states that did not allocate payments for either of these purposes, Indiana and West Virginia produce a relatively small share of the country's tobacco, South Carolina plans to allocate payments for these purposes in the future, and the remaining three states had either not received MSA payments or not made a decision on the use of their payments. Table 7 shows the percentage of each state's MSA payments allocated for each of these purposes in fiscal years 2000 and 2001. Six tobacco states allocated MSA payments for economic development projects, mostly in the tobacco regions of these states, in order to ease the burden of declining tobacco production. North Carolina and Kentucky, the two largest producers of tobacco, each allocated substantial amounts of their MSA payments for economic development, whereas Ohio and Alabama, which produce a much smaller amount of tobacco, allocated a relatively small percentage of their payments for this purpose. The tobacco states have taken different approaches to assisting the regions that will be most affected by declines in tobacco consumption. Several tobacco states used MSA payments to offer educational assistance, such as scholarships to community colleges and job training for tobacco growers, to help growers transition to other careers. Several states also funded research projects to identify new uses for tobacco or other cash crops that farmers could grow instead of tobacco. In addition, several states used MSA payments to provide economic incentives to help develop the economy of rural tobacco regions. For example, Alabama securitized a portion of the MSA payments to finance economic development projects including construction of an automobile manufacturing plant. While some initiatives focused on tobacco regions, others were broader. Some states used MSA payments for statewide agricultural priorities that affect tobacco growers indirectly. Georgia, for example, used payments for rural sewer and water projects. Kentucky and North Carolina both allocated substantial amounts for economic development in tobacco regions. Kentucky established the Agricultural Development Fund, which received 35 percent, or $87 million, of the state's MSA payments. Kentucky plans to provide a variety of economic assistance programs to the state's agricultural community, including programs that will provide business development and technical assistance to farmers and distribute funds for farm diversification, cooperative development, marketing, and new product development. North Carolina allocated 50 percent of its MSA payments, a total of $168 million, to projects directed at areas whose economy is dependent upon tobacco production. Specifically, the state created the Golden LEAF (Long-term Economic Advancement Foundation) to provide economic assistance to tobacco-dependent regions of North Carolina. The Golden LEAF will fund a range of programs including education, job training and employment, scientific research to develop new uses for tobacco or alternative cash crops, and recruitment of new industries to rural areas of the state. In December 2000, the foundation awarded $5 million in grant funds for 39 projects. Four tobacco states allocated MSA payments for direct payments to tobacco growers. Maryland is the only state that offered to pay farmers specifically to stop growing tobacco; Kentucky and Virginia provided subsidies or direct payments to tobacco farmers with no strings attached.
North Carolina has not yet allocated specific amounts for direct payments, but its program will not require farmers to cease or reduce tobacco production. Maryland and Virginia provide an illustration of two states with different levels of tobacco production and different approaches to using their MSA payments for assistance to tobacco farmers. Maryland convened a special task force that developed a long-term plan with two main components: a tobacco buyout and a tobacco transition program. Both of these programs are designed to encourage farmers to cease tobacco production but to remain in the agriculture business. Only the buyout program was operational in fiscal years 2000 and 2001, and Maryland allocated a total of $11.5 million for this program. Payments will be based on the growers' recent tobacco production, and participants will receive payments based on this level of production for a period of ten years to ease the transition to other crops. The state's program requires participants to agree both to permanently cease production of tobacco for cigarettes and other personal consumption and to keep the land in agricultural production for ten years. The property must also carry a deed prohibiting, in perpetuity, the production of tobacco for cigarettes and personal consumption. In contrast, Virginia focused on compensation rather than on reducing production. Virginia allocated 35 percent of its MSA payments, a total of $102 million, for direct payments to tobacco growers. These subsidies are not designed to encourage growers to end tobacco production but are intended to compensate tobacco growers for their business losses, such as investments in specialized tobacco equipment and lost production opportunities associated with declines in the demand for tobacco. State budget officials said that they used MSA payments to fund other needs and priorities in addition to tobacco control, health care, and assistance to tobacco farmers and communities. For example, education and infrastructure were areas of long-term need for which additional funding had not been available in some states prior to the MSA. In other cases, states did not make decisions on the use of all of their MSA payments during the period of our study. In fiscal years 2000 and 2001, states left 20 percent of their total MSA payments unallocated and allocated another 26 percent for other priorities such as education and social services, infrastructure projects, and general purposes including budget reserves, attorneys' fees, and amounts not earmarked for any specific purpose. (See Table 5 for each state's allocations to each of these categories.) States allocated over $1 billion of their MSA payments to education and social services, including programs for children and senior citizens. Of this amount, 12 states allocated $848 million in MSA payments to education. This category included allocations for preschool and daycare programs, elementary and secondary education (grades kindergarten through 12), and higher education. Louisiana and Maine allocated MSA payments to preschool and daycare programs, such as Head Start. Nine states (Colorado, Connecticut, Kentucky, Louisiana, Maryland, Montana, New Hampshire, North Dakota, and Ohio) allocated funds to local districts for a range of purposes including upgrading technology, increasing teachers' salaries, enhancing teacher training, and augmenting special education programs.
Seven states (Connecticut, Kentucky, Louisiana, Maryland, Michigan, Nevada, and Ohio) allocated funds for higher education programs at colleges, universities, and community colleges, and some of these allocations included funding for new college scholarship programs. In the area of elementary and secondary education, New Hampshire allocated 96 percent of its MSA payments, $92 million, in order to comply with a state court decision on funding of the state's public schools. In 1997, the New Hampshire Supreme Court ruled that the state's reliance on local property taxes to fund nearly 90 percent of the cost of public education placed a disproportionate burden on residents in districts with low property values. Prior to the MSA, the state had attempted to address the court decision by increasing statewide property taxes, but the court subsequently ruled that the plan to phase in the property tax increase in certain districts with higher property values was unconstitutional. As a result, New Hampshire relied on MSA payments as a source of additional funding for local school districts. Michigan focused on higher education and created a program that will allocate 75 percent of its MSA payments, beginning in fiscal year 2002, to provide college scholarships for high school students who achieve certain scores on statewide examinations. Officials told us that this program was a long-time priority for Michigan's Governor, but prior to the MSA payments the state did not have sufficient resources available to fund the program. Students received grants for the first time in fall 2000, totaling $60 million. Under the program, high school juniors and seniors who pass an assessment test may receive a one-time $2,500 grant to pay for college. Also, students currently in grades 7 and 8 who pass the test may receive a $500 grant when they go to college, in addition to the $2,500 grant. Students have up to seven years from the time they graduate to claim their grants. Kansas (28 percent) and Alabama (45 percent) each allocated a substantial portion of their MSA payments to children's programs. These states funded programs for children in a variety of areas, including health and education, for services such as immunizations, after-school activities, mentoring efforts, and research, but they did not specify the precise amounts allocated to each of these areas. Kansas established the Kansas Endowment for Youth (KEY) Fund, which will be invested to provide a permanent source of funding for children's programs. In fiscal years 2000 and 2001, the state allocated a total of $55 million from this fund for at-risk youth, prenatal care, parent education, pediatric biomedical research, and school violence prevention. Beginning in fiscal year 2001, Kansas will direct all of its MSA payments to KEY, and a set percentage of the fund will be allocated for children's programs each year. Similarly, Alabama allocated over $100 million of its MSA payments to its Children First Trust Fund. According to a state official, the Governor and legislature felt there was a need for new programs serving children and adolescents, but because Alabama earmarks nearly all of its revenue, little funding was available for new programs. The MSA payments provided the state with a new funding source. Alabama's trust fund was used to pay for programs including school safety, foster care, juvenile justice, teen pregnancy, literacy, and drug and alcohol abuse.
Ten states allocated $294 million for physical infrastructure purposes. States dedicated MSA payments to four types of physical infrastructure: health care, long-term care, and retirement facilities; education facilities; water and transportation projects; and municipal and state buildings. Arizona, Arkansas, Colorado, Indiana, and Massachusetts allocated payments to construction and renovation of health facilities such as hospitals, medical research facilities, home health centers, and retirement facilities for veterans. In addition, Arkansas, New Jersey, and Ohio allocated payments for constructing, upgrading, and/or remodeling schools and universities. Louisiana and North Dakota allocated MSA payments for transportation and water projects. Finally, Illinois and Louisiana used payments to improve municipal and state buildings. Both North Dakota and Ohio are examples of states that plan to allocate millions annually to infrastructure projects. North Dakota enacted legislation placing 45 percent of the state's annual MSA payments in a water management trust fund dedicated to projects related to the state's long-term water development and management needs. Also, the fund will be used to repay bonds the state issued to finance several flood control projects, the Southwest Pipeline project, and a lake outlet project. Ohio used 18 percent of its allocations, or $138 million, for school construction, which has been a recent priority in the state. The state created two dedicated funds—an endowment to provide a permanent source of revenue for capital projects for education and a trust fund to begin funding construction and renovation projects for elementary and secondary schools. Only Connecticut and Illinois used MSA payments explicitly to fund tax reductions, but the total amounts they allocated for this purpose were large, amounting to 4 percent of the total MSA allocations for all states. Connecticut used a total of 38 percent, or $50 million per year, of its MSA payments for property tax reductions. Illinois used 50 percent of its MSA payments, $316 million, for an earned income tax credit and a one-time property tax reduction. For both states, these were part of a series of recent tax reductions. States allocated $1.2 billion for budget reserves and other general purposes. Of this amount, $602.8 million was allocated for state budget reserves or rainy day funds, which act as state savings accounts, allowing states to save for a future economic downturn or emergency. Nine states (Delaware, Hawaii, Illinois, Louisiana, Montana, New Jersey, New Mexico, New York, and Oklahoma) allocated MSA payments to reserves. Budget officials in five of these states told us that their state made one-time deposits to a rainy day or reserve fund and does not plan to allocate further payments for this purpose. New York made a one-time allocation of 37 percent of its MSA payments to the state's Debt Reduction Reserve Fund. Hawaii and New Mexico took unique approaches to making allocations to budget reserves. Hawaii plans to allocate 40 percent of its MSA payments each year to a new rainy day fund that was established as a result of MSA payments; prior to the settlement, the state did not have a rainy day fund. New Mexico created a special long-term reserve fund that is distinct from a rainy day fund. New Mexico devotes 50 percent of its MSA payments to a special "permanent fund," which is intended to be a long-term savings fund for the benefit of future generations. New legislation would be required to access this fund.
New Mexico had other permanent funds with assets totaling more than $12 billion. Sixteen states allocated $623 million of the MSA payments for other general purposes. This category includes allocations to the state's general fund—not earmarked for any particular purpose—and some allocations for other specific purposes such as attorneys' fees. In most cases, if MSA payments were deposited into the general fund, states could not tell us the purposes for which the payments were used. Iowa, Kansas, Oklahoma, and Wisconsin made one-time transfers to their general funds, and some of these deposits were a substantial portion of the states' MSA payments. For example, Kansas made a one-time transfer of $70 million, or 56 percent of its MSA payments, to cover revenue shortfalls. Other states decided to allocate set amounts annually to their general fund and to make decisions about the use of these payments on a year-by-year basis. For example, Virginia allocated 40 percent of its MSA payments each year—over $115 million in fiscal years 2000 and 2001—to its general fund. Rhode Island allocated all of its MSA payments, $100 million in fiscal years 2000 and 2001, to its general fund, and the state plans to continue this practice in the future. Some states' allocations for general purposes reflected payments for attorneys who worked on tobacco lawsuits; in most cases, these amounts represented a relatively small percentage of MSA allocations. Maryland is unusual among these states in that it has reserved 25 percent of all MSA payments pending resolution of a dispute over attorneys' fees. State officials told us that prior to the MSA, Maryland entered into a contract with a private attorney for a fee equal to 25 percent of the state's share of the settlement. Because the MSA provides for payment of attorneys' fees, this agreement has been contested, and the funds have been set aside until the case is resolved. More than $2 billion of the MSA payments in fiscal years 2000 and 2001 remained unallocated as of April 2001. The 15 states with unallocated funds cited different reasons. In some states there is a one-year lag between the time the state receives the MSA payments and the time it allocates them for specific purposes. These states followed a practice of allocating only the MSA payments received in the previous fiscal year. In other states, a portion of the MSA payments remained unallocated after the appropriations process, leaving these amounts available for appropriation in future years pending decisions by each state's governor and legislature. In Hawaii, state law established a ceiling on the amount of MSA payments available for use; as a result, only a portion of the dollars could be distributed to specific funds during fiscal years 2000 and 2001. Because the state's total MSA payments exceeded the limit, nearly $34 million in unallocated MSA payments will be available for appropriation in the future. Idaho, South Dakota, and Utah decided to distribute large portions of their settlement payments to endowment funds not designated for any particular purpose. South Dakota created a People's Trust Fund into which all of the state's MSA payments are deposited. The legislation creating the People's Trust Fund did not dedicate the fund to any particular purpose, and only the interest is available to be spent. Similarly, Idaho enacted legislation requiring that all MSA payments be deposited into the Millennium Trust Fund, which is invested but does not have any specified purpose.
Each year, the earnings on the fund may be appropriated without restrictions. This endowment fund is simply intended to provide a continuous source of funding for state programs. Three states—Oregon, Pennsylvania, and Tennessee—had not made final decisions about the allocation of their MSA payments as of April 2001. In addition, Missouri had not received any MSA payments because it did not reach state-specific finality until late April 2001. MSA payments have also been used to back bonds, a practice known as "securitization." Securitization is a type of structured financing based on the cash flow of receivables or rights to future payments. Securitization structures are different from traditional public finance, and the resulting bonds are sold differently from traditional municipal bonds. In the process of securitizing, state and local governments sell their tobacco settlement revenue stream to a special purpose entity (SPE) established for the purpose of issuing bonds backed by these funds and paying the debt service on the bonds. The SPE is designed to be legally separate and "bankruptcy remote" from the government entity. This means that the credit rating for these bonds is separate from the state or local government's rating and is based on the creditworthiness of the tobacco industry and the structure of the financing. The government entity does not bear financial responsibility for the bonds, and the purchasers of the bonds bear any risk that the bonds will not be repaid. The interest paid on the bonds issued through securitizing the MSA payments may be either subject to federal and state income taxes or exempt from such taxes, depending on a number of factors including the intended use of the proceeds. Securitization allows states to receive funds up front rather than over time as MSA payments are made according to the terms of the agreement. States have securitized to finance one-time expenses such as capital projects, paying down existing state or local debt, or establishing an endowment with a large initial amount. States have considered their overall needs in deciding whether to securitize the tobacco settlement revenues. Three states—Alabama, Alaska, and South Carolina—and many counties in New York State have already securitized a portion of the expected revenue stream, and ten additional states told us that securitization was under consideration. Budget officials in other states said that their states have rejected the option of securitizing these assets but that securitization may be considered again in the future.
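To make the up-front-versus-over-time tradeoff concrete, the following minimal sketch estimates the lump sum a pledged payment stream could support at different discount rates. All figures are hypothetical and illustrative only; actual bond proceeds depend on the deal structure and on investors' assessment of tobacco-industry credit risk, which this report does not quantify.

# Illustrative only: hypothetical payment stream and discount rates,
# not figures from any state's actual securitization.

def present_value(annual_payment, rate, years):
    # Present value of a level annual payment stream (ordinary annuity).
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

annual_payment = 50_000_000   # hypothetical annual MSA payment pledged to the SPE
years = 25                    # hypothetical pledge period

for rate in (0.05, 0.07, 0.09):
    pv = present_value(annual_payment, rate, years)
    print(f"discount rate {rate:.0%}: about ${pv / 1e6:,.0f} million up front "
          f"versus ${annual_payment * years / 1e6:,.0f} million paid over {years} years")

A higher discount rate (that is, greater perceived risk that future tobacco payments will shrink) lowers the amount a government can raise today, which is one reason states weighed their overall needs before deciding whether to securitize.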
Alabama and South Carolina, two tobacco states, securitized a portion of their MSA payments to finance economic development projects. Alabama was the first state to securitize MSA payments through an SPE, in September 2000. The Alabama 21st Century Authority was created to issue bonds for the purpose of promoting economic and industrial development, and it issued $50 million of tax-exempt bonds to fund an automobile manufacturing plant. In South Carolina, the Tobacco Settlement Revenue Management Authority was created to issue bonds to establish four dedicated funds for specific purposes. Two of the funds, which will receive 25 percent of the proceeds of the bond issue, will be used to provide economic assistance: one will be used primarily to develop the state's water and wastewater infrastructure, and one will be used to compensate individuals for losses in tobacco production. In addition, 73 percent of the proceeds of the bond issue in South Carolina will be used to fund a variety of health care programs. South Carolina's bond issue is the largest securitization of MSA payments to date and the first to issue taxable bonds for a portion of the transaction. New York City was the first locality to securitize MSA payments, followed by several of the largest counties in New York State, including Erie, Monroe, Nassau, and Westchester. In addition, 17 counties in New York participated in a pooled transaction, and additional counties plan to participate in a future pooled transaction. All but two of these counties (Nassau and Westchester) have used the proceeds of the securitization to pay down their debt. For these counties, reducing their total debt has in turn allowed them to improve their individual credit ratings. Westchester County did not issue bonds to pay down existing debt but rather decided on a one-time securitization transaction to pay off its ten-year transitional obligation to subsidize the county medical center. New York City established the Tobacco Settlement Asset Securitization Corporation (TSASC), which issued bonds to finance capital projects including school construction. The City has been constrained by the state's constitutional debt limit for some time and has capital needs that are greater than the debt limit allows. Selling the MSA payment stream to TSASC and issuing bonds for a portion of the future payments allowed the City to proceed with its capital program. To ensure that MSA payments were used to expand or establish new programs, 16 states enacted legislation including a requirement that MSA payments be used to supplement rather than to replace or supplant existing state funding. The restrictions on supplantation are intended to help ensure that existing state funding will not be reduced and that MSA payments will increase the total amount of funding for selected programs. These restrictions apply to the portion of the state's MSA payments that are deposited in dedicated funds established by states. The majority of these provisions apply to funds earmarked for health care and tobacco control programs. In a few states, these provisions apply to other uses such as education, social services, and agriculture. For example, in Maryland the provision applies to all MSA payments that are earmarked for three purposes—smoking cessation, health, and agriculture. In Louisiana, the legislation requires that MSA payments allocated for education be used to supplement rather than replace existing state funding. (See appendix VII for a summary of the states' restrictions against supplantation.) While the remaining states did not enact specific provisions, budget officials in 15 of these states reported to us that it was their policy not to supplant pre-existing funding with MSA payments. During state fiscal years 2000 and 2001, most states allocated at least some portion of their MSA payments for tobacco control and health care while also considering other state budget needs. Many tobacco states responded to the demands for assistance to tobacco growers and economic development by providing funding in those areas. Other needs such as education, infrastructure, and budget stabilization were also priorities in several states, and a large portion of the MSA payments was not allocated in the two fiscal years of our study.
Consistent with the long-term nature of the MSA payments, states developed plans for the payments, including enacting laws and establishing dedicated funds that earmark their future use. Although these plans are intended for the long term, they may be affected by fluctuations in state budget conditions. When the states first began receiving and planning for the use of their MSA payments for fiscal years 2000 and 2001, they were budgeting during a period of projected surpluses. Most states had the budgetary resources to fund mandated needs from other state revenues, allowing them to dedicate the settlement payments to expansions in health care, tobacco control, and other new projects. As the forecasts for state budgets begin to change, states may be faced with more difficult choices in determining the uses of their MSA payments for the near future. The earmarking of the payment stream may have the effect of subsidizing state programs if states reduce their own funding in these areas. States that included provisions against supplantation when they created dedicated funds for the MSA payments, or that established endowment funds that prevent the use of the principal, have developed some protection against using the payments to subsidize state programs. States' future decisions over the use of the MSA payments will likely require balancing state-specific priorities and needs within the context of overall budget conditions. As agreed with your office, unless you release this report earlier, we will not distribute it until 30 days from the date of this letter. At that time, we will send copies to relevant congressional committees and subcommittees and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions concerning this letter, please contact me at (202) 512-9573. Key contributors to this assignment were Thomas James, Amelia Shachoy, John Forrester, Carol Henn, Rosellen McCarthy, Brady Goldsmith, and Thomas Yatsco. This review focused on states' use of payments received under the Master Settlement Agreement (MSA) for state fiscal years 2000 and 2001. We collected and analyzed budget-related and legislative documents and interviewed officials from the executive budget offices on the plans for use of the MSA payments in the 46 states that were a party to the MSA. In some cases, our discussions included officials from the state attorney general's office, the governor's office, and the state agency responsible for tobacco control programs. We also reviewed previous GAO reports and other recent reports and studies, and we spoke with representatives from the organizations conducting these studies. We spoke with experts to obtain background information on specific issues covered in this report, such as the legal provisions of the MSA and securitization of MSA payments. We conducted our work from July 2000 through April 2001 in accordance with generally accepted government auditing standards. We obtained information on the states' plans for MSA payments through state fiscal year 2001. We conducted our work and collected information for fiscal year 2001 while the fiscal year was in progress and states were at various stages in the process of planning for the use of their payments. Because we completed our fieldwork in April, we did not obtain final information for the fiscal year, which for most states ends on June 30.
In order to present as comprehensive a review as possible, we report on the total amounts planned for by states even if final decisions were not made or all amounts were not appropriated by the legislature. We refer to these total amounts planned for and reported by states as "allocations." State allocations for fiscal years 2000 and 2001 totaled $11.6 billion. While we did gather budget documentation on states' plans, we did not verify the accuracy of the data reported by states. For informational purposes, we also obtained data on actual MSA payments made by the tobacco companies to states, which totaled $13.5 billion through April 2001. Most states developed plans and allocated dollars based on estimated payments for the fiscal year. Because the payments made by the tobacco companies are subject to adjustments that are not determined until the payments are made, actual payments received by the states differed from estimated payment amounts and from the states' allocations of $11.6 billion. The major difference between the $13.5 billion in payments received and the $11.6 billion in states' allocations is the payments to the counties in California and New York. These payments were reported in the total payments to those states but were not included in the total allocations for those states. Our study tracked only the states' use of MSA payments and not the allocations of the counties' share of the payments. To standardize the information reported by the 46 states, we developed categories for the program areas to which states allocated their MSA payments. (See the definitions of these categories below.) We used states' descriptions of their programs to categorize the $11.6 billion in allocations according to these definitions. In cases where no final decision had been made on the allocation of the payments, we reported these amounts in the "unallocated" category. In cases where the total amount had not been appropriated by the legislature but the funds had been earmarked for a particular purpose (e.g., health), we reported the allocation amounts in the category for which they had been earmarked. We used this method to categorize all allocations, including those to dedicated funds and states' general funds. Except where noted in examples of individual states' allocations, for the purposes of our analysis we combined the states' allocations for fiscal years 2000 and 2001 and reported on the total for the two-year period.
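The categorization approach described above amounts to assigning each state-reported allocation to exactly one category and combining the two fiscal years. A minimal sketch of that bookkeeping follows; the records shown are hypothetical placeholders, since the report's actual figures came from state budget documents rather than from any dataset reproduced here.

from collections import defaultdict

# Hypothetical records: (state, fiscal year, category, amount in dollars).
# Each allocation is assigned to exactly one category per the definitions below.
allocations = [
    ("State A", 2000, "Health", 120_000_000),
    ("State A", 2001, "Tobacco Control", 15_000_000),
    ("State B", 2000, "Unallocated", 40_000_000),
    ("State B", 2001, "Education", 25_000_000),
]

totals = defaultdict(float)
for state, year, category, amount in allocations:
    totals[category] += amount  # fiscal years 2000 and 2001 are combined

grand_total = sum(totals.values())
for category, amount in sorted(totals.items()):
    print(f"{category}: ${amount / 1e6:.0f} million ({amount / grand_total:.0%} of total)")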
Economic Development for Tobacco Regions: This category comprises amounts allocated for economic development projects in tobacco states such as infrastructure projects, education and job training programs, and research on alternative uses of tobacco and alternative crops. This category includes projects specifically designed to benefit tobacco growers as well as economic development that may serve a larger population within a tobacco state.

Education: This category comprises amounts allocated for education programs such as day care, preschool, Head Start, early childhood education, elementary and secondary education, after-school programs, and higher education.

General Purposes: This category comprises amounts allocated for attorneys' fees and other items, such as law enforcement and community development, that could not be placed in a more precise category. This category also includes allocations to the state's general fund that were not earmarked for any particular purpose.

Health: This category comprises amounts allocated for direct health care services, health insurance including Medicaid and the State Children's Health Insurance Program (SCHIP), hospitals, medical technology, public health services, and health research.

Infrastructure: This category comprises amounts allocated for capital projects such as construction and renovation of health care, education, and social services facilities, water and transportation projects, and municipal and state government buildings.

Social Services: This category comprises amounts allocated for social services such as programs for the aging, assisted living, Meals on Wheels, drug courts, child welfare, and foster care. This category also includes allocations to special funds established for children's programs.

Payments to Tobacco Growers: This category comprises amounts allocated for direct payments to tobacco growers, including subsidies and crop conversion programs.

Reserves/Rainy Day Funds: This category comprises amounts allocated to state budget reserves such as rainy day and budget stabilization funds not earmarked for specific programs. Allocations to reserves that are earmarked for specific areas are categorized under those areas (e.g., health).

Tax Reductions: This category comprises amounts allocated for tax reductions such as property tax rebates and earned income tax credits.

Tobacco Control: This category comprises amounts allocated for tobacco control programs such as prevention (including youth education), enforcement, and cessation services.

Unallocated: This category comprises amounts not allocated for any specific purpose, such as amounts allocated to dedicated funds that have no specified purpose; amounts states chose not to allocate in the year MSA payments were received that will be available for allocation in a subsequent fiscal year; unallocated interest earned from dedicated funds; and amounts that have not been allocated because the state had not made a decision on the use of the MSA payments.

Total payments in California are allocated 50 percent to the state and 50 percent to local governments. The 58 counties receive 90 percent of the local share, to be distributed based on population, and the remaining 10 percent is split equally among four cities: Los Angeles, San Diego, San Francisco, and San Jose. Total payments in New York are allocated 51 percent to the state and 49 percent to the 57 counties and New York City. Allocation to the counties is based on the county share of Medicaid costs and population, along with some specific considerations for individual counties. The Strategic Contribution Fund payments (made from 2008 through 2017) are intended to reflect the level of the contribution each state made toward final resolution of the state lawsuits against the tobacco companies and will be allocated to states based on a separate formula developed by a panel of former state attorneys general.
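As a worked illustration of the California and New York sharing formulas just described (the payment amount is hypothetical, chosen only to make the arithmetic easy to follow):

\begin{align*}
\text{California, hypothetical \$100\text{M} payment:}\quad
& \text{state share} = 0.50 \times \$100\text{M} = \$50\text{M} \\
& \text{county share} = 0.90 \times \$50\text{M} = \$45\text{M (distributed by population)} \\
& \text{per-city share} = (0.10 \times \$50\text{M}) / 4 = \$1.25\text{M each} \\
\text{New York, hypothetical \$100\text{M} payment:}\quad
& \text{state share} = 0.51 \times \$100\text{M} = \$51\text{M} \\
& \text{counties and New York City} = 0.49 \times \$100\text{M} = \$49\text{M}
\end{align*}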
The Master Settlement Agreement (MSA) required the tobacco companies to meet with the political leadership of states with grower communities to address the economic concerns of these communities. The National Tobacco Grower Settlement Trust Agreement, referred to as Phase II, resulted from that requirement and is intended to compensate tobacco growers and quota owners for potential reductions in their tobacco production and sales resulting from the MSA. The Phase II agreement was reached in July 1999 between the four major tobacco companies and the 14 states that produce and manufacture tobacco used for cigarettes. The agreement includes the 13 tobacco states that are a party to the MSA and the state of Florida, which reached an earlier, independent settlement with the tobacco industry. Tobacco production has remained principally in the southeastern states. Because most tobacco farming and manufacturing jobs are concentrated in this region, any declines in tobacco consumption could result in job losses in all sectors of the economy of this area. The Phase II agreement was intended to help mitigate any such consequences. The Phase II agreement requires the tobacco companies to make payments to the National Tobacco Grower Settlement Trust each year for a period of 12 years, beginning in 1999 and continuing through 2010. The trust is administered by a trustee, and payments are distributed from the trust directly to tobacco growers and quota owners in the states that are a party to the agreement. Each state's growers and quota owners receive a fixed percentage of the payments from the trust. This percentage was calculated either on the basis of the 1998 basic quota for production of cigarette tobacco or, in states where no quota existed, 1998 production of tobacco for cigarettes. Three states—Kentucky, North Carolina, and Tennessee—are the largest producers of cigarette tobacco in the country, and growers and quota owners in those states receive over 75 percent of the Phase II payments. Table 8 identifies the percentage of the Phase II payments allocated to each state's growers and quota owners. Each state, through its "Certification Entity," was required to develop a plan identifying the tobacco growers and quota owners within the state and a methodology for distributing payments. The Phase II states are categorized as either Class A (Georgia, Kentucky, North Carolina, South Carolina, Tennessee, and Virginia) or Class B (Alabama, Florida, Indiana, Maryland, Missouri, Ohio, Pennsylvania, and West Virginia) based on the amount of tobacco produced in the state. In Class A states, the Certification Entity comprises a Board of Directors with the following membership: the governor (Chairman), the state commissioner of agriculture (Vice-Chairman), the state attorney general (Secretary), one member each from the state Senate and House of Representatives, no fewer than three and no more than six citizens of the state who are tobacco growers or quota owners in the state, one citizen with a distinguished record of public service, and two members of the state congressional delegation. In Class B states, the Certification Entity comprises the governor, the state attorney general, and the state commissioner of agriculture. Each state's plan may be revised on an annual basis; plans are due to the trustee by June 1 of each year from 2000 through 2010. The three largest tobacco states—North Carolina, Kentucky, and Tennessee—each developed somewhat different methodologies for distributing payments in 1999 and 2000. In North Carolina, growers and quota owners each received 50 percent of the payments distributed within the state. Kentucky used the following methodology to distribute payments: one-third of the total distributions to quota owners, one-third to the owners of the land used to grow tobacco, and one-third to the farmers who produced the crop. In Tennessee, growers received 80 percent of the payments and quota owners received 20 percent. All three states distributed payments based on the prior year's tobacco crop.
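A worked illustration of the three methodologies, using a hypothetical $10 million in-state distribution (the dollar figure is illustrative, not an actual Phase II amount):

\begin{align*}
\text{North Carolina:}\;& 0.50 \times \$10\text{M} = \$5\text{M to growers and } \$5\text{M to quota owners} \\
\text{Kentucky:}\;& \tfrac{1}{3} \times \$10\text{M} \approx \$3.33\text{M each to quota owners, landowners, and producers} \\
\text{Tennessee:}\;& 0.80 \times \$10\text{M} = \$8\text{M to growers and } 0.20 \times \$10\text{M} = \$2\text{M to quota owners}
\end{align*}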
The states' statutory restrictions against supplantation, and the acts that established them, are summarized below by state.

Alabama: (1) The Alabama 21st Century Fund is funded with tobacco settlement revenues. Funds are transferred from this fund to other funds, including the general fund, from which 50 percent is to be appropriated to the Alabama Medicaid Agency, with a portion to the Medicaid Waiver Program at the Commission on Aging. "Sufficient safeguards shall be implemented to ensure that these new monies will increase and not supplant or decrease existing state support." (An Act to Provide for the Creation of a Special Fund Known as the Alabama 21st Century Fund, 1999 Ala. Act 99-353 § 19(a)(3), Ala. Code §§ 41-10-629, -638 (1999).) (2) The Alabama Senior Services Trust Fund is funded with tobacco settlement revenues. "Any funds appropriated pursuant to this section shall be additional funds distributed to the Alabama Department of Senior Services or its successor and shall not be used to supplant or decrease existing state or local support to the Alabama Department of Senior Services or its successor. Appropriations from the trust fund shall be used to both expand existing services and create new services for Alabama's elderly." (An Act to Create the Alabama Senior Services Trust Fund, 1999 Ala. Act 99-444 § 1(d), Ala. Code § 41-15C-1.) (3) The Children First Trust Fund is funded with tobacco settlement revenues and revenues received from other sources. Funds are transferred to children's services provided by several state agencies. "Twenty-one percent of the fund shall be allocated to the State Board of Education. Sufficient safeguards shall be implemented to ensure that the new monies will increase and not supplant or decrease existing state or local support." "Twenty percent of the funds shall be allocated to the Alabama Department of Human Resources. Sufficient safeguards shall be implemented to ensure that these new monies will increase and not supplant or decrease existing state and local support received from any source." "Seventeen percent of the revenues shall be allocated to the Department of Youth Services. Sufficient safeguards shall be implemented to ensure that the new monies will increase and not supplant or decrease existing state or local support, except the portion of funds used year to year according to needs enumerated in this section." (An Act Relating to the Children First Trust Fund, 1999 Ala. Act 99-390 §§ 2-3, Ala. Code §§ 41-15B-2 - 15B-2.2.)

Arizona: The Initiative added an additional definition of eligibility for the Arizona Health Care Cost Containment System (the state's health insurance program) and established the Arizona Tobacco Litigation Settlement Fund for receipt of all tobacco settlement revenues. "Monies in the fund shall be used to supplement and not supplant existing and future appropriations to the Arizona Health Care Cost Containment System." (Ariz. Rev. Stat. §§ 36-2901.01 − 2901.02 (2000) (added by Prop. 204, approved Nov. 7, 2000).)

Colorado: Policy on use of tobacco settlement funds provides: "The majority of the moneys received by the state from the Master Settlement Agreement shall be dedicated to improving the health of the citizens of Colorado, including tobacco use prevention, education, and cessation programs and related health programs. Such moneys are intended to supplement any moneys appropriated to health-related programs established prior to the effective date of this part 11." and "A portion of the settlement monies shall be used to strengthen and enhance the health of all residents of Colorado by supplementing and expanding statewide and local public health programs." (An Act Concerning Use of Moneys Received Pursuant to the Tobacco Litigation Settlement, 2000 Colo. Legis. Serv. ch. 154, § 1, Colo. Rev. Stat. § 24-75-1103 (2000).)

Connecticut: Created the Tobacco and Health Trust Fund to support and encourage tobacco control and substance abuse programs and to develop and implement programs to meet the unmet physical and mental health needs in the state. The trust fund receives transfers from the Tobacco Settlement Fund and may accept gifts and grants. "Recommended disbursements from the trust fund shall be in addition to any resources that would otherwise be appropriated by the state for such purposes and programs." (An Act Concerning Expenditures for the Programs and Services of the Department of Public Health, 2000 Conn. Acts 00-216, § 15(d)(1), Conn. Gen. Stat. § 4-28f (2000).)

Delaware: Created the Delaware Health Fund for receipt of all tobacco settlement revenues. "Expenditures from the Delaware Health Fund shall not be used to supplant any state expenditures appropriated in fiscal year 1999 for purposes consistent with those outlined in subsection (c) of this section." Subsection (c) dedicates funds for the following purposes: expanding access to health care and health insurance for the uninsured or underinsured; long-term investments in health care infrastructure; tobacco control and substance abuse; testing for detection of costly illnesses; a prescription drug program for low-income senior and disabled citizens; payment assistance for those with expenses of chronic illnesses; and other expenditures for health-related purposes. (An Act to Create the Delaware Health Fund, 72 Del. Laws, ch. 198, § 1 (1999), Del. Code Ann. tit. 16, § 137 (1999).)

Hawaii: Created three new funds, including the tobacco prevention and control trust fund. "The Hawaii tobacco prevention and control trust fund may receive appropriations, contributions, grants, endowments, or gifts in cash or otherwise from any source, including the State, corporations or other businesses, foundations, government, individuals, and other interested parties; provided that any appropriations made by the State shall not supplant or diminish the funding of existing tobacco prevention and control programs or any health related programs funded in whole or in part by the State." (An Act Relating to the Hawaii Tobacco Settlement Special Fund, 1999 Haw. Sess. Laws Act 304, § 2, Haw. Rev. Stat. § 328L-5 (1999).)

Indiana: Created the tobacco master settlement fund for receipt of all revenues and several additional funds to which funds are transferred. Several of these funds have non-supplant provisions: (1) the Indiana Tobacco Use Prevention and Cessation Trust Fund requires funding proposals to state "the extent to which the expenditure will supplement or duplicate existing expenditures of other state agencies, public or private entities, or the executive board." The other funds—(2) the Indiana Health Care Trust Fund, which funds health programs including CHIP, cancer detection, local health departments, and community centers; (3) the Biomedical Technology and Basic Research Trust Fund; (4) the Indiana Local Health Department Trust Fund; and (5) the Indiana Prescription Drug Fund—include the language: "Appropriations and distributions from the fund under this chapter are in addition to and not in place of other appropriations or distributions made for the same purpose." (An Act to Amend the Indiana Code Concerning State Offices and Administration, 2000 Ind. Leg. Serv. P.L. 21-2000, §§ 2-6, Ind. Code §§ 4-12-4-13, -5-7, -6-5, -7-8, -8-3 (2000).)

Kansas: The children's trust, renamed the Kansas Endowment for Youth (KEY) fund, was established to receive all tobacco settlement funds. All moneys credited to the KEY fund must be invested to provide an ongoing source of investment earnings available for periodic transfer to the Children's Initiatives Fund. "Moneys allocated or appropriated from the Children's Initiatives Fund shall not be used to replace or substitute for moneys appropriated from the state general fund in the immediately preceding fiscal year." (An Act Concerning the Disposition of Certain Moneys for the Benefit of Children, 1999 Kan. Sess. Laws ch. 172, §§ 1-2, Kan. Stat. Ann. §§ 38-2101 - 2102 (1999).)

Louisiana: Established the Millennium Trust Fund and the Louisiana Fund and created the Education Excellence Fund as a special fund within the Millennium Trust Fund. "No amount appropriated as required in this paragraph shall displace, replace or supplant appropriations from the general fund for elementary and secondary education, including implementing the Minimum Foundation Program. This subparagraph shall mean that no appropriation for any fiscal year from the Education Excellence Fund shall be made for any purpose for which a general fund appropriation was made in the previous year unless the total appropriations for the fiscal year from the state general fund for such purpose exceed general fund appropriations of the previous year." (La. Const. art. VII, §§ 10.8-10.10 (added by 1999 La. Sess. Law Serv. Act 1392, § 1, approved Oct. 23, 1999).)

Maine: The Fund for a Healthy Maine was created for receipt of all tobacco settlement revenues. "When allocations are made to direct services, services to lower income consumers must have priority over services to higher income consumers. Allocations from the fund must be used to supplement, not supplant, appropriations from the General Fund." (An Act to Make Supplemental Appropriations and Allocations for the Expenditures of State Government, 1999 Me. Legis. Serv. ch. 401, § V-1, Me. Rev. Stat. Ann. tit. 22, § 1511 (1999).)

Maryland: Created the Cigarette Restitution Fund for all revenues received by the state resulting from the tobacco settlement. Expenditures from the fund shall be for tobacco control, cancer prevention, the Maryland agricultural plan for alternative crop uses, the Maryland Health Care Foundation, primary health care in rural areas, substance abuse, and any other public purpose. "Disbursements from the fund to programs funded by the state or with federal funds administered by the state shall be used solely to supplement, and not to supplant, funds otherwise available for the programs under federal or state law as provided in this section." (An Act Concerning the Cigarette Restitution Fund, 2000 Md. Laws ch. 18, § 1, Md. State Fin. & Proc. § 7-317 (2000).)

Massachusetts: Established the Tobacco Settlement Fund to receive 30% of tobacco settlement payments received by the state and 30% of the earnings on the Health Care Security Trust, as well as other sources of funding. "Amounts credited to said fund shall be expended, subject to appropriation, to supplement existing levels of funding for the purpose of funding health related services and programs including, but not limited to, services and programs intended to control or reduce the use of tobacco in the commonwealth. Amounts credited to said fund shall not be used to supplant or replace other health related or non health related expenditures or obligations of the commonwealth." (An Act Making Appropriations for Fiscal Year 2000, 1999 Mass. Legis. Serv. ch. 127, § 42, Mass. Gen. Laws Ann. ch. 29, § 2xx (1999).)

Montana: A constitutional amendment dedicated trust fund interest earnings for health care benefits, services, or coverage and tobacco disease prevention, and states: "The trust's interest and principal cannot be used to replace current funding for these programs." (An Act Submitting to the Qualified Electors of Montana an Amendment to Article XII of the Montana Constitution, 2000 Mont. Laws Ballot Meas. 35 (approved Nov. 7, 2000).)

Nevada: Created the Fund for a Healthy Nevada for receipt of 50% of all tobacco settlement funds received by the state. Funds are to be allocated for pharmaceuticals for senior citizens, programs for independent living for senior citizens, tobacco control, and health services for children and the disabled. "Money expended from the fund for a healthy Nevada must not be used to supplant existing methods of funding that are available to public agencies." (An Act Relating to State Financial Administration and Creating the Fund for a Healthy Nevada, 1999 Nev. Laws ch. 538, §§ 3-5, Nev. Rev. Stat. §§ 439.620 - 630 (1999).)

North Carolina: Established the Health Trust Account to receive 25% of the tobacco settlement revenues. The Health and Wellness Trust Fund receives funds from the Health Trust Account to address the health needs of vulnerable and underserved populations, to fund programs including research, education, and treatment of health problems, and to develop a comprehensive tobacco control plan. "It is the intent of the General Assembly that the funds provided pursuant to this Article to address the health needs of North Carolinians be used to supplement, not supplant, existing state funding of health and wellness programs." (An Act to Provide for the Creation of the Health and Wellness Trust Fund, 2000 N.C. Sess. Laws 2000-147, §§ 1-2, N.C. Gen. Stat. §§ 143-16.4, 147-86.30 (2000).)

West Virginia: Created two funds for receipt of tobacco settlement revenues. Fifty percent of all revenues shall be deposited into the West Virginia Tobacco Settlement Fund and appropriated for the following purposes: the public employees insurance agency, public health programs, and state health facilities. The legislation provides "funding for expansion of the federal-state Medicaid program as authorized by the legislature or mandated by the federal government." A state budget official said this language is intended to not supplant existing funds. (An Act Relating to Appropriations, Expenditure of Interest, and Authorization of Expenditures from Tobacco Settlement Funds, 1999 W. Va. Acts ch. 281, W. Va. Code §§ 4-11A-1 − 11A-3 (1999).)
| The attorneys general of 46 states signed a settlement agreement in 1998 with the nation's largest tobacco companies. The agreement requires the tobacco companies to make annual payments to the states in perpetuity as reimbursement for past tobacco-related costs. Florida, Minnesota, Mississippi, and Texas reached earlier individual settlements with the tobacco companies.
States are free to use the money for any purpose. This report examines (1) the amount of payments received by the states and the states' decision-making processes on the allocation of payments in fiscal years 2000 and 2001 and (2) the types of programs that states funded with their payments in those two fiscal years. As of April 2001, GAO found that 45 of the 46 states received nearly $13.5 billion of the $206 billion estimated to be paid by the tobacco companies during the first 25 years of the agreement. Many states established dedicated funds to receive at least part of the payments. Other states passed legislation to ensure that payments are used to supplement existing state funds, enacted laws governing the future use of the payments, established voter approved initiatives to decide how to allocate the payments, and created special commissions to develop recommendations and long-term plans for the payments. The types of programs that states tended to fund were tobacco control and health care. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The adverse impact that dropping out of school has on both those who drop out and society itself has long been recognized. Multiple studies have shown that dropouts earn less money and are more frequently unemployed than graduates. Dropouts are about three times as likely as high school completers who do not go on to college to be welfare recipients, and about 30 percent of federal and 40 percent of state prison inmates are high school dropouts, imposing a considerable cost on all levels of government. Given the multiple adverse consequences associated with dropping out, lowering the dropout rate has long been a goal of educators and legislators. The 1968 amendments to the Elementary and Secondary Education Act of 1965 established local demonstration projects aimed at reducing the dropout rate. From 1969 through 1976, some 30 projects received $46 million in grants from the Department of Education (then the Office of Education) to develop and demonstrate educational practices that showed promise in reducing the numbers of youth who failed to complete their secondary education. The act was amended again in 1974, when funding for dropout prevention efforts was consolidated with funding for other programs, and states were given the discretion to decide what financial support dropout prevention projects would receive through state-administered consolidated grants. In 1988, the Congress created the School Dropout Demonstration Assistance Program (SDDAP). The program consisted of competitive grants from Education to 89 school districts and community organizations. In fiscal years 1988-1995, SDDAP grantees received nearly $227 million in federal funds. Authorizations and appropriations for the program ended in fiscal year 1995. The School Dropout Assistance Act was passed in 1994 and authorized funding in fiscal years 1995 to 1999, but was never funded. Dropout prevention program funding was subsequently provided in fiscal year 2001, when Education’s Dropout Prevention Demonstration Program received appropriations of $5 million. Although federal funding for dropout prevention programs has been inconsistent, the National Dropout Prevention Center (NDPC) has existed for 15 years and is privately funded. Many of the program officials with whom we spoke said that NDPC was a resource on which they depended for information. This center is housed at Clemson University in South Carolina and offers various resources to those wishing to implement dropout prevention programs. For example, NDPC manages a database that provides program profiles, including contact information, for model programs located throughout the country. In addition, NDPC provides an overview of the 15 strategies it has identified as being the most effective in preventing dropouts. NDPC also contracts with school districts and communities to assess and review the dropout prevention programs in the school district and make recommendations for improvement. Much of this information and additional information on annual national conferences and professional development services are available on the center’s website: www.dropoutprevention.org. The National Center for Education Statistics (NCES), part of Education’s Office of Educational Research and Improvement, is the primary federal entity for collecting, analyzing, and reporting data on the condition of education in the United States. Since 1989, NCES has annually published data on high school dropout statistics. NCES’ most recent publication provides national-level data for three measures—event and status dropout rates and high school completion rates.
Periodically, NCES also reports on cohort dropout rates. NCES also reports dropout rates for groups with various characteristics (e.g., sex, ethnicity, age, and recency of immigration). Nationally, dropout rates changed little in the 1990-2000 period. Rates varied considerably, however, depending on the geographic region and ethnic group. The highest dropout rates occurred in the South and West, while the Midwest and Northeast tended to have lower rates. Dropout rates were much higher for Hispanics than for other ethnic groups, affected primarily by the very high dropout rates for Hispanics born outside the United States. Dropout figures also vary depending on which dropout or school completion measure is used, primarily because calculations use different age groups, data, or definitions of dropouts. No one measure is appropriate for all situations. Those using dropout or completion data must familiarize themselves with the various measures and select the one that best meets their needs. For the nation as a whole, dropout rates changed little in the 1990-2000 period. Data compiled by NCES indicate that the percentage of 16- through 24-year-olds who were dropouts ranged between 10.9 and 12.5 percent. While the year-to-year results went up in some years and down in others, the net result was a decline of 1.2 percentage points during this time period. Dropout rates show considerable variation when broken down by region or by ethnic group. The highest dropout rates occurred in the South and West, while the lowest rates occurred in the Northeast and Midwest. As figure 2 shows, while the national portion of 16- through 24-year-olds who were dropouts was 10.9 percent in October 2000, the regional average ranged from 12.9 percent in the South to 8.5 percent in the Northeast. Analyzed by ethnic group, dropout rates were higher for Hispanics than for other ethnic groups, as shown in figure 3. For example, in 2000, the Hispanic dropout rate was 27.8 percent compared with 6.9 percent and 13.1 percent for white non-Hispanics and black non-Hispanics, respectively. Asian/Pacific Islanders had the lowest dropout rate, 3.8 percent, in 2000. However, due to the relatively small sample sizes, reliable estimates for Asian/Pacific Islanders could not be calculated before 1998, so they are not shown separately in the trend lines in figure 3. In addition, sample sizes were too small for NCES to calculate dropout rates for American Indians/Alaskan Natives in any year. Further analysis offers additional insight into the high dropout rate for Hispanics. Compared to non-Hispanics in the United States, a much higher percentage of Hispanic children were born outside the United States—43.6 percent versus 6.5 percent. The dropout rate for Hispanics born outside the United States was much higher than that for Hispanics born in the United States in 2000 (44.2 percent vs. 15.2 percent). As a result, although Hispanics born outside the country accounted for only 6.6 percent of all 16- through 24-year-olds, they accounted for more than a quarter of all dropouts in 2000 and thus significantly raised the overall Hispanic dropout rate and the national dropout rate. In addition, data from 1995 show that more than half (62.5 percent) of the foreign-born Hispanic youths who were dropouts had never enrolled in a U.S. school, and 79.8 percent of these young adults who had never enrolled in U.S.
schools were reported as either speaking English “not well” or “not at all.” The high dropout rates for Hispanics also affect the state differences in high school completion rates. As table 2 shows, the states with the highest rates of high school completion among 18- through 24-year-olds (Alaska, Maine, and North Dakota) have very small percentages of Hispanics, while the states with the lowest rates of high school completion among 18- through 24-year-olds (Arizona, Nevada, and Texas) have very large percentages of Hispanics. Our analysis of the state-by-state information for all 50 states and the District of Columbia shows that two factors—Hispanics as a percent of 18- to 24-year-olds in 1999 and the percentage increase in Hispanics under 18 years old in the 1990s—account for about 40 percent of the variation in the high school completion rates between states. Analyzing dropout rates is made more complicated by the fact that multiple ways exist to measure the extent of dropping out—and no one measure is ideal for all situations. For example, one way to measure dropouts is to determine the percentage of students who drop out in a single year. This measure is referred to as an event dropout rate. NCES’ event dropout rate measures the number of 15- through 24-year-olds who dropped out of grades 10-12 in the past year without completing a high school program. While such a measure can be used to spot dropout trends on a year-to-year basis, it does not provide an overall picture of what portion of young adults are dropouts. If the concern is whether the total population of dropouts is growing, shrinking, or staying about the same, a different measure is needed. Several ways exist to measure the portion of young adults who are dropouts rather than the percentage who drop out in any given year. In one such approach, referred to as the status dropout rate, NCES measures the percentage of all persons 16 through 24 years old who are not enrolled in school and have not earned a high school credential, including those who never attended school in the United States. A similar but somewhat different measure is the high school completion rate. NCES’ completion rate measures the percentage of 18- through 24-year-olds who are no longer in school and have a high school diploma or an equivalent credential, including a General Education Development (GED) credential. The status dropout rate and the high school completion rate differ because they are based on different populations. Only the status dropout rate calculation includes 16- and 17-year-olds and those 18- through 24-year-olds who are still enrolled in a high school program. Because of these differences, the status dropout rate and the high school completion rate are not the simple inverse of each other; the sketch below works through a small example. Another approach, called the cohort dropout rate, uses repeated measurements of a single group of students to periodically report on their dropout rate over time. Further complicating the picture, most of the types of dropout measures have at least two rates, which differ because they are based on different age groups, data, or definitions of dropouts. For example, some rates use data for a single year while others use a 3-year average, and some count GED recipients as graduates while others do not. (See app. II for descriptions of each of the published dropout and completion measures we identified.) Different measures can be used separately or together to examine various dropout trends.
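To make the population difference concrete, the following is a minimal sketch in Python of the two calculations on invented microdata (the records and field layout are illustrative assumptions, not NCES data):

# A minimal sketch, on invented microdata, of why the status dropout rate
# and the high school completion rate are not simple inverses: they are
# computed over different age ranges and enrollment statuses.

# Each record: (age, enrolled_in_high_school, has_diploma_or_ged)
people = [
    (16, True,  False),  # enrolled 16-year-old
    (16, False, False),  # 16-year-old dropout; too young for the completion measure
    (17, False, False),  # 17-year-old dropout
    (19, True,  False),  # 18- to 24-year-old still enrolled; excluded from completion rate
    (20, False, True),   # completer (diploma)
    (23, False, False),  # status dropout
    (24, False, True),   # completer (GED)
]

# Status dropout rate: share of ALL 16- through 24-year-olds who are not
# enrolled and hold no high school credential.
status_pool = [p for p in people if 16 <= p[0] <= 24]
status_rate = sum(1 for age, enrolled, cred in status_pool
                  if not enrolled and not cred) / len(status_pool)

# Completion rate: share of 18- through 24-year-olds NO LONGER enrolled
# who hold a diploma or equivalent credential.
completion_pool = [p for p in people if 18 <= p[0] <= 24 and not p[1]]
completion_rate = sum(1 for age, enrolled, cred in completion_pool
                      if cred) / len(completion_pool)

print(f"status dropout rate: {status_rate:.1%}")      # 3/7 = 42.9%
print(f"completion rate:     {completion_rate:.1%}")  # 2/3 = 66.7%
# The two rates do not sum to 100 percent because the denominators differ.

Because the 16- and 17-year-olds and the still-enrolled 18- through 24-year-olds appear in only one of the two denominators, nothing forces the two rates to sum to 100 percent.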
For example, figure 4 shows the event dropout rate, the status dropout rate, and the high school noncompletion rate. The event dropout rate, which measures only those youth who drop out in a single year, is lower than the other two measures, which deal with the percentage of dropouts in an age group regardless of when they dropped out. The event dropout rate rose slightly—0.8 percentage point—between 1990 and 2000. However, this change was not statistically significant. The noncompletion rate and the status dropout rate showed similar patterns during the 10-year period, with the noncompletion rate declining 0.9 percentage point and the status rate declining 1.2 percentage points during the period. However, as mentioned earlier, these two rates differ, in part because they are based on different age groups. Another high school completion measure is the “regular” high school completion rate. This rate is the number of public high school seniors who earn a regular diploma in a given year stated as a percent of the number of entering freshmen 4 years earlier. For example, in the 1998-1999 school year, public high schools awarded 2,488,605 regular high school diplomas. This number was 67.2 percent of the 3,704,455 students who began the ninth grade 4 years earlier in the fall of 1995. Like all the other dropout measures we identified, the regular graduation rate has its uses, but no one measure is appropriate for all situations. As a result, users of dropout and completion data must familiarize themselves with the many measures available and select the measure or measures that best meet their needs. Research has shown that multiple factors are associated with the likelihood of dropping out. Education and private research organizations have identified two main types of factors associated with the likelihood of dropping out—one type involving family characteristics and the other involving students’ experiences in school. For example, students from low-income, single-parent, and less-educated families drop out at a much higher rate than other students. Similarly, low grades, absenteeism, disciplinary problems, and retention for one or more grades are also found at much higher-than-average rates among students who drop out. However, identifying students likely to drop out is not just a matter of identifying students with high-risk characteristics, because research shows that dropping out is often the culmination of a long-term process of disengagement that begins in the earliest grades. Study of this long-term pattern may offer ways to better and earlier identify potential dropouts. Research indicates that a number of family background factors, such as socioeconomic status, race-ethnicity, single-parent families, siblings’ educational attainment, and family mobility are correlated with the likelihood of dropping out. Of these factors, socioeconomic status, most commonly measured by parental income and education, bears the strongest relation to dropping out, according to the results of a number of studies. For example, an NCES longitudinal study of eighth graders found that while data show that black, Hispanic, and Native American students were more likely to drop out than white students, this relationship is not statistically significant after controlling for a student’s socioeconomic status.
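The phrase “after controlling for a student’s socioeconomic status” describes a regression with covariates. The sketch below, on synthetic data, shows the mechanics; the use of Python’s statsmodels package, the variable names, and all coefficients are illustrative assumptions, not details of the NCES study:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000

# Synthetic population: group membership is correlated with lower SES,
# but only SES itself drives the probability of dropping out.
group = rng.binomial(1, 0.3, n).astype(float)   # 1 = hypothetical minority group
ses = rng.normal(-0.8 * group, 1.0, n)          # group correlates with lower SES
p_drop = 1.0 / (1.0 + np.exp(1.5 + 1.0 * ses))  # dropout risk rises as SES falls
dropout = rng.binomial(1, p_drop)

# Group alone looks strongly "significant"...
naive = sm.Logit(dropout, sm.add_constant(group)).fit(disp=0)
# ...but once SES enters the model, the group coefficient shrinks toward zero.
controlled = sm.Logit(dropout, sm.add_constant(np.column_stack([group, ses]))).fit(disp=0)

print(naive.params, naive.pvalues)            # group effect appears large
print(controlled.params, controlled.pvalues)  # group effect near zero; SES carries it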
Studies have also found that dropping out is more likely to occur among students from single-parent families and students with an older sibling who has already dropped out than among counterparts without these characteristics. Other aspects of a student’s home life, such as level of parental involvement and support, parents’ educational expectations, parents’ attitudes about school, and stability of the family environment, can also influence a youth’s decision to stay in school. For example, results from the NCES study found that students whose parents were not actively involved in the student’s school, whose parents infrequently talked to them about school-related matters, or whose parents held low expectations for their child’s future educational attainment were more likely to drop out. Students’ past school performance is also related to the likelihood of dropping out. For example, research shows that students with a history of poor academic achievement, evidenced by low grades and poor test scores, are more likely to drop out than students who have a history of academic success. In addition, students who are overage for their grade or have repeated a grade are more likely to drop out. For example, one study found that students who had repeated a grade as early as kindergarten through fourth grade were almost five times as likely to drop out of school as those who had not. The odds of dropping out for students who had repeated a later grade—fifth through eighth—were almost 11 times the odds for students who had never repeated these grades (the sketch after this paragraph works through this odds-ratio arithmetic). Other school experiences related to dropping out include students having a history of behavior problems and having higher rates of chronic truancy and tardiness. Research also indicates that dropout rates are associated with various characteristics of the schools themselves, such as the size of the school, level of resources, and degree of support for students with academic or behavior problems. For example, a summary of the research on school size and its effect on various aspects of schooling found that, in terms of dropout rates or graduation rates, small schools tended to have lower dropout rates than large schools. Of the 10 research documents that were summarized, 9 revealed differences favoring or greatly favoring small schools, while the other document reported mixed results. Various research studies have shown that dropping out is a long-term process of disengagement that occurs over time and begins in the earliest grades. Early school failure may act as the starting point in a cycle that causes children to question their competence, weakens their attachment to school, and eventually results in their dropping out. For example, a study examining the first- to ninth-grade records for a group of Baltimore school children found that low test scores and poor report cards from as early as first grade forecast dropout risk with considerable accuracy. This process of disengagement can be identified in measures of students’ attitudes as well as in measures of their academic performance. Studies have shown that early behavior problems—shown in absenteeism, skipping class, disruptive behavior, lack of participation in class, and delinquency—can lead to gradual disengagement and eventual dropping out.
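Because “times as likely” (a risk ratio) and “times the odds” (an odds ratio) are easy to conflate, the following minimal sketch works the arithmetic on an invented 2x2 table (the counts are illustrative, not the cited study’s data):

# Hypothetical counts of dropouts among students who did and did not
# repeat a grade in grades 5-8; chosen only to make the arithmetic clear.
repeated     = {"dropped_out": 55, "stayed": 45}
not_repeated = {"dropped_out": 10, "stayed": 90}

def odds(g):
    # Odds = dropouts per non-dropout within the group.
    return g["dropped_out"] / g["stayed"]

def risk(g):
    # Risk = probability of dropping out within the group.
    return g["dropped_out"] / (g["dropped_out"] + g["stayed"])

odds_ratio = odds(repeated) / odds(not_repeated)  # (55/45) / (10/90) = 11.0
risk_ratio = risk(repeated) / risk(not_repeated)  # 0.55 / 0.10 = 5.5

print(f"odds ratio: {odds_ratio:.1f}")  # "11 times the odds"
print(f"risk ratio: {risk_ratio:.1f}")  # only "5.5 times as likely"
# The two ratios diverge whenever the outcome is common, so an 11-fold
# difference in odds is not an 11-fold difference in probability.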
One such report, summarizing a longitudinal study of 611 inner-city school children, found significant relationships between behavior problems in kindergarten through grade 3 and misconduct in the classroom at ages 14 and 15, future school disciplinary problems, police contacts by age 17, and subsequently higher dropout rates. Study of such long-term patterns that often lead to dropping out may offer ways to better and earlier identify potential dropouts. Local entities have implemented a variety of initiatives to address the factors associated with dropping out, ranging from small-scale supplementary services to comprehensive school reorganizations. These initiatives are limited in the degree to which they address family-related factors associated with dropping out, such as income; they focus mainly on student-related factors, such as low grades and absenteeism. While dropout prevention programs can vary widely, they tend to cluster around three main approaches: (1) supplemental services for at-risk students; (2) different forms of alternative education for students who do not do well in regular classrooms; and (3) school-wide restructuring efforts for all students. Several of the programs we reviewed have undergone rigorous evaluations, with others reporting positive outcome data on student progress and student behavior. States’ support of dropout prevention activities varies considerably, with some states providing funds specifically for dropout prevention programs while others fund programs to serve at-risk youth, which may help prevent them from dropping out. Local entities have implemented a variety of initiatives to address the factors associated with dropping out of school. Our visits to 25 schools in six states—California, Florida, Nevada, Pennsylvania, Texas, and Washington—showed that initiatives in these schools cluster around three main approaches: (1) supplemental services for at-risk students; (2) different forms of alternative education, which are efforts to create different learning environments for students who do not do well in regular classrooms; and (3) school-wide restructuring efforts for all students. Individual programs may focus exclusively on one type of approach, or use a combination of approaches to address many of the student- and school-related factors associated with dropping out of school. Several of the programs we reviewed have undergone rigorous evaluations, and others are reporting positive outcome data on student academic progress and student behavior. Providing supplemental services to a targeted group of students who are at risk of dropping out is one approach used by many of the programs we visited. Some of the more common supplemental services include mentoring, tutoring, counseling, and social support services, which operate either during the school day or after school. These services aim to improve students’ academic performance, self-image, and sense of belonging. For example, Deepwater Junior High School in Pasadena, Texas, offers the Coca-Cola Valued Youth Program, an internationally recognized cross-age tutoring program designed to increase the self-esteem and school success of at-risk middle and high school students by placing them in positions of responsibility as tutors of younger elementary school students. At Deepwater Junior High, officials told us that about 25 eighth graders tutor kindergartners through second graders at the local elementary school for 45 minutes a day, 4 days a week.
Tutors are paid $5 a day for their work, reinforcing the worth of the students’ time and efforts. According to officials, the program has improved the tutors’ attendance in school, behavior, self-esteem, willingness to help, and sense of belonging. Another benefit of the program is its impact on students’ families, such as improved relationships between the tutor and his or her family and between families and the school. The Coca-Cola Valued Youth Program is also the subject of a 1992 rigorous evaluation that compared 63 Valued Youth Program tutors with 70 students in a comparison group. This evaluation showed that 2 years after the program began, 12 percent of the comparison students had dropped out compared with only 1 percent of the Valued Youth Program students. Average reading grades, as provided by reading teachers of tutors and comparison group students, were significantly higher for the program group, as were scores on a self-esteem measure and on a measure of attitude towards school. The Valued Youth Program has been widely replicated throughout the Southwest and elsewhere. At another school we visited—Rolling Hills Elementary in Orlando, Florida—officials told us that 85 percent of the students are on free or reduced-price lunches (which are served to lower-income children), and that the school provides multiple supplemental academic programs and social services to address many of the academic, personal, and social problems that are often associated with students likely to drop out of school. These programs and services include pre-school and kindergarten classes to help at-risk children become successful learners, two “dropout prevention” classes for students who are behind their grade level, after school tutoring classes, and a variety of social and counseling services. Progress reports are sent to parents to keep them informed of their child’s progress. The school also works with three full-time therapists who help students with their social and emotional problems. Teachers and staff monitor students’ attendance and identify early on those with attendance problems. This monitoring effort has resulted in improved student attendance. School officials emphasized the importance of identifying at an early age children who are likely to become academic underachievers, truants, or likely to develop behavioral problems, and the need to develop programs to address the academic and behavior needs of these children. Although longitudinal studies looking at the effects of these services over time would be needed to determine the effectiveness of Rolling Hills’ early intervention program at preventing students from dropping out, research suggests that early identification and intervention can help counteract the process of disengagement and withdrawal from school. Another form of supplemental services provided by schools we visited is school-community partnerships. While a variety of approaches are used by school officials to create school-community partnerships, the partnerships we reviewed focused on providing an array of supportive services to students and their families, including mental health counseling, health care, adult education, and recreation programs. For example, the Tukwila School District in Tukwila, Washington, aims to improve student achievement in school by focusing on school, family, and community collaborations. According to officials, the District offers mentoring and tutoring programs, internships, and an array of health and social services. 
By building partnerships with state and federal agencies, nonprofits, and other organizations, the District hopes to maximize resources in ways that would strengthen young people and their families. A longitudinal study of the District’s program during the 1994-1996 school years found that 58 percent of the elementary students who received human services from district service providers and/or community agencies had higher grades than a control group of students who did not receive services, and 74 percent of secondary school students receiving services had improved their course completion rates after two semesters of service. The second approach commonly used by localities we visited is to provide alternative educational environments for students who do not do well in the regular classroom. These alternative learning environments attempt to create a more supportive and personalized learning environment for students to help them overcome some of the risk factors associated with dropping out, such as school disengagement and low attachment to school. Alternative learning environments can operate either within existing schools or as separate alternative schools at off-site locations. Alternative environments operating within regular schools can include small groups of students meeting each day to work on academic skills in a more personal setting, or smaller schools housed within the regular school, such as ninth-grade or career academies, which focus on a specific group of students or offer a curriculum organized around an industry or occupational theme. Alternative schools located off site are generally smaller schools than those the students otherwise would have attended. These smaller schools usually have smaller classes, have more teachers per student, and offer a more personalized learning environment for students. For example, the Seahawks Academy in Seattle, Washington, is a small alternative school for seventh, eighth, and ninth graders who have been unsuccessful in the traditional middle and high schools. According to officials, the academy is a partnership between Seattle Public Schools, Communities in Schools (CIS), the Seattle Seahawks football team, and corporate partners and strives to provide a safe, nurturing, and supportive learning environment for about 110 students. The school offers smaller class sizes, tutors, mentors, no-cost health care, and social services. Students wear Seahawks Academy uniforms and must commit to strict behavior contracts signed by the student and parent. Officials told us that the Academy’s policies foster positive expectations and “Seahawks Academy culture,” teaching students to respect each other, teachers, and themselves. The Academy emphasizes attendance, academic achievement, and appropriate behavior. Evidence of program effectiveness includes improved test scores, fewer discipline problems, and no suspensions or expulsions for the last 2 school years compared with suspensions of about 7 percent and expulsions of about 0.5 percent at other schools in the district. Another example of an alternative learning environment is the Partnership at Las Vegas (PAL) Program at the Las Vegas High School in Las Vegas, Nevada. The PAL program is a school operating within the existing school with a school-to-careers curriculum that is designed to provide students with both academic and career-related skills to prepare them for entry into an occupation or enrollment in higher education.
Officials said that by linking academic coursework to career-related courses and workplace experience, the PAL program aims to motivate students to stay in school and promote an awareness of career and educational opportunities after high school. According to officials, the program is made up of a team of 6 teachers and about 150 at-risk 11th and 12th grade students. Program participants attend classes 4 days a week and report to a work site for a nonpaid internship 1 day a week. The program features academic courses that stress the connection between school and work and include language arts, mathematics, social studies, science, and computer applications. Essential program aspects include business etiquette lessons, career speakers, field trips, business internships, developing peer and team affiliations, and constant monitoring and evaluation of student progress. According to officials, evidence of program effectiveness includes better attendance and fewer discipline problems among PAL participants than among non-PAL participants. In addition, the PAL program reports a dropout rate of about 2 percent for PAL participants, compared with a rate of 13.5 percent for non-PAL participants. While only one of the alternative programs we visited has been rigorously evaluated, the others are reporting positive outcomes in areas such as test scores and students’ behavior. For example, the Excel program at the Middle School Professional Academy in Orlando, Florida, an alternative school designed to meet the special needs of disruptive, expelled, and uninterested youth, reported substantial gains in mean grade point averages for students in the program. Officials also reported fewer discipline problems and a retention rate of 95 percent for the 2000-2001 school year. The Ranger Corps, at Howard Middle School in Orlando, Florida, a Junior Reserve Officers Training Corps (JROTC) program for about 50 seventh graders, also reported gains of about 15 percentage points in reading test scores as well as increased attendance and fewer disciplinary problems. The third type of approach used by local entities is school-wide restructuring efforts that focus on changing a school or all schools in the school district in an effort to reduce the dropout rate. School-wide restructuring efforts are generally implemented in schools that have many students who are dropout prone. The general intent of this approach is to move beyond traditional modes of school organization to make schools more interesting and responsive places where students learn more and are able to meet higher standards. Some researchers have suggested that these restructuring efforts have the potential to reduce dropping out among a much larger number of students by simultaneously addressing many of the factors associated with dropping out. An example of a school-wide restructuring effort is Project GRAD (Graduation Really Achieves Dreams) in Houston, Texas—a 12-year-old scholarship program that reports a track record of improving student academic performance and increasing graduation rates. The program was initially established in 1989 as a scholarship program, but in 1993, the program began implementing math, reading, classroom management, and social support curriculum models in a feeder system of schools (all the elementary and middle schools that feed students into a high school).
According to officials, the program expanded its services to the elementary grades after program supporters recognized the need to begin intervention in the earliest grades for it to be more successful. Project GRAD emphasizes a solid foundation of skills in reading and math, building self-discipline, providing resources for at-risk children, and offering college scholarship support. Project GRAD has reported evidence of its effectiveness, including higher test scores, higher graduation rates, greater numbers of scholarship recipients, and fewer disciplinary problems in its schools. For example, a rigorous 1999-2000 evaluation of the program showed that Project GRAD students outperformed students in corresponding comparison groups in math and reading achievement tests and made substantial gains in college attendance. The success of Project GRAD has led to its expansion into three additional feeder systems in Houston, with a 5-year plan to expand into two more feeder systems. The model is being replicated in feeder systems in Newark, Los Angeles, Nashville, Columbus, and Atlanta. Another example of a school-wide restructuring effort is the Talent Development program in Philadelphia, Pennsylvania—a comprehensive high school reform model that aims to improve large high schools that face serious problems with student attendance, discipline, achievement scores, and dropout rates. This model has been implemented in four Philadelphia high schools and approved for implementation in two others. We visited three high schools in Philadelphia that use this approach. According to officials, these schools provide or are in the process of implementing a separate academy for all ninth graders, career academies for 10th through 12th graders, and an alternative after-hours twilight school for students who have serious attendance or discipline problems. Block scheduling, whereby students take only four courses a semester, each 80 to 90 minutes long, and stay together all day as a class, is used in each school. The longer class periods enable teachers to get to know their students better and to provide times for individual assistance. A report on the outcomes of this model at two schools showed that the percentage of students promoted to the tenth grade has increased substantially, and the number of suspensions has dropped dramatically. The report also indicated that students had significant gains on standardized achievement tests in math and that student attendance improved. The career academy model implemented at Talent Development schools and other high schools we visited has been the subject of in-depth evaluations. Career academies represent the high school reform movement that is focused on smaller learning communities. Academy components include rigorous academics with a career focus, a team of teachers, and active business involvement. Extensive evaluations of the academies indicate a positive impact on school performance. For example, in a 10-year, ongoing national evaluation of nine career academies, evaluators compared the performance of 959 students who participated in career academies with that of 805 similar students who applied to but did not attend an academy. The evaluation also has a long follow-up period, which extends 4 years beyond the students’ scheduled graduation from high school.
One report from the evaluation found that among students at high risk of school failure, career academies significantly cut dropout rates and increased attendance rates, number of credits earned toward graduation, and preparation for postsecondary education. A follow-up report issued in December 2001 stated that although the career academies enhanced the high school experiences of their students, these positive effects did not translate into changes in high school graduation rates or initial transitions to postsecondary education and jobs. For example, some of the students at high risk of school failure obtained a GED instead of graduating. The report also notes that the full story of career academy effectiveness is still unfolding and that longer-term results should be examined prior to making definitive judgments about the effectiveness of the approach. Many states have dropout prevention programs or programs that serve at-risk youth that may help prevent them from dropping out of school. Specifically, our calls to 50 states and the District of Columbia found that 14 states have statewide dropout prevention programs, and 29 other states and the District of Columbia have programs to serve at-risk youth that may help prevent them from dropping out of school. Seven states have no statewide programs identified to prevent dropouts or serve at-risk youth. Services provided by dropout prevention programs and programs that serve at-risk youth may be similar. However, the number of school districts served and the scope of services offered by either type of program varies greatly by state. Some states provide dropout prevention services in each of the states’ districts, while others have dropout prevention programs that serve only a limited number of school districts. For example, Tennessee awards $6,000 dropout prevention grants to only 10 of its 138 school districts annually. The following examples illustrate how states implement their dropout prevention and at-risk programs: The official dropout prevention programs implemented in California, Texas, and Washington vary in their form and funding. One of California’s four dropout prevention programs, the School-Based Pupil Motivation and Maintenance Program, provides $50,000 per school to fund a school dropout prevention specialist (outreach consultant) at 300 schools in about 50 school districts each year. The outreach consultants work to provide early identification of students at risk of failing or dropping out and then coordinate the resources and services of the whole school and surrounding community to identify and meet the needs of these children so they can succeed and stay in school. Texas’ dropout prevention program, the State Compensatory Education (SCE) Program, provides state funds to schools that have a large percentage of at-risk students (i.e., students with many of the characteristics associated with dropping out). The SCE program funds services such as supplemental instruction or alternative education with the goal of enabling students not performing at grade level to perform at grade level at the conclusion of the next regular school term. In addition, each district is responsible for developing a strategic plan for dropout prevention. Washington changed its dropout prevention program’s focus in 1992 from targeted dropout prevention services to a comprehensive, integrated approach to address many of the factors associated with the long-term process of disengagement from school that often begins in the earliest grades.
Washington uses about 15 state programs to help prevent students from dropping out, including programs emphasizing early intervention, schools-within-schools, and community partnerships. How state funds are used to meet state education objectives is largely left up to the school districts. Georgia, the District of Columbia, and Utah have no statewide dropout prevention programs, but instead offer comprehensive programs to serve at-risk students. Georgia’s comprehensive approach to serving at-risk students provides different services to students of different ages. For example, Georgia has an Early Intervention program for students in kindergarten through third grade, a reading program for students in kindergarten through second grade, and Alternative Education for students who are academically behind and disruptive. State funds are allocated to alternative schools based on a formula grant process. The District of Columbia also takes a comprehensive approach to preventing students from dropping out through a variety of services targeted to at-risk students. Programs include Head Start; after-school programs; school counseling; community service; alternative schools that offer small classes, career readiness, testing, and counseling; and a program to apprehend truant students and provide them with counseling and referral services. Federal and District dollars are used to fund these programs. Utah offers a number of programs to serve at-risk students. Programs include alternative middle schools, gang intervention, and homeless/disadvantaged minorities programs. These programs provide mentoring, counseling, and health services to students, and state funds are awarded to school districts through both competitive and formula grants. The Dropout Prevention Demonstration Program (DPDP)—funded at $5 million for fiscal year 2001—is the only federal program that has dropout prevention as its sole objective; because the program is new, the Department of Education has not yet evaluated its effectiveness. However, other federal programs are also used by local entities to provide dropout prevention services. For example, five federal programs have dropout prevention as one of their multiple objectives and several more programs—such as Safe and Drug-Free Schools and 21st Century Community Learning Centers—serve at-risk youth even though dropout prevention is not the programs’ stated goal. Reducing the dropout rate is not a stated goal of most current federal programs, and very few programs have been evaluated in terms of their effects on the dropout rate, so assessing how effective these programs have been in reducing the dropout rate is very difficult. Prior evaluations of the SDDAP—which have measured program effect on dropout rates—showed mixed results. Although some experts and state and local officials did not believe the creation of additional federal dropout programs was warranted, some of these officials suggested that a central source of information on the best dropout prevention practices could be useful to states, school districts, and schools. Currently, the only federal program that has dropout prevention as its sole objective is the DPDP. In fiscal year 2001, the Congress appropriated $5 million for the program. The program, in turn, awarded 13 grants of between $180,000 and $492,857 to 12 local education agencies (LEAs) and one state education agency (SEA) with dropout rates of at least 10 percent.
These grant recipients are to work in collaboration with institutions of higher education or other public or private organizations to build or expand upon existing strategies that have been proven effective in reducing the number of students who drop out of school. The Stephens County Dropout Prevention Project in Toccoa, Georgia, for example, was awarded $441,156 to screen all 2,400 students in Stephens County in grades 6 to 12 to determine specific needs based on at-risk traits. The project seeks to significantly reduce suspension, grade retention, and repeat offenses leading to expulsion and referrals to the court system through partnerships with the Communities in Schools of Georgia, the National Dropout Prevention Center, and the Department of Juvenile Justice. Another grant recipient, a tribal school located in Nixon, Nevada, was awarded $180,000 to assist approximately 200 Native American students in grades 7 to 12 who have not succeeded in a traditional public school setting to remain or return to high school and graduate by developing individualized education plans. In addition to DPDP, we identified five programs that have dropout prevention as one of their multiple objectives, with total funding of over $266 million from three federal agencies. In fiscal year 2000, Education received appropriations of $197.5 million to fund three of these programs, and the Department of Justice and the Department of Labor received total appropriations of $69.2 million to fund their programs (a brief arithmetic check of these totals follows below). Two programs account for most of these funds: Talent Search and School-to-Work. Education’s Talent Search program, funded at $100.5 million in fiscal year 2000, provides academic, career, and financial counseling to its participants and encourages them to graduate from high school and continue on to the postsecondary institution of their choice. Education and Labor, which jointly administer the School-to-Work Opportunities Act of 1994, each contributed $55 million in fiscal year 2000. This program’s goal is to provide students with knowledge and skills that will allow them to opt for college, additional training, or a well-paying job directly out of high school. Education’s Title I, Part D program, funded at $42 million in fiscal year 2000, provides grants to SEAs for supplementary education services to help youth in correctional and state-run juvenile facilities make successful transitions to school or employment upon release. Two smaller programs that also have dropout prevention as one of their goals are Justice’s Juvenile Mentoring Program (JUMP) and Labor’s Quantum Opportunities Program (QOP). JUMP was funded at $13.5 million in fiscal year 2000 and aims to reduce juvenile delinquency and gang participation, improve academic performance, and reduce the dropout rate through the use of mentors. Labor allocated $650,000 to QOP in fiscal year 2000 and states that its program goals include encouraging students to get a high school diploma, providing postsecondary education and training, and providing personal development courses. Twenty-three other federal programs serve at-risk youth, although dropout prevention is not the programs’ stated goal. (See app. III for a complete list of these programs.) Safe and Drug Free Schools and 21st Century Community Learning Centers are examples of such programs.
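Before describing those two examples, here is the arithmetic check of the appropriations figures cited above; treating Labor’s $55 million School-to-Work share as part of the $69.2 million Justice and Labor total is our inference from the amounts cited:

# Fiscal year 2000 appropriations, in millions of dollars, as cited above.
education = {
    "Talent Search": 100.5,
    "School-to-Work (Education share)": 55.0,
    "Title I, Part D": 42.0,
}
justice_and_labor = {
    "School-to-Work (Labor share)": 55.0,
    "JUMP (Justice)": 13.5,
    "QOP (Labor)": 0.65,
}
print(sum(education.values()))          # 197.5, matching the $197.5 million cited
print(sum(justice_and_labor.values()))  # 69.15, consistent with the $69.2 million cited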
Education’s Safe and Drug Free Schools Program, funded at $428.6 million in fiscal year 2000, works to prevent violence in and around schools and to strengthen programs that prevent the illegal use of alcohol, tobacco, and drugs. Education’s 21st Century Community Learning Centers Program, funded at $453 million in fiscal year 2000, enables schools to stay open longer and provide a safe, drug-free, and supervised environment for homework centers, mentoring programs, drug and violence prevention counseling, and recreational activities. None of the five programs for which dropout prevention is an objective tracks the portion of funds used for dropout prevention. However, many state and local officials informed us that they use one or more of these and the other 23 federal programs that serve at-risk youth to address the factors that may lead to students dropping out. The use of programs such as these for dropout prevention is consistent with a recent NDPC recommendation that dropout prevention proponents should look beyond traditional dropout prevention program funding and seek funds from programs in related risk areas, such as teenage pregnancy prevention, juvenile crime prevention, and alcohol and drug abuse prevention, to identify and secure grant funding sources. Since DPDP grants were just awarded in September 2001, Education has not been able to evaluate the program’s effect on the dropout rate. In addition, most federal programs that address dropout prevention have other goals, and the measurement of these goals takes precedence over measuring the program’s effect on the high school dropout rate. For example, programs that promote postsecondary education as their major goal, such as Talent Search, measure the program’s effect in helping program participants enroll in college rather than what portion of participants complete high school. Also, because many federal programs provide funds for states and localities to administer programs, responsibility for evaluating and measuring the effectiveness of programs is also devolved to the state and local level. For example, Education’s Title I Neglected and Delinquent Program mostly administers the distribution and allocation of funds to states. While many of the programs it funds list dropout prevention as one of their intended goals, states are not required to report on their program’s effect on dropout rates. The three major evaluations of the former dropout prevention program—Education’s SDDAP, which funded demonstrations from 1988 to 1995—have shown mixed results. A study of 16 targeted programs showed that intensive programs operating in middle school could improve grade promotion and reduce school dropout rates. However, the same study showed that programs implemented in high school did not affect personal or social outcomes that are often correlated with dropping out (e.g., students’ self-esteem, pregnancy, drug use, and arrest rates). The study’s authors concluded that dropout prevention programs are more effective when implemented in earlier grades. A second study of SDDAP programs, which focused on the impacts of school restructuring initiatives, concluded that restructuring would not, in the short term, reduce dropout rates. This study explained that school restructuring was often a lengthy process, and finding the true effect of such efforts on dropout rates could take longer than the 3- to 4-year period of most demonstration programs.
This study also explained that although dropout rates were not reduced in schools that restructured, other outcomes such as school climate—the environment of the school and how teachers and students interact—and test scores often improved and that these improved outcomes could ultimately affect the dropout rate. Finally, the third study evaluated 16 programs and found promising strategies for reducing dropout rates at all levels of elementary and secondary education. The study found that at the elementary school level, in-class adult friends (trained volunteers or helpers), after-school tutoring, and enrichment exercises that are directly related to in-class assignments appeared to be effective approaches. At the middle school level, coordinated teaching strategies, flexible scheduling, heterogeneous grouping of students, and counseling services were found to be useful. At the secondary school level, the study found that paid-work incentives monitored by the school and tied to classroom activities were very successful in promoting school engagement. While all three studies of SDDAP programs identified some promising practices or strategies for preventing dropouts or addressing the factors associated with dropping out, none of the programs studied were consistently effective in significantly reducing dropout rates. State and local officials also had numerous suggestions for reducing the dropout rate. Several of them suggested that Education develop a central source of information on the best dropout prevention strategies. For example, an administrator at Independence High School in San Jose, California, asked that the federal government act as a clearinghouse for information about effective dropout prevention programs, provide a list of people who could be contacted to find out about these programs, and identify programs that could be visited to observe best practices for preventing dropouts. A consultant for the California Department of Education suggested that the federal government could develop model dropout prevention programs and publish information on programs that were successful. The At-Risk Coordinators in Arizona, Idaho, Maine, and New York made similar suggestions for a national clearinghouse or information on best practices for preventing students from dropping out. As mentioned earlier, NDPC is an organization that provides an NDPC-developed list of effective strategies and information on self-reported model programs on its website. However, the NDPC is completely self-funded through memberships, grants, and contracts and does not have sufficient resources to (1) disseminate information that is available on its database of promising dropout prevention programs and practices, or (2) thoroughly review programs included in its model program listing. Instead, NDPC relies on its website to communicate about effective dropout prevention practices, and its data are based on voluntary submissions of program descriptions and promising practices by its members and other experts in the dropout prevention field. While some dropout prevention program officials mentioned NDPC as a useful resource, they believe a more complete and current database of program descriptions and promising practices would better serve their needs. Although there have been many federal, state, and local dropout prevention programs over the last 2 decades, few have been rigorously evaluated. Those federally funded programs that have been evaluated have shown mixed results.
Several rigorously evaluated local programs have been shown to reduce dropout rates, raise test scores, and increase college attendance. In addition, some state and local officials believe that they are implementing promising practices that are yielding positive outcomes for students, such as improved attendance and grades and reduced discipline problems, although their programs have not been thoroughly evaluated. Education could play an important role in reviewing and evaluating existing research and in encouraging or sponsoring additional research to rigorously evaluate the effectiveness of state and local programs. Subsequently, Education could disseminate the results of such research and information on the identified best practices for state and local use. Opportunities exist for Education to identify ways to collaborate with existing organizations, such as the NDPC, that are already providing some information on existing programs. As schools continue to look for ways to ensure all students succeed, such research and information could play a vital role in developing and implementing effective programs. We recommend that the Secretary of Education (1) evaluate the quality of existing dropout prevention research, (2) determine how best to encourage or sponsor the rigorous evaluation of the most promising state and local dropout prevention programs and practices, and (3) determine the most effective means of disseminating the results of these and other available studies to state and local entities interested in reducing dropout rates. We provided a draft of this report to the Department of Health and Human Services’ (HHS) Administration for Children and Families and the Department of Education. HHS had no comments. Education provided a response, which is included as appendix V of this report, and technical comments, which we incorporated when appropriate. Education agreed that dropping out is a serious issue for American schools, emphasized the importance of school improvement efforts in the No Child Left Behind Act of 2001, and provided additional information about relevant Education programs and activities. In response to our recommendations that Education evaluate the quality of existing dropout prevention research and determine how best to encourage or sponsor rigorous evaluation of the most promising state and local dropout prevention programs and practices, Education agreed that rigorous evidence is needed and said that it will consider commissioning a systematic review of the literature on this topic. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 3 days after the date of this letter. At that time we will send copies of this report to the Secretary of Education, appropriate congressional committees, and other interested parties. If you or your staff have any questions or wish to discuss this material further, please call me or Diana Pietrowiak at (202) 512-7215. Key contributors to this report are listed in appendix VI. To determine dropout rate trends and identify factors associated with dropping out, we obtained and reviewed reports, statistics, and studies developed by the National Center for Education Statistics (NCES), the Annie E. Casey Foundation, and the National Dropout Prevention Center (NDPC). We also obtained the papers presented at the Harvard University Dropouts in America symposium in January 2001 and subsequently made available on the Internet. 
In addition to interviewing officials at each of the entities listed above, we interviewed dropout prevention experts at universities, federal agencies, and private research organizations and obtained and reviewed their publications. To obtain information on the services offered by state, local, and private agencies to students who are at risk of dropping out, we conducted site visits in six states—California, Florida, Nevada, Pennsylvania, Texas, and Washington. We selected these states because our analysis of the literature and discussions with key dropout prevention experts identified a variety of promising dropout prevention programs within these states in each of the major types of dropout prevention approaches—supplemental services for at-risk students, different forms of alternative education, and school-wide restructuring efforts. Between February and August 2001, we also conducted telephone interviews with state at-risk coordinators in all 50 states and the District of Columbia who were either identified by the NDPC or referred to us by state program administrators. From the telephone interviews, we determined, among other things, (1) whether the state had a dropout prevention program, (2) if the state had other programs for at-risk youths, and (3) if any evaluations had been made of the effectiveness of the state programs’ impact on reducing dropouts. Our review focused only on dropout prevention programs and efforts. We did not obtain information on dropout recovery programs that try to get dropouts to return to school or on programs designed to help dropouts get a General Education Development (GED) credential or other type of high school credential. As a result, our list of programs whose funding could be used to prevent dropouts in appendix III does not include programs aimed only at dropout recovery or helping dropouts to get a GED or other type of high school credential. To identify what federal efforts exist to address dropout prevention and whether they have been proven effective, we interviewed officials from the U.S. Departments of Education, Labor, Justice, and Health and Human Services who manage programs that aid in reducing the dropout rate. We developed our initial list of federal dropout prevention programs through our literature review and updated the list with references made by the various federal program officials. We obtained information on how the programs operated, how funds were disbursed, how dropout prevention was prioritized, and whether or not the programs had been evaluated. We also reviewed evaluations of the federal School Dropout Demonstration Assistance Program (SDDAP), which funded local dropout prevention programs in fiscal years 1988-1995. Table 3 provides a description of each of the types of dropout and completion measures and the individual measures developed by each of three different organizations. Since 1989, the National Center for Education Statistics (NCES) has annually published a report on dropout rates, Dropout Rates in the United States. The most recent report includes status and event dropout rates and high school completion rates. Occasionally, the report includes cohort rates. Both national and state status dropout rates are developed annually by the Annie E. Casey Foundation for its Kids Count Data Book. A second measure of school completion, the “regular” graduation rate, is occasionally published by the Center for the Study of Opportunity in Higher Education in Postsecondary Education Opportunity.
Table 4 lists 23 federal programs that federal, state, and local officials identified as programs from which funds are used to serve at-risk youth, which in turn could help to prevent their dropping out. Thus, these programs provide funds that can be used for dropout prevention activities.

Completion rate (percent), by state: Alabama, 81.6; Alaska, 93.3; Arizona, 73.5; Arkansas, 84.1; California, 82.5; Colorado, 81.6; Connecticut, 91.7; Delaware, 91.0; District of Columbia, 88.0; Florida, 84.6; Georgia, 83.5; Hawaii, 91.8; Idaho, 86.4; Illinois, 87.1; Indiana, 89.4; Iowa, 90.8; Kansas, 90.4; Kentucky, 86.2; Louisiana, 82.1; Maine, 94.5; Maryland, 87.4; Massachusetts, 90.9; Michigan, 89.2; Minnesota, 91.9; Mississippi, 82.3; Missouri, 92.6; Montana, 91.1; Nebraska, 91.3; Nevada, 77.9; New Hampshire, 85.1; New Jersey, 90.1; New Mexico, 83.0; New York, 86.3; North Carolina, 86.1; North Dakota, 94.4; Ohio, 87.7; Oklahoma, 85.7; Oregon, 82.3; Pennsylvania, 89.0; Rhode Island, 87.9; South Carolina, 85.1.

In addition to those named above, Susan Chin, Amy Gleason Carroll, Jeffrey Rueckhaus, Charles Shervey, and Anjali Tekchandani made key contributions to this report. Alexander, Karl, Doris Entwisle, and Nader Kabbani, The Dropout Process in Life Course Perspective: Part I, Profiling Risk Factors at Home and School, Johns Hopkins University, Baltimore, Maryland, 2000. Cardenas, Jose A., Maria Robledo Montecel, Josie D. Supik, and Richard J. Harris, The Coca-Cola Valued Youth Program: Dropout Prevention Strategies for At-Risk Students, Texas Researcher, Volume 3, Winter 1992. Cotton, Kathleen, School Size, School Climate, and Student Performance, School Improvement Research Series, Close-Up #20, Northwest Regional Educational Laboratory, 1997. Dynarski, Mark, Philip Gleason, Anu Rangarajan, and Robert Wood, Impacts of Dropout Prevention Programs, Final Report, Mathematica Policy Research, Inc., Princeton, New Jersey, 1998. _____, Impacts of School Restructuring Initiatives, Final Report, Mathematica Policy Research, Inc., Princeton, New Jersey, 1998. Finn, Jeremy D., Withdrawing From School, Review of Educational Research, Summer 1989, Volume 59, Number 2. Gleason, Philip, and Mark Dynarski, Do We Know Whom To Serve? Issues in Using Risk Factors to Identify Dropouts, Mathematica Policy Research, Inc., Princeton, New Jersey, June 1998. Greene, Jay P., High School Graduation Rates in the United States, Center for Civic Innovation at the Manhattan Institute for Policy Research, November 2001. Kemple, James J., Career Academies: Impact on Students’ Initial Transitions to Post-Secondary Education and Employment, Manpower Demonstration Research Corporation, New York, December 2001. Kemple, James J., and Jason C. Snipes, Career Academies: Impact on Students’ Engagement and Performance in High School, Manpower Demonstration Research Corporation, New York, 2000. Kaufman, Philip, and Denise Bradby, Characteristics of At-Risk Students in NELS:88, U.S. Department of Education, National Center for Education Statistics, NCES 92-042, Washington, D.C., 1992. Kaufman, Phillip, Jin Y. Kwon, Steve Klein, and Christopher D. Chapman, Dropout Rates in the United States: 1998, U.S. Department of Education, National Center for Education Statistics, NCES 2000-022, Washington, D.C., November 1999. Kaufman, Phillip, Martha Naomi Alt, and Christopher D. Chapman, Dropout Rates in the United States: 2000, U.S. Department of Education, National Center for Education Statistics, NCES 2002-114, Washington, D.C., November 2001. McMillen, Marilyn, Dropout Rates in the United States: 1995, U.S.
Department of Education, National Center for Education Statistics, NCES 97-473, Washington, D.C., July 1997. Mortenson, Thomas G., High School Graduation Trends and Patterns 1981 to 2000, Postsecondary Education Opportunity, June 2001. Rossi, Robert J., Evaluation of Projects Funded by the School Dropout Demonstration Assistance Program, Final Evaluation Report, American Institutes for Research, Palo Alto, California, 1993. Slavin, Robert E., and Olatokumbo S. Fashola, Show Me the Evidence! Proven and Promising Programs for America’s Schools, Corwin Press, Inc., 1998. U.S. General Accounting Office, At-Risk and Delinquent Youth: Multiple Federal Programs Raise Efficiency Questions (GAO/HEHS-96-34, Mar. 6, 1996). _____, At-Risk Youth: School-Community Collaborations Focus on Improving Student Outcomes (GAO-01-66, Oct. 10, 2000). _____, Hispanics’ Schooling: Risk Factors for Dropping Out and Barriers to Resuming Education (GAO/PEMD-94-24, July 27, 1994). _____, School Dropouts: Survey of Local Programs (GAO/HRD-87-108, July 20, 1987). Wirt, John, Thomas Snyder, Jennifer Sable, Susan P. Choy, Yupin Bae, Janis Stennett, Allison Gruner, and Marianne Perie, The Condition of Education 1998, U.S. Department of Education, National Center for Education Statistics, NCES 98-013, Washington, D.C., October 1998. | The National Center for Education Statistics (NCES) reports that the national status dropout rate--the percentage of 16- through 24-year-olds who are not enrolled in school and who lack a high school diploma or a high school equivalency certificate--fluctuated between 10.9 and 12.5 percent between 1990 and 2000. However, dropout rates have varied considerably between regions of the country and among ethnic groups. Research has shown that dropping out is a long-term process of disengagement that begins in the earliest grades. NCES and private research organizations have identified two factors--an individual's family and his or her experience in school--that are related to dropping out. Various state, local, and private programs are available to assist youth at risk of dropping out of school. These programs range in scope from small-scale supplementary services that target a small group of students, such as mentoring or counseling services, to comprehensive school-wide restructuring efforts that involve changing the entire school to improve educational opportunities for all students. One federal program, the Dropout Prevention Demonstration Program, is specifically targeted to dropouts, but the program is new and the Department of Education has yet to evaluate its effectiveness. In September 2001, the program awarded grants to state and local education agencies working to reduce the number of school dropouts. Other federal programs have dropout prevention as one of their multiple objectives, and many more federal programs serve at-risk youth but do not have dropout prevention as a stated program goal. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The DI and SSI programs are the two largest federal programs providing cash assistance to people with disabilities. The DI program, established in 1956 by the Social Security Act, provides monthly cash benefits to workers with disabilities (and their dependents and survivors) whose employment history qualifies them for disability benefits. In 2002, SSA paid about $55.5 billion in DI benefits to 5.5 million workers with disabilities (age 18 to 64). SSI is a means-tested income assistance program created in 1972 that provides a financial safety net for individuals who are aged or blind or have other disabilities and who have low income and limited resources. Unlike the DI program, SSI has no prior work requirement. In 2002, SSA paid about $18.6 billion in SSI federal benefits to about 3.8 million people with disabilities (age 18 to 64). To be considered eligible for benefits for either SSI or DI as an adult, a person must be unable to perform any substantial gainful activity by reason of a medically determinable physical or mental impairment that is expected to result in death or that has lasted or can be expected to last for a continuous period of at least 12 months. Work activity is generally considered to be substantial and gainful if the person’s earnings exceed a particular level established by statute and regulations. To obtain disability benefits, a claimant must file an application online, by phone or mail, or in person at any of SSA’s field offices. If the claimant meets the non-medical eligibility criteria, the field office staff forwards the claim to the appropriate DDS office. DDS staff—generally a team composed of disability examiners and medical consultants—obtains and reviews medical and other evidence as needed to assess whether the claimant satisfies program requirements, and makes the initial disability determination. If the claimant is not satisfied with the decision, the claimant may ask the DDS to reconsider its finding. If the claimant is not satisfied with the reconsideration, the claimant may request a hearing before one of SSA’s federal administrative law judges in an SSA hearing office. If the claimant is still dissatisfied with the decision, the claimant may request a review by SSA’s Appeals Council. The 1954 amendments to the Social Security Act specified that disability determinations would be made by state agencies under individual contractual agreements with SSA. Under these agreements, SSA’s primary role was to fund the states’ disability operations. However, following criticism from GAO and others about the quality and uniformity of the disability determination process, Congress amended the Social Security Act in 1980 to strengthen SSA management of the disability programs and allow greater SSA control and oversight of the DDSs. The 1980 amendments directed SSA to issue regulations specifying performance standards and administrative requirements to be followed to assure effective and uniform administration of disability determinations across the nation. The regulations issued by SSA, which established the current federal-state relationship, allow SSA to remove the disability determination function from a state if the DDS fails to make determinations that meet thresholds for performance accuracy and processing time. SSA’s regulations give DDSs maximum managerial flexibility in meeting the performance standards, allowing them to retain substantial independence in how they manage their workforce. 
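To make the sequence of review levels concrete, the sketch below models the adjudication ladder just described, along with a simple earnings screen for substantial gainful activity. This is a minimal illustration under stated assumptions, not SSA's actual systems: the dollar threshold is a placeholder, since the statutory level is set by statute and regulations and changes over time.

```python
# Illustrative sketch of the disability claim adjudication ladder described
# above. SGA_MONTHLY_THRESHOLD is a placeholder value, not the statutory
# figure, which changes over time.
from typing import Optional

SGA_MONTHLY_THRESHOLD = 780  # hypothetical monthly earnings level, in dollars

# Review levels, in the order a dissatisfied claimant may pursue them.
ADJUDICATION_LADDER = [
    "DDS initial determination",
    "DDS reconsideration",
    "hearing before an SSA administrative law judge",
    "SSA Appeals Council review",
]

def is_substantial_gainful_activity(monthly_earnings: float) -> bool:
    """Work is generally considered substantial and gainful when
    earnings exceed the established level."""
    return monthly_earnings > SGA_MONTHLY_THRESHOLD

def next_appeal_level(current_level: str) -> Optional[str]:
    """Return the next level a dissatisfied claimant may request,
    or None once the ladder is exhausted."""
    position = ADJUDICATION_LADDER.index(current_level)
    if position + 1 < len(ADJUDICATION_LADDER):
        return ADJUDICATION_LADDER[position + 1]
    return None

print(is_substantial_gainful_activity(500.0))   # False: below the placeholder level
print(next_appeal_level("DDS reconsideration")) # "hearing before an SSA administrative law judge"
```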
Under these regulations, for example, the DDSs are to follow state personnel standards in selection, tenure, and compensation of DDS employees. As employees of the state, DDS staff are subject to the rules and regulations of each state’s individual personnel classification system. Classification systems generally categorize positions on the basis of job responsibilities and the knowledge, skills, and competencies required to perform them. Within a classification system, a group of positions that have sufficiently similar responsibilities are put in the same class. Arranging positions in classes with common levels of difficulty and responsibility makes it possible to set ranges of compensation for whole classes of jobs across multiple state agencies. Specifying the responsibilities of each position also allows the state to identify and develop effective hiring qualifications, promotion criteria, and training requirements. The development and operation of such a classification system depend upon the adequacy of information about individual positions. Within the federal-state relationship, each DDS reports to its own state government, usually to a parent agency such as the state vocational rehabilitation agency. DDS staff generally include a variety of positions, such as medical consultants, vocational specialists, quality assurance personnel, and disability examiners. The number of disability examiners varies substantially among the DDSs. Data from our survey show that the number of full-time permanent examiners in each DDS ranged from 9 to 529 at the end of fiscal year 2002. Our prior work has found that the examiner’s job—which involves working with medical consultants to determine impairment severity, ability to function, and disability benefit eligibility—requires considerable expertise and knowledge of complex regulations and policies. And according to the Social Security Advisory Board, changes in agency rules and in the types of disability claims received by the DDSs have made disability decision-making more subjective and difficult. In addition, as part of its efforts to reduce claims-processing times, SSA has been testing a new disability examiner position called the single decision-maker (SDM), which would expand an examiner’s authority to independently decide claimants’ eligibility for benefits. Twenty DDSs are testing this new position. Qualification requirements for new examiner hires vary substantially among the states. While five DDSs require a master’s or a registered nursing degree for certain new examiner hires, figure 1 shows that over one-third of all DDSs can hire new examiners with a high school diploma or less. In addition, data show that examiners in nearly one-half of all DDSs are covered by union agreements, and issues related to compensation levels, hiring and promotion procedures, and weekly hours worked are open to union negotiation in the majority of these DDSs. To enhance the skills of both new and experienced examiners, SSA provides a number of optional training tools to the DDSs, including written materials covering new examiner basic training, interactive video programs supplementing basic training and providing refresher training and updates on policy changes, and materials and presentations provided by the regional offices and SSA headquarters. However, states have primary responsibility for training examiners, and many DDSs adapt or supplement SSA’s training to meet their examiners’ training needs.
DDSs generally provide new examiners with SSA’s basic examiner training, followed by extensive on-the-job training, including mentoring by experienced examiners who guide the less experienced examiners in becoming more proficient in the disability claims process. New hires generally are not considered fully proficient until after one to two years of experience. The DDSs’ ability to hire examiners is affected by both SSA and state government funding decisions and hiring policies. SSA determines the funding available for each DDS and advises the DDSs about the number of full-time equivalent staff supported by this funding, and SSA adjusts these levels throughout the fiscal year based on workload fluctuations and funding availability. Normally SSA allows DDSs to replace staff who leave the DDS as long as they remain within authorized staffing levels, but for over half of fiscal year 2003, SSA froze DDS hiring, preventing DDSs from hiring new examiners or replacing those who had left. SSA officials told us that the temporary freeze was necessary to ensure that SSA’s expenditures did not exceed authorized levels and to avoid future layoffs of DDS staff. DDSs also have experienced state government hiring restrictions in recent years. Despite full federal funding, under the current federal-state relationship, DDSs generally cannot spend funds for new personnel without the approval of their state governments. States currently are facing severe budget crises, causing them to cut their payrolls for most state government functions. When states use methods such as hiring freezes, reductions in force, and early retirement incentives to limit spending on state employee payrolls, these policies sometimes prevent DDSs from hiring and retaining examiners at levels authorized by SSA. In earlier reports, we have noted that SSA’s disability determination process is mired in concepts from the past and needs to be brought into line with the current state of science, medicine, technology, and labor market conditions. With other federal disability programs similarly structured around outmoded concepts, we designated modernizing federal disability programs—including SSA’s DI and SSI disability programs—as a high-risk area in 2003. (See appendix III for a list of GAO reports on modernizing federal disability programs.) We made this designation owing in part to SSA’s (1) outmoded concepts of disability, (2) lengthy processing times, and (3) decisional inconsistencies: SSA’s outmoded concepts of disability. While technological and medical advances and societal changes have increased the potential for some people with disabilities to participate in the labor force, few DI and SSI beneficiaries leave the disability rolls to work. Our prior work shows that, unlike some private sector disability insurers and social insurance systems in other countries, SSA does not incorporate into its disability eligibility assessment process an evaluation of what is needed for an individual to return to work. These private insurers and other social insurance systems have access to staff with a wide range of expertise to apply, not only in making eligibility decisions, but also in providing return-to-work assistance. We have recommended that SSA develop a comprehensive return-to-work strategy that integrates earlier identification of work capacities and the expansion of such capacities by providing return-to-work assistance for applicants and beneficiaries.
Adopting such a strategy is likely to require fundamental changes to the disability determination process, as well as changes to staff skill mixes and areas of expertise. Lengthy processing times for disability claims. The disability claims process can be lengthy, with many individuals who appeal SSA’s initial decision waiting a year or longer for final decisions on their benefit claims. According to SSA, a claimant can wait as long as 1,153 days from initial claim through a decision from the Appeals Council. As one means of reducing its claims-processing time, SSA aims to eliminate backlogs for initial disability claims, hearings, and appeals by 2008. Nevertheless, growth in the disability claims workload is likely to exacerbate SSA’s claims-processing challenges: SSA expects the DI rolls to grow by 35 percent between 2002 and 2012. Inconsistencies in disability decisions. SSA has had difficulty ensuring that decisions regarding a claimant’s eligibility for disability benefits are accurate and consistent across adjudicative levels and locations, raising questions about the fairness, integrity, and cost of these programs. For example, the Social Security Advisory Board has shown wide variances among the DDSs in rates of allowances and denials of disability benefits. The Advisory Board has cited differences in state-established personnel policies such as salaries, training, and qualifications of disability examiners across the DDSs, along with state economic and demographic differences, as some of the key factors that may affect the consistency of disability decision-making. The Commissioner’s September 2003 testimony sets forth her long-term strategy for improving the timeliness and accuracy of the disability claims process and fostering return to work for people with disabilities. For example, to speed decisions for some claimants, the Commissioner intends to initiate an expedited decision for claimants with more easily identifiable disabilities, such as aggressive cancers. Under this new approach, special units located primarily in SSA’s regional offices would handle the expedited claims, leaving DDS examiners responsible for evaluating the more complex claims. The Commissioner’s strategy also aims to increase decisional accuracy by, among other approaches, requiring DDS examiners to develop more complete documentation of their disability determinations, including explaining the basis for their decisions. Beyond steps to improve the timeliness and accuracy of the process, the Commissioner also plans to conduct several demonstrations aimed at helping people with disabilities return to work by providing work incentives and opportunities earlier in the disability process. In addition, to improve the disability decision process, the Commissioner has implemented some shorter-term remedies while developing her longer-range strategies. For example, SSA is accelerating its transition to an electronic disability claims folder, through which the DDSs, the field offices, and the Office of Hearings and Appeals are to be linked to one another. The folder is being designed to transmit case file data electronically from one claims-processing location to another and to serve as a data repository—storing documents that are keyed in, scanned, or faxed. According to the Commissioner, successful implementation of the electronic folder is essential for improving the disability process. 
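Because the electronic folder is described as both a transmission mechanism and a data repository, a minimal sketch may help fix ideas. The field names and structure below are assumptions for illustration only, not SSA's actual schema.

```python
# Hypothetical sketch of an electronic disability folder record of the kind
# described above: it stores documents (keyed in, scanned, or faxed) and
# moves case file data between claims-processing locations. All names here
# are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Document:
    doc_id: str
    source: str        # e.g., "keyed", "scanned", or "faxed"
    description: str

@dataclass
class ElectronicFolder:
    claim_id: str
    current_location: str                    # e.g., "field office", "DDS", "OHA"
    documents: List[Document] = field(default_factory=list)
    history: List[str] = field(default_factory=list)

    def add_document(self, doc: Document) -> None:
        """Store a document in the repository portion of the folder."""
        self.documents.append(doc)

    def transfer(self, destination: str) -> None:
        """Move the folder to the next claims-processing location,
        keeping an audit trail of where it has been."""
        self.history.append(self.current_location)
        self.current_location = destination

# Example: a folder opened at a field office, given a faxed medical report,
# and routed to a DDS for the disability determination.
folder = ElectronicFolder(claim_id="C-0001", current_location="field office")
folder.add_document(Document("D-1", "faxed", "treating physician report"))
folder.transfer("DDS")
print(folder.current_location, folder.history)  # DDS ['field office']
```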
In our prior work, we have cautioned SSA to ensure that it has the right mix of skills and capabilities to support this major technological transition. Recognizing the importance of people to the success of any organization in managing for results, GAO designated strategic human capital management a government-wide high-risk area in 2001. In prior reports on this high-risk area, we identified strategic workforce planning as essential to effective performance and stated that it should be a priority of agency leaders. We also noted that effective workforce planning must be fully integrated with an agency’s mission and program goals and be based on accurate and comprehensive workforce data. We recently identified a few key principles for strategic workforce planning. These principles include involving top management, employees, and other key stakeholders in developing, communicating, and implementing the workforce plan; determining the critical skills and competencies needed to achieve current and future program goals, and developing strategies to fill identified gaps; building the capability necessary to address administrative, educational, or other requirements to support the workforce strategies; and monitoring and evaluating progress in meeting workforce goals and how well the workforce plan has contributed to reaching overall program goals. Congress has additionally recognized the importance of workforce planning and, in 2002, added to the Government Performance and Results Act a provision requiring agencies to include human capital strategies needed to meet their strategic goals in their annual performance plans. We have found that high-performing organizations use workforce planning as a management tool to develop a compelling case for human capital investments and to anticipate and prepare for upcoming human capital issues that could jeopardize accomplishment of the organizations’ goals. (See appendix III for a list of GAO reports on human capital management.) The DDSs face several key challenges in retaining disability examiners and enhancing their expertise: high turnover, difficulties in recruiting and hiring, and gaps in key knowledge and skill areas. The DDSs are experiencing high and costly turnover of examiners, which data from our survey show is fostered in part by stressful workloads and noncompetitive salaries. DDSs need to recruit and hire sufficient numbers of qualified new examiners to fill the vacancies resulting from the high turnover. Yet more than three-quarters of DDS directors reported recruiting and hiring difficulties. Directors said such difficulties were due in part to state-imposed personnel restrictions, such as state limits on examiner salaries and hiring. Finally, directors reported that many examiners need additional training in key analytical areas that are critical to disability decision-making, including assessing credibility of medical information, evaluating applicants’ symptoms, and analyzing applicants’ ability to function. Over half of all DDS directors responding to our survey said that examiner turnover in their offices was too high. Our analysis of data from our survey and from federal agencies shows that, over fiscal years 2000 through 2002, DDS examiner turnover was about twice that of Veterans Benefits Administration (VBA) disability examiners with responsibilities similar to those of DDS examiners. For example, DDS examiner turnover averaged 13 percent over fiscal years 2000 to 2002, compared with 6 percent for VBA disability examiners.
(See table 1.) In addition, during the same period, the turnover rate of DDS examiners was substantially greater than that of all SSA employees as well as that of all federal government employees. DDS examiner turnover has been even higher among new hires: turnover of examiners hired in fiscal year 2001 was 25 percent, compared with 14 percent among all DDS examiners. Moreover, while it is typical for new hires to leave at higher rates than other employees, turnover of new DDS examiners was considerably higher than that of new VBA examiners, new SSA employees, and all new federal government employees in fiscal years 2000 and 2001. Our survey results also show that examiner turnover is particularly high in some DDSs. An examination of three-year averages (fiscal years 2000 to 2002) of DDS turnover rates showed that one DDS had a turnover rate of 43 percent, and a quarter of the DDSs had turnover rates of 20 percent or greater. (See fig. 2.) When we asked DDS directors in our survey about the consequences of turnover, they told us that examiner turnover increased hiring and training costs and hindered claims processing by decreasing overall examiner skill levels and increasing examiner caseloads, claims-processing times, and backlogs, as follows: Increased hiring and training costs. Nearly two-thirds of all DDS directors reported in our survey that turnover had increased SSA’s recruiting, hiring, or training costs. Directors and other DDS officials explained in interviews why these costs had increased as a result of turnover. Some DDS directors said that they must invest time in reviewing applications, interviewing candidates, and making hiring decisions. They also said they have to provide inexperienced new hires with 12 to 18 months of extensive training and mentoring. SSA estimates the cost of turnover of its own employees at 1.5 times average annual salary. Using this rate, we estimate that the cost of DDS examiner turnover in fiscal year 2002 was in the tens of millions of dollars. Decreased overall examiner skill levels. Two-thirds of all DDS directors reported that losses of experienced staff due to turnover have decreased overall examiner skill levels. While SSA officials told us that one to two years of experience is generally required to become proficient in the examiner role, our survey data show that, in two-thirds of the DDSs, at least a quarter of examiners had two years or less experience at the end of fiscal year 2002. Increased examiner caseloads. Nearly two-thirds of all DDS directors we surveyed said turnover had increased examiner caseload levels. DDS directors and SSA officials explained in interviews and survey comments that the caseloads of examiners who leave the DDS have to be redistributed among those who remain. Some directors told us that these higher caseloads created a more stressful work environment for the remaining employees. Increased claims-processing times and backlogs. Our survey results showed that over one-half of all directors said that turnover had increased DDS claims-processing times and backlogs. DDS directors and SSA officials we spoke with explained that turnover increased claims-processing times because new examiners hired to fill vacancies are less productive due to their inexperience and time spent in training. These officials also told us that the productivity of experienced staff is lowered while they are training and mentoring the new examiners. SSA itself acknowledged the potential impact on service in a 2001 internal document.
This document noted that the need to replace retiring managers, by drawing from an examiner pool already diminished by turnover, would further reduce the examiner ranks and exacerbate the challenge of processing the growing claims workload. In addition, we noted in a prior report that a majority of DDS directors expressed the view that examiner turnover is likely to jeopardize their ability to complete periodic reviews of beneficiaries’ disability status, known as continuing disability reviews, potentially contributing to backlogs of these reviews. When we asked DDS directors about causes of examiner turnover, more than two-thirds identified each of the following as contributing factors: (1) large examiner caseloads along with workplace stress, high production expectations, and highly complex work and (2) noncompetitive pay. High caseloads, stress, production expectations, and highly complex work. Over two-thirds of all DDS directors identified large examiner caseloads, a stressful workplace, high production expectations for the number of cases completed, and the highly complex nature of the work as factors contributing to examiner turnover. DDS directors explained in interviews that the combination of growth in the claims workloads and increasingly complex examiner responsibilities is making the examiner position more challenging and stressful. DDS directors also noted in our survey and in interviews that insufficient staffing had increased the caseloads and stress levels of their examiners. Nearly 9 out of 10 DDS directors surveyed reported that the number of examiners in their DDSs had not been sufficient for their workload in at least one of the past three fiscal years, and nearly all of these directors said that this understaffing had resulted in a more stressful work environment. Noncompetitive pay. Two-thirds of all directors stated that noncompetitive pay had contributed to examiner turnover. Our survey data showed that many state DDS examiners were paid substantially less than examiners employed by the federal DDS in 2002 despite comparable skills and experience. Specifically, all of the state DDSs for which we have data have average examiner salaries that are less than the federal DDS average salary, and over half of the DDSs (31) have an average examiner salary that is less than two-thirds of the federal DDS average salary. In addition, we found that DDS examiner salaries are substantially lower than those of VBA examiners nationwide. For example, the average salary for DDS examiners was $40,464 in 2002, compared with $49,684 for VBA examiners. Specifically, we found that average DDS examiner salaries are less than those of VBA examiners in 47 states. (See fig. 3.) Several DDS directors told us in interviews that examiners have left some DDSs to accept higher salaries in federal agencies, particularly in SSA offices. For example, our analysis of selected case data provided by two DDS directors showed that, between 2000 and 2003, 13 former examiners received pay increases ranging from 9 to 48 percent when they moved from their DDSs to positions in SSA offices. In addition to facing high turnover and growing caseloads, more than three-quarters of all DDS directors (43) reported experiencing difficulties over a three-year period in recruiting and hiring enough people who could become successful examiners. 
Of these directors, more than three-quarters said that such difficulties contributed to decreased accuracy in disability decisions or to increases in job stress, claims-processing times, examiner caseloads, backlogs, or turnover. For example, one SSA official explained that, because of state-imposed hiring restrictions, one DDS developed a large backlog of cases that negatively affected its productivity. When we asked DDS directors what made it difficult for their DDSs to recruit and hire, they said that the following factors, many of which were related to state personnel restrictions, made it moderately to much more difficult than it would be otherwise to recruit and hire: state limits on examiner salaries and other forms of compensation, restrictive job classification system for state employees, state-imposed hiring limitations or hiring freezes and lengthy time periods for the state to hire DDS examiners, and SSA-imposed hiring restrictions and budget allocations limiting DDS staffing levels. State limits on examiner salaries and other forms of compensation. More than two-thirds of all directors reported that state limits on examiner salaries hindered recruiting and hiring, and the same proportion reported that noncompetitive salaries were insufficient to recruit or retain staff with the skills necessary to assume enhanced examiner responsibilities. One DDS director noted in survey comments that the low entry-level salary for examiners in that particular state no longer attracted “…the caliber of employees needed to perform the increasingly complex job.” Another commented that, owing to noncompetitive salaries, job candidates “…who have the requisite combination of skills needed as a [disability examiner] will find better offers of employment, either better pay or less workload stress.” And officials we spoke with in an SSA regional office said that low examiner salaries in still another DDS have meant that this DDS has been unable to recruit candidates with strong analytical skills. They noted that the DDS has, therefore, had difficulty training its new examiners in such challenging tasks as weighing the credibility of medical and other evidence. In addition to citing limits on salaries, more than half of all directors reported that state limits on other forms of compensation, such as performance-based pay and hiring bonuses, also contributed to recruiting and hiring difficulties. Restrictive job classification system. Nearly one-half of all DDS directors attributed difficulties in recruiting and hiring examiners to their restrictive state job classification systems. Close to a third of all states place disability examiners in the same classification as other positions—such as a vocational rehabilitation specialist—and some DDS officials we interviewed said this made it difficult to attract people with skills appropriate to the disability examiner position. State-imposed hiring limitations and lengthy time for hiring. Nearly one-half of all DDS directors cited state hiring limitations or hiring freezes—and more than one-third reported lengthy hiring processes—as impediments to acquiring qualified examiners. For instance, officials we interviewed in one DDS explained that their state government had capped the number of staff the DDS could hire. These officials noted that, while SSA was willing to fund hiring above that level, it could take three years to obtain the state legislature’s approval to increase the DDS staffing level.
SSA officials told us that another DDS could only hire individuals who have taken a required state test. They explained that, because the state administers the test only two times a year, the requirement hampers DDS hiring efforts. SSA-imposed hiring restrictions and budget allocations. Close to two-thirds of all DDS directors said that, over the past three fiscal years, SSA-imposed hiring restrictions and budget allocations that limit DDS staffing levels have presented recruiting and hiring challenges for the DDSs. DDS managers explained in interviews and in survey comments that, given the one to two years it takes for an examiner to become fully trained, DDSs that are restricted from quickly replacing staff lost to attrition will not have sufficient numbers of experienced examiners to process future claims. In addition to high turnover and difficulties in recruiting and hiring, the DDSs are also experiencing gaps in key knowledge and skill areas. When we surveyed all DDS directors about specific knowledge and skill needs, nearly one-half said that at least a quarter of their examiners needed additional training or mentoring in each of the following areas to successfully assume expanded responsibilities under an enhanced examiner position in either the present or the future: assessment of an applicant’s symptoms and evaluation of the credibility of medical and other evidence, evaluation of the weight to be given to medical evidence from a treating physician, assessment and documentation of an applicant’s ability to function, assessment of vocational factors, updates on policies and procedures, and assessment of childhood disabilities. Even for those 19 DDSs in our survey that were testing the enhanced examiner position at the time of our study, over half (11 DDSs) reported that at least a quarter of the examiners with expanded responsibilities needed additional training or mentoring in two or more of these same knowledge and skill areas, and eight of these directors reported needs in four or more of these areas. But regardless of whether a DDS was testing this enhanced position, these areas are critical to the examiner’s task of disability decision-making in general. Indeed, one DDS director explained in an interview that, while that DDS was not officially testing this position, over the last several years it had hired examiners who were able to function in a manner that was increasingly independent of the medical consultant. This director noted that, as a result, it was becoming more difficult to distinguish the responsibilities of the disability examiner from those of an examiner with enhanced authority. Moreover, under SSA’s new approach for improving the disability determination process, these same knowledge and skill areas will be even more critical as DDS examiners take responsibility for evaluating only the more complex claims and as they are required to fully document and explain the basis for their decisions. DDS directors cited several obstacles to examiners receiving needed training or mentoring. These obstacles primarily involved high workload levels that limited the time available to either provide or receive training or mentoring. Specifically, more than 70 percent of all DDS directors reported that work demands impeded mentors from providing examiners with needed on-the-job training.
In addition, about two-thirds of all DDS directors reported that either the large size of examiners’ caseloads or high expectations for completing those cases did not allow examiners enough time to attend training. And more than half of all directors cited high work levels as a barrier to examiners seeking mentoring assistance. Despite the workforce challenges facing them, a majority of DDSs do not conduct long-term, comprehensive workforce planning. Of the DDSs that engage in longer-term workforce planning, a majority have plans that lack key workforce planning strategies, such as those for recruiting, retention, or succession planning. Directors identified numerous obstacles to long-term workforce planning, such as a lengthy state process to approve DDS human capital changes. The majority of DDSs do not conduct long-term, comprehensive workforce planning. As figure 4 shows, more than half of all the DDSs have workforce planning time horizons of less than two years, and almost one-half have a time horizon of no longer than a year (the time horizon of SSA’s annual budget process for the DDSs). DDS directors who reported that their workforce planning time horizons are no longer than a year mainly rely on SSA’s annual budget process for the DDSs for their workforce planning. However, SSA officials told us in interviews that their budget process is not designed to serve as a long-term strategic workforce planning process. These officials said that the following strategies of comprehensive, long-term workforce planning are generally not part of the budget process but rather are left to the states: recruiting strategies; retention strategies; training and professional development strategies; compensation strategies; performance expectation and evaluation strategies; employee-friendly workplace strategies; succession planning and strategies for maintaining expertise in the long term; and contingency plans, in the event that resource levels do not meet expectations. In addition, even among the 28 DDSs that engage in workforce planning that is longer-term than one year, the majority (18) lack one or more of these key workforce planning strategies. Furthermore, many DDSs do not collect the data needed to develop effective workforce plans. Although DDSs face high turnover and are expected by SSA to experience a retirement wave in the next decade, over half of all DDS directors said they had not made projections of expected retirements and other separations for examiners and related staff within the last two fiscal years. Although the majority of DDSs do not conduct comprehensive, long-term workforce planning, some state governments do engage in strategic workforce planning efforts that encompass DDS employees. For example, the state parent agency of one DDS has produced reports identifying the workforce risks faced by the DDS (such as a coming retirement wave) and has assisted the director with succession planning. However, ongoing studies of state government workforce planning efforts have found that formal strategic workforce planning is not taking place in all states. During an interview with several DDS directors, we were told that even states with sophisticated long-term workforce planning efforts are not necessarily focusing on ensuring that their DDSs have the workforces needed to accomplish SSA goals, such as reducing claims-processing times.
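As a rough illustration of the separation projections that many DDSs are not producing, the sketch below applies the average turnover rate and the 1.5-times-salary cost-of-turnover figure cited earlier to a hypothetical examiner headcount. The headcount and the no-backfill assumption are invented for illustration; only the salary, turnover, and cost-factor figures come from the report.

```python
# Back-of-the-envelope separation projection for workforce planning.
# EXAMINERS is a hypothetical headcount; the turnover rate approximates the
# DDS average for fiscal years 2000-2002, and the salary and 1.5x
# cost-of-turnover figures are taken from the report.

EXAMINERS = 500          # hypothetical examiner headcount
TURNOVER_RATE = 0.13     # approximate average annual DDS examiner turnover
AVERAGE_SALARY = 40_464  # average DDS examiner salary in 2002, in dollars
COST_FACTOR = 1.5        # SSA's estimated cost of turnover per departure

def project_separations(years: int) -> None:
    """Project annual separations and replacement costs, assuming a
    hiring freeze (no backfill) so the headcount declines each year."""
    staff = float(EXAMINERS)
    for year in range(1, years + 1):
        separations = staff * TURNOVER_RATE
        replacement_cost = separations * COST_FACTOR * AVERAGE_SALARY
        staff -= separations  # no replacements approved this year
        print(f"Year {year}: ~{separations:.0f} separations, "
              f"~{staff:.0f} examiners remain, "
              f"cost to replace ~${replacement_cost:,.0f}")

project_separations(3)
# Year 1: ~65 separations, ~435 examiners remain, cost to replace ~$3,945,240
```

Scaled up to the roughly 7,000-plus examiners nationwide, this simple arithmetic is consistent with the report's observation that fiscal year 2002 turnover costs ran to tens of millions of dollars; a real workforce plan would also model hiring lags, retirements, and training time.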
DDS directors noted in interviews that they face unique challenges related to the federal-state relationship that compound the difficulties of planning for future workforce needs. We asked DDS directors in our survey to what extent they had experienced various factors that might make workforce planning more difficult than it would be otherwise. Directors identified the following as major obstacles to long-term workforce planning: Lengthy state processes to approve DDS human capital changes. Over half of all DDS directors said that lengthy state processes to approve DDS human capital changes made statewide DDS long-term workforce planning more difficult. For example, an SSA official said it took over a year to obtain approval to hire seven DDS staff due to a state hiring freeze. In addition, a 2001 audit by SSA’s Office of the Inspector General found that the parent agency of one DDS had failed to provide sufficient staffing resources, such as timely permission to fill vacancies, for the DDS to efficiently process its disability workload. Inconsistencies between state and SSA human capital policies. Two-thirds of all DDS directors reported that long-term planning is made more difficult than it would be otherwise due to inconsistencies between state and SSA human capital policies, such as those related to staffing levels. For example, a former DDS director we spoke with explained that directors have had difficulties planning for future needs because of discrepancies between hiring levels authorized by SSA and those approved by their states. One DDS director told us that after working for two years to obtain state approval to hire additional examiners initially authorized by SSA, the DDS lost permission from SSA to fill the positions. Directors’ concern that SSA does not incorporate DDS workforce plans when making resource decisions. When asked in our survey what makes long-term planning more difficult, over two-thirds of DDS directors reported their concern that SSA does not incorporate the DDSs’ workforce plans when making resource decisions. Moreover, 45 DDS directors responded that they had only some or no opportunity to factor future DDS human capital needs into SSA’s spending projections beyond the upcoming fiscal year. Several DDS officials explained in interviews that long-term planning seemed futile if SSA was not going to use the results of the DDS planning efforts when making resource decisions. SSA officials, however, told us that they consider input from the DDSs related to funding decisions on a regular basis. SSA officials explained that the agency must disburse funds within its own overall budget allocation, and this often does not allow for meeting all DDS funding requests. Uncertainty about future resource levels from SSA and state-imposed hiring restrictions or separation incentives. Over three-quarters of all DDS directors we surveyed reported that long-term planning is made more difficult by uncertainty about future resource levels from SSA, as well as uncertainty about resources needed to implement major changes in SSA policies, procedures, and systems. In addition, one-half of DDS directors surveyed said that DDS long-term workforce planning was made more difficult by uncertainty about state-imposed hiring restrictions or separation incentives. Insufficient time to attend to future problems and insufficient data for workforce planning.
Three-quarters of all directors surveyed said that they had insufficient time to attend to future problems because of the need to focus on current human capital challenges. One DDS director said in an interview that the day-to-day demands of directors’ jobs, such as managing high caseloads and hiring and training new examiners, often prevent them from planning for future workforce needs. Other DDS directors and officials told us that, when planning does take place, it is generally crisis-driven and reactive rather than long-term and strategic. In addition, over half of the directors reported in our survey that insufficient data for workforce planning makes DDS long-term workforce planning more difficult. Moreover, DDSs that do not engage in workforce planning longer-term than one year were more likely than other DDSs surveyed to cite insufficient data and planning tools, such as statistical software and information technology systems, as challenges that make long-term workforce planning more difficult. SSA’s workforce efforts have not sufficiently addressed both present and future DDS workforce challenges. Neither SSA’s strategic plan, nor its annual performance plan, nor its workforce plan adequately addresses the human capital challenges facing the DDSs. In addition, in our survey, DDS directors reported being dissatisfied with the adequacy of the training that SSA provides to the DDSs. Beyond training, SSA has not consistently provided other human capital assistance across the DDSs and faces difficulties negotiating human capital changes, such as increases in examiner salaries, with state governments. Finally, SSA has not used the statutory authority it has to set standards for the DDS workforce. SSA has not developed a nationwide strategic workforce plan that addresses present and future human capital challenges in the DDSs. As shown in figure 5, SSA does recognize a need to have higher-skilled and better-compensated DDS employees. In addition, SSA’s strategic plan for 2003-2008 places a high priority on improving the accuracy and the timeliness of the disability decision-making process. While accomplishment of this objective depends to a great extent on the DDS workforce, the plan cautions that the DDSs, like SSA, will face a continuing challenge of hiring and retaining a highly skilled workforce in a competitive job market. Nevertheless, SSA’s strategic plan, as well as the agency’s annual performance plan and workforce plan, are all largely silent on the means and strategies the agency will use to recruit, develop, and retain a high-performing DDS workforce, even though the Government Performance and Results Act now requires agencies to include in their annual performance plans a description of the human capital strategies needed to meet their strategic goals.

(Figure 5, excerpts from SSA planning documents. Strategic objectives cited: to deliver high-quality, citizen-centered service, and to strategically manage and align staff to support SSA’s mission. Quoted passages: “The Agency’s focus on the front-end of the disability processes…required a corresponding investment in the SSA and DDS employees involved in those processes; both workforces needed to be higher skilled and compensated.” “One of SSA’s highest priorities is to improve service to the public in the disability programs from the initial claim through the final... appeal... The length of time it takes to process these claims is unacceptable.” “SSA and the State DDSs will be faced with the continuing challenge of hiring and retaining a highly skilled and diverse workforce in what is expected to be a very competitive job market.” “SSA’s Future Workforce Transition Plan was created…as a requirement of SSA’s strategic plan to outline how SSA will transition from the workforce we have today to the workforce we will need in the future.”)

SSA officials said in interviews that SSA is no longer pursuing two proposed strategies for improving training for disability examiners. Absent any strategic workforce plan addressing DDS employees, SSA does not use data that it collects on the DDS workforces in a strategic manner. While SSA routinely gathers certain DDS employee data—such as salaries, turnover rates, and the number of new hires and experienced disability examiners—the agency primarily uses these data in connection with its annual budget process. Moreover, SSA does not regularly collect many other key indicators of DDS human capital performance, such as gaps in basic skills relative to specific competencies, despite SSA’s acknowledging the importance of investing in and retaining a skilled DDS workforce in the face of an anticipated retirement wave. When we asked SSA officials how workforce planning for the DDSs was conducted, they said that they consider DDS workforce matters to be, in general, a state government and DDS responsibility, particularly in light of the variations in state personnel systems and political concerns. One of these officials explained that SSA takes DDS workforce needs into account within SSA’s annual budget process and through the consultation that occurs between the DDSs and SSA’s regional offices. The regional office staff—and in particular, the disability program administrators assigned as SSA’s liaisons with each DDS—are responsible for providing human capital assistance to the DDSs as needed. However, as noted earlier, SSA’s annual budget process lacks key components of comprehensive, long-term workforce planning. In addition, officials we interviewed in one regional office said that they lacked the tools and the time to assist the DDSs with long-term strategic workforce planning, and SSA officials we spoke with questioned whether disability program administrators were sufficiently trained in strategic workforce planning techniques. Several regional office and former and current DDS officials we spoke with expressed a desire for greater SSA leadership in terms of long-term strategic workforce planning focusing on DDS human capital challenges. One of these officials observed that SSA is already active in a variety of DDS human capital areas—such as determining appropriate DDS staffing levels, imposing a nationwide DDS hiring freeze, and providing national human capital guidance for implementing the electronic disability initiative—and could appropriately assist with strategic workforce planning. DDS directors are dissatisfied with the adequacy of SSA-provided training.
Specifically, when we asked DDS directors whether they found SSA’s training to be adequate to prepare examiners to be proficient in the claims process, half or more of the directors responded that they were dissatisfied with the adequacy of SSA’s training in each of the following knowledge and skill areas: medical knowledge about body systems (32 DDSs), specific knowledge about the disability program (30 DDSs), assessment of vocational factors (29 DDSs), basic claim development techniques (29 DDSs), evaluation of the weight to be given to medical evidence from a treating physician (28 DDSs), updates on policies and procedures (28 DDSs), assessment of childhood disabilities (28 DDSs), assessment of an applicant’s symptoms and evaluation of the credibility of medical and other evidence (27 DDSs), and use of computers and technologies (26 DDSs). Moreover, nearly half of the directors (25 DDSs) reported that they were dissatisfied with SSA’s basic training materials for new disability examiners, and over one-third (19 DDSs) reported dissatisfaction with training on the assessment and documentation of an applicant’s ability to function. In addition, nearly all DDS directors (49) reported that they had adapted (or wanted to adapt) SSA’s training in one or more of these knowledge and skill areas to make it adequate. When we asked these DDS directors why they had adapted or wanted to adapt SSA’s training, more than half cited each of the following reasons pertaining to the quality, completeness, and timeliness of SSA’s training approach as contributing factors: Training is too conceptual and not sufficiently linked to day-to-day case processing (44 DDSs). Training provides insufficient opportunity to interact with the trainer (40 DDSs). Training provides insufficient opportunity to practice skills taught (38 DDSs). Certain types of training over-rely on the interactive video training technology (37 DDSs). Training content is incomplete (32 DDSs). Training presenters lack effective presentation skills (31 DDSs). Training lacks sufficient written materials, such as handouts and desk aids (30 DDSs). Training is delivered too early or too late (28 DDSs). In interviews, DDS officials expressed some particular concerns about video training. Some DDS officials told us that, because presenters lack sufficient hands-on case-processing experience, the training that SSA provided through its video training technology was too theoretical. In addition, other DDS officials described SSA’s video training technology as not allowing sufficient opportunity for clarification and follow-up with the presenter. Some officials explained that technical problems with the technology impeded interaction with the trainers. For example, they told us that, while staff are supposed to be able to use a keypad to call in and question the presenters during a class broadcast, it is often difficult to obtain access to the presenters. Further, some former DDS officials said that SSA applies its video training technology to many types of instructional needs for which it may not be appropriate. Yet, in our prior work, we have noted that, to be effective, the training method used needs to be tailored to the nature of the training content. We asked SSA officials we spoke with to comment on the DDS directors’ views on the quality of SSA-provided training. 
While an SSA official explained that the video training technology helps SSA to provide consistent training across the entire country quickly, she acknowledged that the training is sometimes too general and explained that SSA is attempting to improve the presentations. SSA officials also told us that they tap the expertise of the DDS community, among other agency components, to help develop and improve training materials and identify training needs. However, despite such efforts, nearly 85 percent of all DDS directors reported in our survey that they would be able to spend fewer resources adapting SSA’s training for use in their individual DDSs if SSA were to improve the quality, completeness, and timeliness of its training. Our survey data show that, in fiscal year 2002, the 52 DDSs used, in total, the equivalent of nearly 150 full-time DDS employees in preparing and delivering examiner training related to disability claims processing. Moreover, staff resources devoted to training may constitute a significant portion of total examiner staff in some DDSs. To illustrate, the director of one DDS with 83 disability examiners reported in our survey using the equivalent of about 12 full-time employees in fiscal year 2002 to prepare and deliver examiner training. SSA and DDS officials explained in interviews that, while some larger DDSs have staff who are dedicated solely to training, smaller DDSs generally use their most experienced, and hence most productive, examiners to prepare training and deliver it to their staff. Beyond training, information from our survey and interviews shows that SSA has not consistently provided other human capital assistance across the DDSs and faces difficulties negotiating human capital changes, such as increases in examiner salaries, with state governments. SSA provides many types of human capital assistance to the DDSs through its regional offices and its headquarters. For example, SSA regional office officials we interviewed explained that they have attempted to persuade state governments to exempt examiners from state hiring restrictions and to reclassify DDS examiner positions and increase examiner salaries in light of new responsibilities. In addition to the assistance provided by regional offices, SSA officials said that SSA headquarters has provided human capital assistance to the DDSs, such as sponsoring a study that identified the knowledge, skills, and abilities required for the disability examiner position, among other positions. But in our survey of the DDS directors who said they wanted particular types of human capital assistance from SSA headquarters and its regions, more than half said that they had not received assistance in each of the following areas: help with regular nationwide surveys of examiners’ issues and concerns (32 out of 36 DDSs), help in negotiating increases in examiner salaries with state government officials (24 out of 36 DDSs), guidance on roles and responsibilities for examiners with enhanced responsibilities (22 out of 42 DDSs), help in designing training and developing training materials for examiners with enhanced responsibilities and the staff who will be supporting them (22 out of 42 DDSs), help with workforce planning, including projecting turnover and developing succession plans (21 out of 31 DDSs), guidance on how to determine which examiners have sufficient skills to take on enhanced examiner responsibilities (15 out of 20 DDSs), and help in identifying gaps in examiner skills (15 out of 21 DDSs). 
In interviews, some DDS directors specifically cited surveys of examiners’ issues and concerns as an area with which they wanted assistance. They explained that such surveys could be used to identify and share DDS best practices in managing staff, including how different DDSs manage examiner caseloads and train examiners. One director noted that information on DDS best practices in human capital management is not currently available and that only SSA can “survey the landscape nationally.” Moreover, a former DDS director explained that directors view nationwide surveys as a means for communicating to SSA their human capital challenges. We also asked DDS directors about the effectiveness of various types of human capital assistance that they did receive from SSA and its regional offices, including assistance in negotiating human capital changes with state governments. We found that more than half of the DDS directors who received assistance said that such assistance was of limited effectiveness in each of the following areas: helping project trends in the nature of the disability workload (24 out of 34 DDSs); assisting in negotiating easing of state restrictions (e.g., on hiring and travel) with the state government (19 out of 24 DDSs); providing guidance on roles and responsibilities for examiners with enhanced responsibilities (18 out of 26 DDSs); helping to design training and developing training materials for examiners with enhanced responsibilities and the staff who will be supporting them (16 out of 22 DDSs); assisting in allowing DDSs to reduce the total caseload level for examiners taking on enhanced responsibilities (13 out of 24 DDSs); helping in assessing readiness for transition to an examiner role with enhanced responsibilities (12 out of 14 DDSs); helping with workforce planning, including projecting separations and developing succession plans (11 out of 13 DDSs); and providing help in negotiating increases in examiner salaries with the state government (11 out of 16 DDSs). Regional office officials and DDS directors explained in interviews that the effectiveness of SSA and its regional offices in helping the DDSs negotiate human capital changes with the states can be limited by such factors as state budget problems, political concerns, and personnel rules. For example, some officials said in interviews that state budget crises had created political pressure to limit or prevent increases in state employee salaries. Other DDS directors told us that state officials were concerned that raising examiner salaries would prompt increases in the salaries of other state employees, such as employees within the same job classification. In addition, although 19 DDS directors reported in our survey that DDS salary levels are open to negotiation with unions, some regional office officials said in interviews that obtaining salary increases for disability examiners apart from other state employees covered by union contracts could be difficult. In light of such difficulties in negotiating human capital changes with the states, one key regional office official we spoke to said that “all the regional office can do is cajole” the state governments about DDS human capital issues, since under the regulations the authority in this arena generally remains with the states. Similarly, another top regional official cautioned that, while the regional office tries to help the DDSs address the human capital challenges they face, it is difficult to do so.
This official stated that the federal-state relationship is “unwieldy,” explaining that it is easier for state governments to apply state human capital policies—such as hiring freezes—to all state personnel than to make exceptions for DDS employees, despite SSA’s full reimbursement of DDS expenses. The official said that, because the regional office must continually educate and explain to each newly elected state governor’s administration that the DDS is federally funded, the regional office is seeking ways to make such education more effective and less labor-intensive. Indeed, current and former DDS directors we spoke with said that outreach from SSA to state governors through such national groups as the National Governors Association (NGA) is needed to foster an appreciation of the importance of a highly qualified DDS workforce to improving service to disability claimants. SSA has not used the statutory authority it has to set standards for the DDS workforce. Although amendments to the Social Security Act in 1980 granted SSA the authority to issue regulations to ensure effective and uniform administration of the national disability programs, SSA has not used this authority to address wide variations in staff salaries, entry-level qualification requirements, and training for different DDSs. The Social Security Advisory Board, in 2001, called these variations potential contributors to inconsistencies in SSA’s disability decisions. Emphasizing that the disability programs are national in scope and that equal treatment for all claimants wherever they reside is essential, the Advisory Board recommended that SSA revise its regulations to establish guidelines for salaries, entry-level qualification requirements, training, and other factors affecting the ability of DDS staff to make quality and timely decisions. SSA has not acted on the Advisory Board’s recommendations, however. While SSA officials acknowledged in interviews that the agency has the authority to establish uniform minimum human capital standards, they told us that the agency has chosen not to exercise this authority because of concerns about the difficulties such actions could raise in terms of the federal-state relationship. For example, they explained that requiring uniform human capital standards might be perceived by some states as unwelcome federal interference in state operations and could raise the prospect of states withdrawing their participation in making disability determinations for the disability programs. Indeed, in a prior report, we noted that, in the late 1970s, SSA could get only 21 of the 54 DDSs to revise their operating agreements with SSA, partly because the states regarded the revisions as infringements on their traditional prerogatives. The revised agreements required DDSs to comply with guidelines issued by SSA with regard to personnel matters, among other administrative requirements. Many DDS and SSA officials we spoke with acknowledged the difficulties that would be involved with implementing uniform standards for DDS personnel. Nevertheless, the National Council of Disability Determination Directors and several DDS and SSA officials we interviewed (including some top regional office officials) expressed the view that uniform standards for DDS employees could help address the human capital challenges confronting the DDSs. 
Some referred to the vocational rehabilitation program administered by the Department of Education’s Rehabilitation Services Administration in partnership with the states as an example of a federal-state program that has set qualification standards for state employees. DDS disability examiners are essential to SSA’s meeting its strategic goal for better serving disability claimants by making the right decision in the disability process as early as possible. Yet SSA has not developed a nationwide strategic workforce plan to address the very personnel who will be crucial to meeting that goal. The immediate challenges that DDS directors face today in maintaining and improving the examiner workforce are unlikely to lessen with time and will likely have even more severe consequences as the DDSs confront increasing numbers of applicants for disability benefits. The critical task of making disability decisions is complex, requiring strong analytical skills and considerable expertise, and it will become even more demanding with the implementation of the Commissioner’s new long-term improvement strategy and the projected growth in workload. Moreover, because SSA has not set uniform minimum qualifications for examiners, some DDSs may find it difficult to justify an appropriate job classification and level of compensation needed to recruit and retain these critical employees. Without a plan to develop and maintain a skilled workforce—as well as measures to establish uniform minimum qualifications for examiners, close critical skill gaps, and improve training—SSA’s ability to provide high-quality service to disability claimants could be further weakened by gaps in critical competency areas and the loss of experienced DDS examiners due to high turnover. As vacancies are filled by new hires and trainees who need one to two years to become fully productive, the DDSs will likely have difficulty maintaining skill levels and successfully coping with expected high growth in workloads. The combination of decreased overall skill levels and increased workload could make the work environment even more stressful, further increasing turnover. This spiraling effect, if not addressed, could undermine the agency’s efforts to ensure that disability decisions are made accurately, consistently, and in a timely manner. A strategic workforce plan is even more critical to the Commissioner’s long-term strategy for improving the disability claims process and her ability to bring SSA’s approach to disability decision-making in line with the current state of science, medicine, technology, and labor market conditions. Failure to look ahead and plan to ensure that the appropriate mix of skills and capabilities are available when and where needed could obstruct SSA’s progress as it seeks to fundamentally restructure its disability programs to improve the accuracy and timeliness of decisions and focus on identifying and enhancing claimants’ productive capacities. Given such a profound transition in an environment of constrained resources, SSA must be able to plan effectively if it is to anticipate how its requirements for DDS staff will change and be convincing about the need for increased human capital investments. It will not be simple to implement a nationwide strategic workforce plan for a program that is administered in partnership with the states. Negotiating changes in state human capital policies, such as restrictive job classifications or hiring limitations, will be difficult. 
Improving the content and delivery of SSA-provided training and closing gaps in examiner skills across the DDSs will be challenging and potentially costly. Establishing uniform minimum qualifications for examiners throughout the DDSs will also be a difficult task, requiring delicate and time-consuming discussions with some state governments. However, despite the acknowledged difficulties, SSA cannot afford to forgo developing an overarching, guiding framework to use as a basis for making short- and long-term human capital decisions for the DDSs. As an agency with fiduciary responsibility for administering multibillion dollar disability programs that are nationwide in scope, SSA has an obligation to take a leadership role in planning—together with its state partners—to address both the immediate and future workforce needs in the DDSs. We recommend that SSA take the following actions:
1. Develop a nationwide strategic workforce plan that addresses present and future human capital challenges in the DDSs. This plan should enable SSA to identify the key actions needed to deal with immediate DDS problems with recruiting and hiring, training, retention, and succession planning in support of SSA’s strategic plan. It should additionally enable SSA to anticipate and plan for the future workforce that will be needed as SSA modernizes and fundamentally transforms its approach to disability decision-making. To develop and implement this comprehensive workforce plan, SSA should work in partnership with the DDSs and their parent agencies. As part of the planning process, SSA should:
a. Identify a small number of key DDS indicators of human capital performance, including recruiting and hiring measures, level of stress in the workplace, training needs, and turnover. SSA should establish standards for acceptable performance on these indicators, routinely collect and analyze the data to identify trends, and use this information to guide decisions regarding future DDS workforce needs and the strategies to meet them.
b. Provide necessary tools and technical assistance to the DDSs to enable them to conduct long-term workforce planning. SSA should ensure that SSA staff responsible for providing this assistance are well trained in the tenets of workforce planning.
c. Require each DDS to develop its own long-term workforce plan that is linked to the nationwide long-term DDS workforce plan. SSA should work in partnership with the DDSs and their parent agencies to develop these plans.
d. Establish an ongoing program of outreach from SSA’s leadership to state governors and national associations of state government officials to discuss the benefits and challenges of the federal-state relationship and encourage them to address human capital challenges identified by DDS directors, such as salary limits and hiring freezes.
e. Link performance expectations of appropriate SSA executives to their efforts in accomplishing goals and objectives of the workforce plan.
2. Issue regulations that establish uniform minimum qualifications for new disability examiners. The minimum qualifications should be based on an analysis of the position that identifies the examiner’s responsibilities and the minimum knowledge, skills, and competencies necessary to adequately perform them. The minimum qualifications for the examiner’s position should take into account any changes in the complexity of the tasks required for this position stemming from the Commissioner’s new long-term strategy.
3. Work with DDSs to close the gaps between current examiner skills and required job skills. To do so, SSA should work with the DDSs to:
a. analyze examiner training needs, using as a foundation the analysis of job responsibilities and related minimum knowledge, skills, and competencies recommended above;
b. improve training content and delivery to meet these needs, basing such efforts on analyses of training content and appropriateness of training delivery methods; and
c. develop performance measures to track effectiveness of these improvements to training.
We provided a draft of this report to SSA for comment. SSA generally agreed with the intent of the recommendations in the report but stated that the report does not fairly address or adequately discuss the many sides of the DDS human capital management issues. In particular, SSA criticized some of our study’s methods and expressed concern that we did not sufficiently acknowledge the difficulties involved in making changes to the federal-state relationship. We continue to believe, however, that the report presents a fair and balanced portrayal of the multifaceted issue of human capital management in the DDSs. Generally agreeing with the intent of our recommendations, SSA said it would consider incorporating a nationwide strategic workforce plan for the DDSs into its current strategy to improve the disability determination process. To do so will be essential, since the Government Performance and Results Act now requires agencies to report annually, as we noted in our report, on human capital strategies needed to meet their strategic goals. Regarding our recommendation on improving training, SSA said that it would continue ongoing efforts to improve examiner job skills. Results from our survey of DDS directors, however, revealed gaps in critical examiner knowledge and skills and a large proportion of DDS directors who would be able to spend fewer resources on adapting SSA’s training if SSA were to improve the quality, completeness, and timeliness of its training. Given such results, our report recommended that SSA go beyond its current efforts and base its training improvement initiatives on a systematic assessment of the examiner’s job responsibilities and related knowledge, skills, and competencies. In terms of our recommendation on outreach, SSA said that it is already conducting an outreach program to state officials and that it intends to engage in discussions with the NGA on DDS issues. While we noted efforts on the part of SSA’s regional offices to negotiate human capital changes with state governments, we maintain that SSA’s outreach program requires the sustained attention of SSA’s leadership at the national level. SSA’s expressed intent to pursue such discussions with the NGA is therefore a step in the right direction. SSA criticized some of our study methods, saying that we relied heavily on opinions of DDS directors and used rather leading and ambiguous survey questions. In terms of survey design, we surveyed DDS directors because their first-hand experiences make them some of the most knowledgeable respondents about human capital challenges experienced in their organizations. In addition, our survey was developed in accordance with GAO’s guidance on survey design and development, including extensive pretesting with current and former DDS directors to identify potential question bias and to clarify wording. We also gave SSA disability program officials, on two occasions, the opportunity to review and comment on the survey.
Following the second review, the SSA official coordinating the review said that, while some of the questions might be difficult for the DDS directors to answer, we should go ahead with the survey as it stood. The official did not refer to any bias in the survey questions. SSA also was concerned that we administered the survey at a time of budget constraint that SSA said influenced some of the directors’ responses. Our survey, however, reflects ongoing challenges facing the DDSs and was not limited to the particular circumstances of 2003. Further, our study findings did not rest solely on the opinions expressed in our survey of DDS directors. In addition to the survey, we gathered information through interviews with several other sources as well, including officials at two DDSs, three SSA regional offices, and SSA headquarters; officials of the National Council of Disability Determination Directors and the National Association of Disability Examiners; and staff of the Social Security Advisory Board. We also reviewed pertinent laws, regulations, and procedures, and obtained and analyzed human capital data from several sources. SSA was also concerned that we did not sufficiently acknowledge the attitudes of the states toward modifying federal regulations to establish uniform human capital standards and the complexities involved in such regulatory changes, such as the problems that SSA says it would face if a large state declined to make disability determinations and transferred these responsibilities to the federal government. We acknowledged in our report the difficulties SSA has encountered in convincing the DDSs to comply with SSA guidelines on personnel issues, due in part to the states’ perceptions of infringements on traditional state responsibilities. We also stressed that establishing uniform minimum qualifications for examiners will be difficult, requiring delicate and time-consuming discussions with some state governments. But we maintain that, despite the difficulties, SSA is obligated to address the human capital challenges facing the DDSs. An outreach program involving SSA’s leadership and a close working partnership among SSA, the DDSs, and their state parent agencies will be vital to help ensure the success of SSA’s efforts. In addition, SSA expressed a number of other concerns about the draft report. These concerns, as well as our comments on them, are provided in full in appendix IV. Copies of this report are being sent to the Commissioner of SSA, appropriate congressional committees, and other interested parties. The report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix V. The following describes the methods we used to survey Disability Determination Service (DDS) offices as well as the methods we used to compare some of our survey data with data from other sources. We surveyed all state DDS directors as well as the DDS directors in the District of Columbia, Puerto Rico, and the federal DDS office. We did not survey directors in Guam and the South Carolina Office for the Blind because these offices each had only one disability examiner. We mailed surveys to 53 DDS directors and received responses from all of them. However, because most of the questions in our survey do not apply to the federal DDS, we reported results for 52 DDSs. 
Our survey included questions about long-term workforce planning, recruiting and hiring, compensation, training and development, and retention of disability examiners. The survey results in this report represent the views of the DDS directors and do not necessarily represent the views of examiners or other DDS staff or the views of Social Security Administration (SSA) officials. The practical difficulties of conducting any survey introduce various types of errors related to survey responses. For example, differences in how a particular question is interpreted and differences in the sources of information available to respondents can be sources of error. In addition, respondents might not be uniformly conscientious in expressing their views or they may be influenced by concerns about how their answers might be viewed by GAO, SSA, or the public. We included steps in both the data collection and analysis stages for the purpose of minimizing such errors. For example, to address differences in how questions were interpreted, we asked two members of the Social Security Advisory Board, as well as current and past officers of the National Council of Disability Determination Directors and the National Association of Disability Examiners, to review and critique the survey questions before pretesting. SSA disability program officials also reviewed our survey on two occasions. In addition, we pretested the survey with four former DDS directors and four current DDS directors. We modified the survey questions based on the results of these pretests. Because we conducted our survey while 20 DDSs were testing the feasibility of implementing an examiner position with enhanced responsibilities, we tailored a few of the survey questions to be relevant for those DDSs testing these enhanced positions as well as for those not testing such positions. We also tailored questions for California’s survey, which had separate offices testing and not testing the enhanced examiner position. In addition, we tailored questions for the survey that went to the federal DDS. To address possible director concerns about how their answers might be viewed, we stated in the introduction to the survey that their responses would be reported in summary form only, without being individually identified, and that their responses would not be released unless requested by a member of Congress (see appendix II for a copy of our survey). When we analyzed the data from our survey, where possible, we checked survey answers involving numbers and percentages to ensure they summed correctly. When we identified a discrepancy, we contacted the relevant DDS director to resolve the discrepancy. We wanted to determine how turnover rates (overall and for new hires) for DDS examiners compared with those for selected groups of federal employees. To do this, we compared the turnover rate of DDS examiners with that of Veterans Benefits Administration (VBA) examiners, SSA employees, and all federal employees. VBA examiners were selected because they perform duties similar to DDS examiners, such as developing claims using medical and disability program knowledge. We compared DDS examiner turnover rates with SSA turnover rates because SSA fully funds the DDSs to achieve its disability program mission. The federal employee turnover rate was selected as a general baseline. We used data from the Office of Personnel Management’s (OPM) Central Personnel Data File (CPDF) to calculate turnover rates for VBA examiners, SSA employees, and all federal employees. 
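The turnover and salary comparisons detailed in the next paragraph reduce to simple ratios. As a minimal sketch, assuming hypothetical head counts and salaries rather than actual CPDF data (the Python function names below are ours, for illustration only):

def overall_turnover(separations, staff_at_start, staff_at_end):
    # Separations in a fiscal year divided by average on-board staff,
    # where average staff is the mean of the start- and end-of-year counts.
    return separations / ((staff_at_start + staff_at_end) / 2)

def new_hire_turnover(hires_in_fy, separated_by_end_of_next_fy):
    # Share of one fiscal year's hires who left before the end of the
    # following fiscal year.
    return separated_by_end_of_next_fy / hires_in_fy

def salary_ratio(dds_avg_salary, vba_avg_salary):
    # DDS average examiner salary relative to the average VBA examiner
    # salary for the same location.
    return dds_avg_salary / vba_avg_salary

# Illustrative figures only: 120 separations against an average of 975
# staff yields an overall turnover rate of about 12.3 percent.
print(round(overall_turnover(120, 1000, 950), 3))

Under these definitions, for example, a group that hired 200 examiners in one fiscal year and saw 40 of them leave by the end of the following fiscal year would have a new hire turnover rate of 20 percent.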
We counted how many permanent employees in each group left their position in each of fiscal years 2000, 2001, and 2002. For VBA examiners and SSA employees, transfers to other agencies were counted as separations. For all federal employees, only separations from federal service were counted as separations. To calculate overall turnover, we divided the number separated each year by the average of the number of staff (which we obtained by averaging the number of staff at the beginning of the fiscal year and the number of staff at the end of the fiscal year). We also calculated a new hire turnover rate. We defined a new hire separation as a separation of an employee hired in one fiscal year who left before the end of the following fiscal year (for example, hired in fiscal year 2000 and left before the end of fiscal year 2001). To determine the turnover rate for new hires, we counted all career and career conditional appointments for each of fiscal years 2000 and 2001. We then determined how many of these separated before the end of the following fiscal year and divided this by the number of new hires in the prior fiscal year. We also calculated turnover rates for DDS examiners using the same formulas. We also compared DDS examiner salaries with VBA examiner salaries. We analyzed data from OPM’s CPDF to calculate the average base salary, including locality adjustments, for VBA examiners state by state. We divided each DDS’s average examiner salary by the average VBA examiner salary for each state, the District of Columbia, and Puerto Rico. This resulted in a measure of DDS average salary relative to average VBA examiner salaries for each location. When we analyzed salaries of examiners who left DDSs to accept higher salaries in federal agencies, directors of two DDSs provided information on both the salaries of these examiners while they were employed by the DDSs, and on the federal General Schedule (GS) grade levels for their new SSA positions. To determine SSA salaries, we used the 2002 federal government GS pay scale, including locality adjustments. For cases in which the directors provided us with two possible SSA grade levels, we used the first step of the lower grade in our analysis. Three of these disability examiners also served as quality assurance reviewers, hearing officers, or trainers while employed in their DDS. Positions accepted at SSA by the departing examiners included regional office disability quality branch analyst, regional office program specialist, and field office claims representative, as well as posts in the federal DDS.
Social Security Disability: Reviews of Beneficiaries’ Disability Status Require Continued Attention to Achieve Timeliness and Cost-Effectiveness. GAO-03-662. Washington, D.C.: July 24, 2003.
High-Risk Series: An Update. GAO-03-119. Washington, D.C.: January 1, 2003.
SSA Disability: Other Programs May Provide Lessons for Improving Return-to-Work Efforts. GAO-01-153. Washington, D.C.: January 12, 2001.
Social Security Disability Insurance: Multiple Factors Affect Beneficiaries’ Ability to Return to Work. GAO/HEHS-98-39. Washington, D.C.: January 12, 1998.
SSA Disability: Return-to-Work Strategies from Other Systems May Improve Federal Programs. GAO/HEHS-96-133. Washington, D.C.: July 11, 1996.
SSA Disability: Program Redesign Necessary to Encourage Return to Work. GAO/HEHS-96-62. Washington, D.C.: April 24, 1996.
Human Capital: Opportunities to Improve Executive Agencies’ Hiring Processes. GAO-03-450. Washington, D.C.: May 30, 2003.
Results-Oriented Cultures: Creating a Clear Linkage between Individual Performance and Organizational Success. GAO-03-488. Washington, D.C.: March 14, 2003.
High-Risk Series: Strategic Human Capital Management. GAO-03-120. Washington, D.C.: January 2003.
A Model of Strategic Human Capital Management. GAO-02-373SP. Washington, D.C.: March 15, 2002.
Human Capital: A Self-Assessment Checklist for Agency Leaders. GAO/OCG-00-14G. Washington, D.C.: September 2000.
Human Capital: Key Principles for Effective Strategic Workforce Planning. GAO-04-39. Washington, D.C.: December 11, 2003.
Foreign Assistance: Strategic Workforce Planning Can Help USAID Address Current and Future Challenges. GAO-03-946. Washington, D.C.: August 22, 2003.
Tax Administration: Workforce Planning Needs Further Development for IRS’s Taxpayer Education and Communication Unit. GAO-03-711. Washington, D.C.: May 30, 2003.
Human Capital Management: FAA’s Reform Effort Requires a More Strategic Approach. GAO-03-156. Washington, D.C.: February 3, 2003.
HUD Human Capital Management: Comprehensive Strategic Workforce Planning Needed. GAO-02-839. Washington, D.C.: July 24, 2002.
NASA Management Challenges: Human Capital and Other Critical Areas Need to be Addressed. GAO-02-945T. Washington, D.C.: July 18, 2002.
Air Traffic Control: FAA Needs to Better Prepare for Impending Wave of Controller Attrition. GAO-02-591. Washington, D.C.: June 14, 2002.
Securities and Exchange Commission: Human Capital Challenges Require Management Attention. GAO-01-947. Washington, D.C.: September 17, 2001.
Human Capital: Implementing an Effective Workforce Strategy Would Help EPA to Achieve its Strategic Goals. GAO-01-812. Washington, D.C.: July 31, 2001.
Single Family Housing: Better Strategic Human Capital Management Needed at HUD’s Homeownership Centers. GAO-01-590. Washington, D.C.: July 26, 2001.
Results-Oriented Cultures: Implementation Steps to Assist Mergers and Organizational Transformations. GAO-03-669. Washington, D.C.: July 2, 2003.
Homeland Security: Management Challenges Facing Federal Leadership. GAO-03-260. Washington, D.C.: December 20, 2002.
Highlights of a GAO Forum: Mergers and Transformation: Lessons Learned for a Department of Homeland Security and Other Federal Agencies. GAO-03-293SP. Washington, D.C.: November 14, 2002.
Managing for Results: Using Strategic Human Capital Management to Drive Transformational Change. GAO-02-940T. Washington, D.C.: July 15, 2002.
FBI Reorganization: Initial Steps Encouraging but Broad Transformation Needed. GAO-02-865T. Washington, D.C.: June 21, 2002.
Human Capital: A Guide for Assessing Strategic Training and Development Efforts in the Federal Government. GAO-03-893G. Washington, D.C.: July 1, 2003.
Foreign Languages: Human Capital Approach Needed to Correct Staffing and Proficiency Shortfalls. GAO-02-375. Washington, D.C.: January 31, 2002.
Human Capital: Design, Implementation, and Evaluation of Training at Selected Agencies. GAO/T-GGD-00-131. Washington, D.C.: May 18, 2000.
1. We believe that the report presents a fair and balanced portrayal of the multifaceted issue of human capital management in the DDSs. We designed the survey to obtain DDS directors’ opinions about the extent to which, if any, a DDS had experienced certain human capital challenges and the likely factors and consequences involved. Moreover, the opinions were obtained from directors whose first-hand experiences make them some of the most knowledgeable sources of information about such issues in their organizations.
But in addition to our survey, our overall study methods relied on information and data from several other sources as well. For example, we interviewed disability examiners and their managers at two DDSs, officials responsible for DDS management assistance at three of SSA’s regional offices, SSA officials at headquarters, officials of the National Council of Disability Determination Directors and the National Association of Disability Examiners, and staff of the Social Security Advisory Board. We also reviewed pertinent laws, regulations, and procedures, and obtained and analyzed human capital data from the DDSs, SSA, and other federal agencies. Our survey was developed in accordance with GAO's guidance on survey design and development. To avoid the potential for questions to be leading, on every question in which we asked for directors’ opinions, we gave them the opportunity to say that they did not experience that particular challenge, contributing factor, or consequence. To this end, we constructed the questions so that the first response choice was “no extent” or equivalent wording. In addition, each question was specifically assessed for possible bias or problematic wording during extensive survey pretesting. We pretested the survey eight times—with four former DDS directors and four current directors. On the basis of these pretests, we modified the questions until pretesters raised no further issues. We also gave SSA disability program officials the opportunity, on two occasions, to review and comment on the survey. SSA officials first reviewed the survey prior to its pretesting. Among other suggestions, they noted that some survey questions were leading in nature and that, in addition, we should develop scaled responses to provide respondents with the opportunity to modulate their answers (e.g., from “no extent” to “very great extent”). We modified the survey on the basis of their comments, including revising or eliminating questions that they thought were leading and constructing scaled responses as suggested. After additionally incorporating comments of several pretesters, we provided SSA with the chance to review a revised version of the survey. The official coordinating SSA’s second review e-mailed us in reply, saying that, while some of the survey questions might be difficult for the DDS directors to answer, we should go ahead with the survey as revised. The official did not refer to any bias in the revised questions. Our survey questions and our findings reflect ongoing human capital challenges facing the DDSs and were not limited to the particular circumstances of fiscal year 2003. The survey questions themselves were generally not limited to the most recent year, and several explicitly asked for data for the past two or three fiscal years or for the future. While the impact of the continuing resolution and the related SSA hiring freeze that was in place throughout much of fiscal year 2003 may have affected DDS directors’ responses, DDS and SSA officials have told us that resource constraints and budget uncertainties have been ongoing challenges for a number of years. Furthermore, certain aspects of the time period in which the survey was conducted likely downplayed some of the human capital challenges facing the DDSs. For example, DDS officials said in interviews that they expected examiner turnover to increase as economic conditions improved in the future. 2.
Our report acknowledges the efforts made by SSA regional offices to persuade state governments to increase examiner salaries in light of their new responsibilities. Our report, however, does not assert that 24 DDSs were refused assistance with negotiating salary increases for examiners after they had requested it. Rather, we said that, of the DDS directors who reported wanting help from SSA with negotiating salary increases, more than half (24 DDSs) said they had not received this kind of help. (SSA interpreted wanting help and not receiving it as having requested help and been refused such assistance.) But regardless of whether directors have specifically requested this or another type of human capital assistance, they reported in their survey responses that they want active support from SSA on this and a number of other issues involving human capital management. 3. Our report acknowledges that some states have strategic workforce planning initiatives that consider their DDS employees. However, the issue relevant to our study was not whether statewide human capital management offices were generally effective, as SSA suggests, but whether there were any workforce planning efforts by SSA or the DDSs that were integral to and supportive of SSA’s mission and goals. As we noted in the report, even sophisticated statewide workforce planning efforts are not necessarily focused on ensuring that the DDSs have the workforces needed to accomplish such SSA goals as reducing claims-processing times. 4. Our report acknowledges SSA’s current efforts at outreach to state officials. For example, our report describes efforts on the part of regional office officials to persuade state governments to exempt examiners from state hiring restrictions, reclassify DDS examiner positions, and increase examiner salaries. We also emphasize that SSA and its regional offices can be limited in their ability to help the DDSs negotiate changes by such factors as state political and budget concerns, as well as state personnel rules. However, as noted in our report, we found no record to date of any discussions with the National Governors Association (NGA) or of NGA focusing on this topic. Our recommendation that SSA reach out to national associations such as the NGA is an acknowledgment that the DDSs and SSA’s regional offices cannot successfully confront these difficult human capital challenges without the sustained attention of SSA’s leadership at the national level. For clarity, we have emphasized this point in the text of our recommendation. SSA’s expressed intent to pursue discussions on a national level with NGA is a step in the right direction. 5. We recounted in our report the view of SSA officials that requiring uniform human capital standards might be perceived by some states as unwelcome federal interference and could raise the prospect of states withdrawing their participation in making disability determinations. We also noted the difficulties SSA has encountered in the past in convincing the DDSs to comply with SSA guidelines on personnel issues, due in part to the states’ perceptions of infringements on traditional prerogatives. Accordingly, we stressed in our report that establishing uniform minimum qualifications for examiners throughout the DDSs will be difficult, requiring delicate and time-consuming discussions with some state governments. 
However, establishing such qualifications will also be worthwhile, helping some DDSs justify an appropriate job classification and level of compensation needed to recruit and retain qualified disability examiners. As an agency with fiduciary responsibility for administering disability programs that are nationwide in scope, SSA has an obligation to do no less than take firm steps to address the human capital challenges facing the DDSs. We understand SSA’s concern about the difficulties it would face if states opted out of the disability program and transferred these responsibilities to the federal government. To help ensure the success of SSA’s efforts, outreach from SSA’s leadership to the state governors will be vital. Also essential will be a close working partnership among the immediate stakeholders—SSA, the DDSs, and their state parent agencies—in developing a nationwide strategic workforce plan. 6. We did not examine the accuracy and timeliness of claims processing. Nevertheless, even had these measures of performance improved, the Commissioner noted in her September 25, 2003, testimony that SSA still has “a long way to go” in its efforts to be more timely and accurate, despite positive strides in the short term. Moreover, SSA’s own published strategic plan for 2003 to 2008 warns that “the length of time it takes to process these claims is unacceptable.” Results from our survey of DDS directors demonstrate the need to address such DDS human capital issues as high turnover and recruiting and hiring difficulties in order to improve the timeliness and accuracy of claims processing. Of the directors (43) who reported experiencing difficulties in recruiting and hiring enough people who could become successful examiners, more than three-quarters said that such difficulties contributed to decreased accuracy in disability decisions or to increases in claims-processing times. Moreover, over one-half of all directors reported that turnover had increased claims-processing times. 7. Our report neither states nor assumes that higher salaries alone guarantee improved DDS performance. Rather, it states that, according to more than two-thirds of all DDS directors, noncompetitive pay was one of several factors contributing to examiner turnover. Moreover, our report emphasized the costly consequences of such turnover, noting that the estimated cost of examiner turnover in fiscal year 2002 was in the tens of millions of dollars. (Our estimates show that this would be the case, regardless of whether the calculation is based on total turnover or turnover that is above that of the federal government as a whole.) SSA itself has been attempting to persuade state governments to increase examiner salaries to reflect new job responsibilities. Although increased compensation may increase costs, the turnover that can result from not addressing human capital management concerns, such as not compensating employees appropriately, can be costly as well, as we note in the report. We agree with SSA that some attrition is desirable. But over half of all DDS directors told us in our survey that examiner turnover in their offices was too high, and we found that examiner turnover was about twice that of federal employees performing similar work. Because turnover is costly, we emphasize the importance of using data to identify current and future human capital needs. We have found in prior work that high-performing organizations analyze who is leaving, what skill gaps result, and how much turnover is desirable or acceptable. 
Organizations that fail to effectively manage their turnover risk not having the capacity to achieve their goals. A balance needs to be achieved between bringing in new employees with fresh and vibrant perspectives and retaining experienced employees whose institutional knowledge can maintain goals and help train others. 8. We cited the Department of Education’s experience to show that establishing federal qualifications requirements for state employees, as we recommended that SSA do, can and has been done. While we have not studied federal experiences with workforce planning in an intergovernmental arena, the GAO reports we provide in appendix III highlight an array of initiatives on the part of federal agencies to embrace workforce planning, including SSA’s planning models for its own employees. SSA has been willing to take the lead and develop models in workforce planning for its own employees. It should therefore build on its own internal expertise and lessons learned in this field to develop models of workforce planning in the demanding intergovernmental context as well. Lack of an existing model for the range of changes we recommend may make implementation more challenging, but it is not a convincing argument for inaction. 9. We support SSA’s leadership in its efforts to improve the disability determination process and to help people with disabilities remain in or return to the workforce. SSA said that it generally agreed with the intent of our recommendations and would consider incorporating a nationwide strategic workforce plan for the DDSs into its current strategy to improve disability determinations. To do so will be essential, since the Government Performance and Results Act now requires agencies to report annually, as we noted in our report, on human capital strategies needed to meet their strategic goals. While we did not provide an exhaustive treatment of states’ reactions to proposals for increased federal control, our report did note past opposition of some states to federal guidelines on personnel matters. In addition, we have added further detail in the report about the regulatory development process. We acknowledge the complexities involved in pursuing regulatory change. But despite these difficulties, we maintain that SSA has an obligation to address DDS workforce needs. 10. SSA said that it would continue ongoing efforts to improve examiner job skills. Results from our survey of DDS directors, however, revealed gaps in critical examiner knowledge and skills. Moreover, a large proportion of directors said they would be able to spend fewer resources on adapting SSA’s training if SSA were to improve the quality, completeness, and timeliness of this training. Given such results, our report recommended that SSA go beyond its current efforts and base its training improvement initiatives on a systematic assessment of the examiner’s job responsibilities and related knowledge, skills, and competencies. In addition to those named above, the following individuals made significant contributions to this report: Barbara Bordelon, Marissa Jones, Suit Chan, and Beverly Crawford, Education, Workforce, and Income Security Issues; Ellen Rubin, Strategic Issues; Gregory Wilmoth, Applied Research and Methods; and B. Behn Miller, General Counsel. | SSA oversees and fully funds primarily state-operated DDSs that determine whether applicants are eligible for disability benefits. The disability examiners employed by the DDSs play a key role in determining benefit eligibility. 
This report examines (1) the challenges the DDSs face today in retaining and recruiting examiners and enhancing their expertise; (2) the extent to which the DDSs engage in workforce planning and encounter obstacles in doing so; and (3) the extent to which SSA is addressing present and future human capital challenges in the DDSs. GAO found--through its survey of 52 of the 54 Disability Determination Service (DDS) directors and interviews with SSA officials and DDS staff--that the DDSs face three key challenges in retaining examiners and enhancing their expertise. High turnover: Over half of all DDS directors surveyed said that examiner turnover was too high in their offices. We found that examiner turnover was about twice that of federal employees performing similar work. Nearly two-thirds of all directors reported that turnover has increased SSA's hiring and training costs and claims-processing times. And two-thirds of all directors cited stressful workloads and noncompetitive salaries as major factors that contributed to turnover. Recruiting and hiring difficulties: More than three-quarters of all DDS directors said they had difficulties over a three-year period in recruiting and hiring examiners. Of these, more than three-quarters said these difficulties contributed to increases in claims-processing times, examiner caseload levels, backlogs, and turnover. More than half of all directors reported that state-imposed compensation limits contributed to hiring difficulties. Gaps in key skills: Nearly one-half of all DDS directors said that at least a quarter of their examiners needed additional training in areas critical to disability decision-making. Over half of all directors cited factors related to high workload levels as obstacles to examiners' receiving additional training. Despite the workforce challenges facing them, a majority of DDSs do not conduct long-term, comprehensive workforce planning. In prior reports, GAO found that such planning should include key strategies for recruiting, retaining, training, and otherwise developing a workforce capable of meeting long-term agency goals. However, of the DDSs that engage in longer-term workforce planning, a majority have plans that lack such key workforce planning strategies. Directors cited numerous obstacles to long-term workforce planning, such as lengthy state processes to approve DDS human capital changes. SSA's workforce efforts have not sufficiently addressed current and future DDS human capital challenges. Federal law requires agencies to include in their annual performance plans a description of the human capital strategies needed to meet their strategic goals. However, GAO's review of key SSA planning documents shows they do not include a strategic human capital plan that addresses current and future DDS human capital needs. Thus, SSA does not link its strategic objectives to a workforce plan that covers the very people who are essential to accomplishing those objectives. GAO also found that SSA has not provided human capital assistance in a consistent manner across the DDSs and that SSA's effectiveness in helping the DDSs negotiate human capital changes with the states can be limited by such factors as state budget problems and personnel rules. Finally, SSA has not used its authority to establish uniform human capital standards, such as minimum qualifications for examiners, which would address, on a nationwide basis, some of the DDS challenges. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Conditions today present some of the most difficult recruiting and retention challenges DOD has experienced in recent history. Since the September 11, 2001, terrorist attacks on the United States, DOD has launched three major military operations requiring significant numbers of military servicemembers: Operation Noble Eagle, which covers military operations related to homeland security; Operation Enduring Freedom, which includes ongoing military operations in Afghanistan and certain other countries; and Operation Iraqi Freedom, which includes ongoing military operations in Iraq. These military operations have greatly increased personnel tempo, especially in the Army, Marine Corps, Army National Guard, and Army Reserve, which have provided the bulk of the military servicemembers for operations in Iraq. Additionally, when Hurricanes Katrina and Rita hit the Gulf Coast in August and September 2005, respectively, resulting in possibly the largest natural disaster relief and recovery operations in U.S. history, DOD was called upon to provide extensive search and rescue, evacuation, and medical support. DOD relies on four active components—the Army, Navy, Marine Corps, and Air Force—and four reserve and two National Guard components—the Army National Guard, Army Reserve, Navy Reserve, Marine Corps Reserve, Air National Guard, and Air Force Reserve—to meet its mission. Each year, Congress authorizes an annual year-end authorized personnel level for each component. In fiscal year 2005, the authorized personnel levels for the four active, four reserve, and two National Guard components totaled approximately 2.3 million military servicemembers. In order to meet legislatively mandated authorized personnel levels, DOD must balance accessions and losses. Meeting this authorization requirement is a function of recruiting and retention. The term recruiting refers to the military components’ ability to bring new members into the military to carry out mission-essential tasks in the near term and to begin creating a sufficient pool of entry-level servicemembers to develop into future midlevel and upper-level military leaders. To accomplish this task, active, reserve, and National Guard components set goals for accessions, or new recruits, who will enter basic training each year, and strive to meet their annual goals through their recruiting programs; advertising; and, where needed, financial incentives. The term retention refers to the military services’ ability to retain servicemembers with the necessary skills and experience. Again, the components rely on financial incentives, where needed, to meet their retention goals. The components further delineate their force structure through occupational specialties. These occupational specialties, totaling about 1,500 across DOD in the active, reserve, and National Guard components, represent the jobs that are necessary for the components to meet their specific missions. These occupational specialties cover a variety of jobs, such as infantrymen, dental technicians, personnel clerks, journalists, and air traffic controllers. While DOD’s active, reserve, and National Guard components met most of their aggregate recruiting and retention goals in the past 6 fiscal years, they faced greater recruiting difficulties in fiscal year 2005. With respect to recruiting, most components met overall goals from fiscal years 2000 through 2004, but 5 of the 10 components experienced recruiting difficulties in fiscal year 2005. 
Most components also met their aggregate retention goals in the past 6 fiscal years, but the Navy experienced retention shortages in fiscal year 2005. We note, however, that the components have taken certain actions that suggest they may be challenged to meet future recruiting and retention goals. The active, reserve, and National Guard components generally met their aggregate recruiting goals for enlisted servicemembers for the past 6 fiscal years, but 5 of the 10 components experienced recruiting difficulties in 2005. DOD’s recruiting data presented in table 1 show that with the exception of the Army, all of the active duty components met their overall recruiting goals. By the end of fiscal year 2005, the Army achieved about 92 percent of its recruiting goal. Table 2 shows that, while DOD’s reserve and National Guard components generally met or exceeded their enlisted aggregate recruiting goals for fiscal years 2000 through 2004, four components missed their recruiting goals in 1 or 2 years during that period by 6 to 20 percent. For example, the Army National Guard achieved 82 percent and 87 percent of its recruiting objectives in fiscal years 2003 and 2004, respectively, and the Air National Guard achieved 87 percent and 94 percent of its recruiting objectives in fiscal years 2001 and 2004, respectively. Additionally, in fiscal year 2000, both the Navy Reserve and Air Force Reserve missed their recruiting goals by 19 percent and 20 percent, respectively. Reserve and National Guard recruiting data through the end of fiscal year 2005 show that the reserve and National Guard components experienced difficulties in meeting their 2005 aggregate recruiting goals. Only the Marine Corps Reserve, achieving 102 percent, and the Air Force Reserve, achieving 113 percent, surpassed their goals. The Army Reserve achieved 84 percent of its overall recruiting goal; the Army National Guard, 80 percent; the Air National Guard, 86 percent; and the Navy Reserve, 88 percent. Appendix II contains more detailed information on components’ recruiting goals and achievements. As with recruiting, the active, reserve, and National Guard components generally met or surpassed their enlisted aggregate retention goals during the past 6 fiscal years, but in fiscal year 2005, 1 of the 10 components experienced retention shortages. DOD’s active components track retention by years of service and first, second, or subsequent enlistments. Table 3 shows that from fiscal years 2000 through 2004, the Army met all of its retention goals and that the 3 remaining active components missed some of their retention goals in 1 or 2 years during that period by at most 8 percent. For example, the Air Force did not meet its goal for second term reenlistments by up to 8 percent during this period. In fact, the Air Force did not meet this goal in 4 of the past 6 fiscal years and missed its goal for career third-term reenlistments in 2000, 2001, and 2002. The Navy did not meet its goal for reenlistments among enlisted servicemembers who have served from 10 to 14 years in 3 of the past 5 fiscal years, and the Marine Corps did not meet its goal for subsequent reenlistments in fiscal year 2003 only. In fiscal year 2005, retention data again show most active components generally met their enlisted aggregate retention goals, with the exception of the Navy. 
The Navy missed its end-of-year retention goals for servicemembers with less than 6 years of service by about 2 percent and for servicemembers with 6 to 10 years of service by about 8 percent. Appendix III contains more detailed information on active components’ retention goals and achievements. DOD tracks reserve and National Guard components’ retention through attrition rates, the ratio of the number of people who leave a component in a given year to that component’s total authorized personnel level. These components establish attrition goals, which represent the maximum percentage of the force that they can lose each year and still meet authorized personnel levels. Annual attrition ceilings were originally established in fiscal year 2000 and have remained unchanged for each of the components. Table 4 shows that the reserve and National Guard components were generally at or below their attrition thresholds, thereby meeting their retention goals. Only three components (the Army National Guard, Army Reserve, and Air National Guard) exceeded their attrition thresholds during fiscal years 2000 through 2003, and they did so by about 1 percentage point or less. Several factors suggest the components are likely to continue experiencing difficulties in meeting their aggregate recruiting and retention goals in the future. DOD previously reported that over half of today’s youth between the ages of 16 and 21 are not qualified to serve in the military because they fail to meet the military’s entry standards. In addition, all active components are experiencing reduced numbers of applicants in their delayed entry programs. Also, each of the components initiated a stop loss program at various times in the past 6 fiscal years that prevented servicemembers from leaving active duty after they completed their obligations, although the Army, Army National Guard, and Army Reserve are the only components still employing stop loss. Furthermore, the Army Reserve has recalled members of the Individual Ready Reserve. All of the active components are experiencing shrinking numbers of new recruits in their delayed entry programs. According to a DOD official from the Office of the Secretary of Defense, the delayed entry program is viewed as a depository for future soldiers, sailors, and airmen. When prospects sign their contracts, they enter into the pool of applicants awaiting the date that they report to basic training. While in the delayed entry program, the applicants are the responsibility of the recruiter and, as such, are taught basic military protocols and procedures, such as saluting and rank recognition. Applicants generally spend between 10 days and a year in the delayed entry program. Before being released from the delayed entry program, applicants are medically qualified, take an initial oath of enlistment, and perform other personal business prior to reporting to basic training. According to the same DOD official, a healthy delayed entry program is imperative to a successful recruiting year. If the active components come in at or near their delayed entry program goals for the year, they can be relatively sure they will achieve their annual recruiting goals. If they fall short of their delayed entry program goals, the components try to make up the shortfalls by sending individuals to basic training as early as the same month in which they sign enlistment papers.
Typically, the active components prefer to enter a fiscal year with 35 percent to 65 percent of their coming year’s accession goals, depending on the component, already filled by recruits in the delayed entry program. Figure 1 shows the percentage of new recruits in the delayed entry programs compared to the components’ goals for fiscal years 2003 through 2006. For example, the Army’s goal is to enter each fiscal year with 35 percent of its upcoming annual accession goal already in the delayed entry program. The Army exceeded this goal when it entered fiscal year 2004 with 46 percent of its annual accession goal already in the program. However, it entered fiscal year 2005 with only 25 percent of its accession goal in the program and, as of August 2005, is projected to enter fiscal year 2006 with 4 percent of its accession goal in the program. The Navy, with an annual goal of 65 percent for its delayed entry program, is projected to enter fiscal year 2006 with almost 36 percent of its accession goal in the program. The Marine Corps, with an annual delayed entry program goal of 43 percent, entered fiscal year 2004 well above this goal, with 71 percent of its accession goal in the program, 54 percent in 2005, and a projected 26 percent in fiscal year 2006. Similarly, the Air Force, with a delayed entry program goal of 43 percent, is projected to enter fiscal year 2006 with almost 22 percent of its accession goal in the program. The fact that all the active duty components entered fiscal year 2006 with at least 40 percent fewer recruits in their delayed entry programs than desired suggests that the active duty components will likely face recruiting difficulties in fiscal year 2006. Although some components have employed stop loss over the past several years, the Army, Army Reserve, and Army National Guard are the only components currently using it. The stop loss program, according to several Army officials, is used primarily for units that are deployed and is intended to maintain unit cohesion. A DOD official told us that in June 2005, the Army stop loss program affected over 15,000 soldiers, or less than 1 percent of DOD’s total military force—9,044 active component soldiers, 3,762 reserve soldiers, and 2,480 National Guard soldiers. The active Army and Army Reserve stop loss program takes effect 90 days prior to unit deployment or with official deployment order notification, and remains in effect through the date of redeployment to permanent duty stations, plus a maximum of 90 days. Army headquarters officials said that several Army initiatives, such as restructuring and rebalancing the active and reserve component mix, will, over time, eliminate the need for stop loss. Congress has expressed concern that the use of stop loss to meet overall personnel requirements may have a negative impact on recruiting and retention and the public’s perception of the military. Another Army official told us that the attention this program has received in the media may have contributed to negative public perceptions of military service. Additionally, an Army Reserve official we spoke with stated that when active duty units are affected by stop loss, servicemembers who have not completed their military obligations in those units may be delayed in transitioning from active duty into the reserve or National Guard components, potentially creating more difficulties for these components to meet their recruiting and retention goals.
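The delayed entry program percentages above can be combined into a single shortfall measure. In the sketch below, the entry percentages and goals come from the text; the shortfall computation itself is our illustration.

```python
# Delayed entry program (DEP) posture relative to each component's entry goal.
# Entry percentages and goals are from the text; the shortfall measure is ours.
def dep_shortfall(actual_pct, goal_pct):
    """Percent fewer recruits in the DEP than desired entering the fiscal year."""
    return 100 * (1 - actual_pct / goal_pct)

for name, actual, goal in [("Army", 4, 35), ("Navy", 36, 65),
                           ("Marine Corps", 26, 43), ("Air Force", 22, 43)]:
    print(f"{name}: {dep_shortfall(actual, goal):.0f}% below the DEP goal for FY 2006")
# Prints roughly 89%, 45%, 40%, and 49%, consistent with the statement that all
# active components entered FY 2006 at least 40 percent short of their DEP goals.
```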
The Army Reserve has also recalled members of the Individual Ready Reserve—which, according to an Army Reserve official, is used to fill personnel shortfalls in active, reserve, and National Guard units—to address recruiting and retention difficulties. All soldiers, whether in an active, reserve, or National Guard component, agree to at least an 8-year service commitment in their initial enlistment contracts. This obligation exists regardless of how much time is to be served in the active, reserve, or National Guard component, or some combination of the active and reserve or National Guard components under the enlistment contract. If the soldier is separated from the active duty Army, reserve, or National Guard component before the 8-year commitment has been completed, the soldier may elect to remain in the active duty Army, affiliate with the reserves, or be assigned to the Individual Ready Reserve. In the latter case, servicemembers are subject to recall, if needed. Almost 7,000 soldiers, including about 600 servicemembers who volunteered to serve and about 6,400 others under contractual obligation, have been deployed from the Individual Ready Reserve since 2002. As of August 2005, over 4,000 of these soldiers were still on active duty, serving a maximum of 2 years. All components exceeded authorized personnel levels for some occupational specialties and did not meet others. Specifically, we found certain occupational specialties have been consistently over- or underfilled when compared to their actual personnel authorizations. We believe these consistently overfilled and underfilled occupational specialties raise critical questions. First, what is the cost to the taxpayer to retain thousands more personnel than necessary in consistently overfilled occupational specialties? Second, how can DOD components continue to effectively execute their mission with consistently underfilled occupational specialties? However, because DOD lacks information from the components on all over- and underfilled occupational specialties, including reasons why these occupational specialties are over- and underfilled, it cannot address these questions and develop a plan to assist the components in addressing the root causes of their recruiting and retention challenges. Of nearly 1,500 enlisted occupational specialties across DOD, about 19 percent were consistently overfilled and about 41 percent were consistently underfilled from fiscal years 2000 through 2005, as shown in figure 2. In fiscal year 2005 alone, the occupational specialties that consistently exceeded authorization contained almost 31,000 more servicemembers than authorized. At the same time, DOD was not able to fill over 112,000 positions in consistently underfilled occupational specialties. The percentage of consistently over- and underfilled occupational specialties varies across the components. For example, from fiscal years 2000 through 2005, the percentage of consistently overfilled occupational specialties ranged from 1 percent in the active Navy to 44 percent in the Navy Reserve. Similarly, from fiscal years 2000 through 2005, the percentage of consistently underfilled occupational specialties ranged from 16 percent in the active Army to 65 percent in the active Navy. Table 5 provides information on consistently over- and underfilled occupational specialties for each component.
Appendix IV provides more detailed information on the occupational specialties that were consistently over- and underfilled for each component. Our analysis further shows that the number of servicemembers exceeding the authorizations assigned to occupational specialties ranged from just 1 servicemember to almost 6,000. For example, the Army Reserve Technical Engineer occupational specialty was overfilled by 1 servicemember in fiscal year 2000, and the active Navy Seaman occupational specialty was overfilled by almost 6,000 servicemembers in that same year. Similarly, we found that shortages in the number of servicemembers assigned to occupational specialties ranged from just 1 servicemember to over 8,200. For example, the Marine Corps Reserve Parachute Rigger occupational specialty was underfilled by 1 servicemember in fiscal year 2000, and the active Navy Hospital Corpsman occupational specialty was underfilled by over 8,200 servicemembers in that same year. Table 6 presents several reasons provided by component officials to explain why certain occupational specialties have been consistently overfilled. For example, an Air Force Reserve official told us that this component recruited more Tactical Aircraft Avionics Systems personnel than authorized because it is a high-demand, technical occupational specialty that is critical to one of the Air Force Reserve’s missions. As a result, this occupational specialty has been consistently overfilled by about 160 to 240 individuals each year for the past 6 years. Furthermore, we found that the Army’s Cavalry Scout occupational specialty was overfilled by over 200 to almost 1,000 individuals in the past 5 years, and an Army official stated that this occupational specialty was anticipated to increase its personnel authorization levels. According to Army projections, the current strength is still 1,700 short of the fiscal year 2007 target. Moreover, several component officials told us that some of their occupational specialties have consistently been overfilled because their components needed to meet legislatively mandated aggregate personnel levels, and to do so, they assigned personnel to occupational specialties that did not necessarily need additional personnel. Table 6 also presents several reasons provided by component officials to explain why certain occupational specialties have been consistently underfilled. For example, component officials told us that extensive training requirements have led the Army’s Special Forces Medical Sergeant, the Army National Guard’s Power-Generation Equipment Repairer, and the Marine Corps’s Counterintelligence Specialist occupational specialties to be consistently underfilled for the last 5 or 6 years. Furthermore, an official in the Army National Guard stated that the Motor Transport Operator occupational specialty was consistently underfilled by at least 1,800 to about 4,800 individuals in the past 6 years because the entire nation is short of truck drivers, which poses a recruiting challenge. We believe that consistently over- and underfilled occupational specialties are a systemic problem for DOD that raises questions about the validity of occupational specialty authorizations. The fact that DOD has consistently experienced almost 280 overfilled occupational specialties raises particular questions about affordability. We determined that it cost the federal government about $103,000 annually, on average, to compensate each enlisted active duty servicemember in fiscal year 2004.
Accordingly, compensating the almost 31,000 servicemembers who served in occupational specialties that exceeded authorized personnel levels for fiscal year 2005 was costly. Similarly, the consistently underfilled occupational specialties raise questions about the components’ ability to continue to achieve their mission. The fact that over 112,000 positions in consistently underfilled occupational specialties were vacant in fiscal year 2005 raises concerns about whether the authorized personnel levels for these occupational specialties are based on valid requirements. While the active components have started reporting information to DOD on certain occupational specialties, the department currently lacks information on all occupational specialties, which prevents a complete understanding of the components’ recruiting and retention challenges. The Office of the Under Secretary of Defense (OUSD) for Personnel and Readiness directed the active components to report their critical occupational specialties for recruiting, beginning in 2004, in OUSD’s personnel and readiness report—an update provided to the Secretary of Defense each quarter. Table 7 provides the OUSD-defined criteria for occupational specialties that are critical for recruiting. An occupational specialty must meet at least one criterion to be considered critical; the fact that an occupational specialty is underfilled is only one of the criteria. Accordingly, the active components identify and report to OUSD about 10 percent of their occupational specialties that they deem critical for recruiting. For example, in OUSD’s third quarter fiscal year 2005 personnel and readiness report to the Secretary of Defense, the active components reported accessions information on 67 occupational specialties. In addition, the active components reported why each of these occupational specialties was deemed critical, its accession goal, and year-to-date accessions achieved. Beginning in 2005, OUSD further directed the active components to report critical occupational specialties for retention as well. Table 7 also provides the OUSD-defined criteria for occupational specialties that are critical for retention. In fiscal year 2005, each of the active components reported 10 occupational specialties, for a total of 40 occupational specialties, which they deemed critical for retention. Again, for each occupational specialty reported, the active components provided the reason it was deemed critical; the retention goal; the number of personnel retained to date; and the number of servicemembers authorized and assigned, or the fill rate. These 40 occupational specialties, however, represent only 6 percent of the 625 total active duty occupational specialties. Collectively, the critical occupational specialties (such as Army Infantrymen and Marine Corps Counterintelligence Specialists) that the active components report on represent at most 16 percent of the 625 active duty occupational specialties. Therefore, OUSD is not receiving information on at least 84 percent of active duty occupational specialties. Furthermore, the reserve and National Guard components are not required to report to OUSD any information on their combined 859 occupational specialties. This means that OUSD receives fill rate information on less than 3 percent of all occupational specialties.
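The reporting coverage implied by these counts can be worked out directly. In the sketch below, the total of about 100 critical specialties is inferred from the statement that reported specialties represent at most 16 percent of the 625 active duty specialties; the other counts come from the text.

```python
# Reporting coverage implied by the counts above. The critical-specialty total of
# about 100 is inferred from "at most 16 percent" of 625; other counts are from the text.
active = 625
reserve_guard = 859
all_specialties = active + reserve_guard            # 1,484 across the 10 components

retention_critical = 40
critical_total = 100                                # inferred: about 0.16 * 625

print(round(100 * retention_critical / active))             # -> 6 percent of active specialties
print(round(100 * critical_total / active))                 # -> 16 percent reported as critical
print(round(100 * (active - critical_total) / active))      # -> 84 percent unreported
print(round(100 * retention_critical / all_specialties, 1)) # -> 2.7 percent with fill-rate data
```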
In July 2005, DOD issued Directive 1304.20 that requires the components to meet aggregate and occupational-specialty-specific authorized personnel levels. However, this directive does not include any reporting requirements and therefore does not specifically require the components to report on over- and underfilled occupational specialties. Until all components are required to provide complete information on all over- and underfilled occupational specialties, including reasons why these occupational specialties are over- and underfilled, and DOD determines if the new directive is having its desired effect, the department cannot develop an effective plan to assist the components in addressing the root causes of recruiting and retention challenges. Moreover, DOD is not in a position to assess the economic impact on the taxpayer of thousands of consistently overfilled occupational specialties or determine whether it can continue to effectively execute the mission requirements with consistently underfilled occupational specialties. DOD’s components spend hundreds of millions of dollars each year on programs to enhance their recruiting and retention efforts; however, the department lacks the information needed to determine whether financial incentives are targeted most effectively. Budgets for these programs—recruiting, advertising, and financial incentives—have fluctuated over the last 6 fiscal years. Specifically, in fiscal year 2005, five components increased their recruiter forces and three components revised their advertising programs. Additionally, various components increased enlistment and reenlistment bonuses to enhance recruiting and retention for specific occupational specialties. However, because OUSD does not require the components to fully justify the financial incentives paid to servicemembers in all occupational specialties, DOD lacks the information needed to ensure that funding spent on recruiting and retention is appropriately and effectively targeted to occupational specialties for which the components have the greatest need or to determine if other types of corrective action are needed. In fact, our analysis shows that components offered some of these incentives to servicemembers in consistently overfilled occupational specialties. Budget data for fiscal years 2000 through 2006 show that DOD’s recruiting programs, advertising, and financial incentive budgets fluctuated from $1.7 billion to $2.1 billion during those years, and that DOD spent a total of $9.9 billion on these programs over the first 5 years. Fiscal years 2005 and 2006 budget estimates for these programs total $1.7 billion and $1.8 billion, respectively. DOD’s overall annual expenditures for recruiting programs fluctuated from approximately $631.6 million to $800.7 million from fiscal years 2000 to 2004, and budgeted estimates of recruiting expenditures for fiscal year 2005 were $639.5 million, and for fiscal year 2006, $726.2 million. These recruiting expenditures cover essential items for recruiting commands and stations throughout the United States, including meals, lodging, and travel of applicants; recruiter expenses; vehicle operation and maintenance; office spaces; and other incidental expenses. Figure 3 shows actual recruiting expenditures for fiscal years 2000 through 2004 and budgeted recruiting expenditures for fiscal years 2005 and 2006.
DOD’s annual advertising expenditures fluctuated from approximately $506.6 million to $663.0 million from fiscal years 2000 to 2004, and budgeted advertising expenditures for fiscal year 2005 were $570.7 million, and for fiscal year 2006, $543.9 million—a decrease of almost $27 million. Figure 4 shows actual advertising expenditures for fiscal years 2000 through 2004 and budgeted advertising expenditures for fiscal years 2005 and 2006. DOD’s annual expenditures for enlistment bonuses—targeted to new recruits—fluctuated from approximately $162.1 million to $301.2 million from fiscal years 2000 to 2004, and budgeted enlistment bonus expenditures for fiscal year 2005 were $149.3 million, and for fiscal year 2006, $175.0 million. Figure 5 shows actual enlistment bonus expenditures for fiscal years 2000 through 2004 and budgeted enlistment bonus expenditures for fiscal years 2005 and 2006. DOD’s total expenditures for selective reenlistment bonuses—targeted to servicemembers already in the military who reenlist for an additional number of years—fluctuated from approximately $420.5 million to $551.6 million from fiscal years 2000 to 2004, and budgeted reenlistment bonus expenditures for fiscal years 2005 and 2006 were $346.1 million and $387.7 million, respectively. Our analysis of actual expenditures for reenlistment bonuses in fiscal year 2005 for the active Navy, Marine Corps, and Air Force shows their expenditures to be within $13.5 million, $4.8 million, and $11.0 million of their budgeted amounts, respectively. However, it is significant to note that the active Army component spent approximately $426.0 million on reenlistment bonuses in fiscal year 2005, or almost eight times its budgeted amount of $54.3 million, to meet its retention goals. Figure 6 shows actual selective reenlistment bonus expenditures for fiscal years 2000 through 2004 and budgeted selective reenlistment bonus expenditures for fiscal years 2005 and 2006. While the actual number of recruiters fluctuated for all components from fiscal year 2000 through fiscal year 2005, five components—the Army, Army Reserve, Air Force, Air National Guard, and Army National Guard—increased their numbers of recruiters between fiscal years 2004 and 2005. Table 8 shows that the Army added almost 1,300 recruiters for a total of 6,262 recruiters, and the Army Reserve added over 450 recruiters for a total of 1,296 recruiters from September 2004 through June 2005. The Air Force and Air National Guard also increased their number of recruiters, from 1,480 to 1,487, and from 373 to 376, respectively. Of all the components, the Army National Guard has shown the greatest increase in its recruiter force—increasing the total number of recruiters from 2,702 in fiscal year 2004 to 4,448 in fiscal year 2005. In fiscal year 2005, the Army, Army Reserve, and Army National Guard made specific adjustments to their advertising programs. Their recent advertising efforts have shifted focus to the “influencers,” those individuals who play a pivotal role in a potential recruit’s decision to join the military, including parents, teachers, coaches, other school officials, and extended family members. For example, the Army focused efforts on using its recruiting Web site as a vehicle to provide video testimonials of soldiers explaining, in their own words, what it means to be a soldier and why others should enlist.
The Army and Army Reserve increased support to local recruiters through more public affairs efforts and by encouraging command support of their Special Recruiter Assistance Program. This program offers soldiers who have served in Operation Iraqi Freedom or Operation Enduring Freedom the opportunity to return to their hometowns and assist the local recruiters in gaining high school graduate leads and enlistments. Army officials stated that this program not only increases the number of contacts it makes but also provides interested individuals with a different perspective on operations overseas, which can assist in counteracting some of the negative information potential recruits receive from the media or influencers. The Army National Guard refocused its advertising by standardizing the appearance of its storefront recruiting offices to increase recognition and by opening career centers in locations that provided it greater market exposure and access to the target populations. Additionally, the Army National Guard initiated a new “American Soldier” campaign that refines the message it has used since September 11, 2001, and reflects the new realities of a prolonged recruiting and retention environment. DOD components also made adjustments to their financial incentives—the most costly of the three tools—to improve their ability to recruit and retain servicemembers. Over the last fiscal year, DOD made changes to existing financial incentives and introduced new ones. For example, DOD expanded the pool of servicemembers who are eligible to receive a selective reenlistment bonus. Selective reenlistment bonuses are designed to provide a financial incentive for an adequate number of qualified mid-career enlisted members to reenlist in designated “critical” occupational specialties where retention levels are insufficient to sustain current or projected levels necessary for a component to accomplish its mission. The statutory authority for this bonus was amended in the Fiscal Year 2004 Authorization Act to allow the Secretary of Defense to waive the critical skill requirement for members who reenlist or extend an enlistment while serving in Afghanistan, Iraq, or Kuwait in support of Operations Enduring Freedom and Iraqi Freedom. In addition, in February 2005, DOD announced a new retention bonus for Special Operations Forces servicemembers (Army Special Forces, Navy SEALs, Air Force pararescue, plus a few other occupational specialties) who decide to remain in the military beyond 19 years of service. The largest bonus, $150,000, may be provided to eligible servicemembers who sign up for an additional 6 years of service. Eligible servicemembers who sign up for shorter extensions may qualify for smaller bonuses; servicemembers who extend for 1 additional year, for example, receive $8,000. Individual components also implemented changes over the last fiscal year. Specifically, to address recruiting, the active Army component implemented a minimum $5,000 bonus for all qualified recruits—generally based on graduation from high school and their scores on the Armed Forces Qualification Test—who enlist for 3 or more years in any military occupational specialty. Additionally, qualified recruits with bachelor’s degrees who enlist for 2 or more years in any occupational specialty may now receive bonuses up to $8,000; previously there were no bonuses for recruits with those educational qualifications.
The Army also increased its maximum enlistment bonus amount, from $10,000 up to $14,000, for applicants who enlist for 3 or more years into certain occupational specialties, such as Cannon Crewmember, Cavalry Scout, and Crypto-Linguist Analyst. To address retention, the active Army, Navy, and Air Force components implemented the Critical Skills Retention Bonus program and the Assignment Incentive Pay program. The Critical Skills Retention Bonus program allows the components to target reenlistment bonuses at certain occupational specialties that have been identified as critical. The bonus is adjusted to meet current operational needs. For example, in fiscal year 2004, the Army offered the Critical Skills Retention Bonus to soldiers who were serving in special operations occupational specialties. In fiscal year 2005, the Army granted eligibility to 11 additional occupational specialties. These occupational specialties included recruiters, unmanned aerial vehicle operators, psychological operations specialists, and explosive ordnance disposal specialists. The Assignment Incentive Pay program, which provides up to $1,500 a month for enlisted servicemembers, was approved by the Under Secretary of Defense for implementation by the Navy, Air Force, and Army in early 2005. Assignment Incentive Pay is used to encourage servicemembers to volunteer for difficult-to-fill occupational specialties or assignments in less desirable locations. For example, for personnel in special operations occupational specialties to qualify for this pay, they must have more than 25 years of service, be designated by the Special Operations Command combatant commander as “operators,” and remain on active duty for an additional minimum of 12 months. The Army also implemented a Life Cycle Unit Bonus program designed to encourage servicemembers to commit to hard-to-fill occupational specialties in targeted units in fiscal year 2005. Army officials stated that this may help them address shortages of soldiers assigned to units at certain locations. When the program began, soldiers received up to $15,000 if they reenlisted and agreed to serve in certain units stationed at Fort Campbell, Kentucky; Fort Bliss and Fort Hood, Texas; and Fort Lewis, Washington. During fiscal year 2005, the Army Reserve and Army National Guard also increased some of their incentives to address recruiting and retention efforts. In fiscal year 2005, the Army allowed the reserve and National Guard components to increase their prior service enlistment bonus from $8,000 up to $15,000. Additionally, the bonus amount for a new recruit with no prior military experience increased from $8,000 to $10,000. Although we found that the components regularly offered financial incentives to servicemembers in consistently underfilled occupational specialties, we also found that each of the active duty components provided enlistment bonuses, selective reenlistment bonuses, or both to servicemembers in consistently overfilled occupational specialties. DOD requires the components to provide only general justifications for their financial incentives in their budget documents and does not require the components to specifically provide justifications for these incentives for noncritical occupational specialties, which make up at least 84 percent of all occupational specialties.
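Our comparison of bonus data with overfill findings, reported in the next paragraphs and in tables 9 and 10, amounts to intersecting two sets of occupational specialties. A minimal sketch, using placeholder specialty codes rather than actual DOD identifiers:

```python
# Hypothetical illustration of the bonus/overfill comparison; the specialty codes
# below are placeholders, not actual DOD occupational specialty identifiers.
overfilled = {"SPEC-A", "SPEC-B", "SPEC-C", "SPEC-D"}  # over authorization in a given year
bonus_authorized = {"SPEC-B", "SPEC-D", "SPEC-E"}      # enlistment/reenlistment bonuses offered

overlap = overfilled & bonus_authorized
print(sorted(overlap))  # -> ['SPEC-B', 'SPEC-D']: overfilled specialties that also drew bonuses
# In the actual 2003 data, this overlap covered 278 of the 625 active duty
# occupational specialties, or 44 percent.
```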
Table 9 shows the number of consistently overfilled active component occupational specialties in which servicemembers received enlistment bonuses, selective reenlistment bonuses, or both, from fiscal years 2000 through 2005. The number of consistently overfilled occupational specialties in which servicemembers received enlistment bonuses, selective reenlistment bonuses, or both for 5 or 6 years is relatively low when compared to the total number of occupational specialties. However, the number of overfilled occupational specialties for which bonuses were authorized in a particular year can be considerably higher. Table 10 shows the number of all overfilled occupational specialties in which servicemembers received either an enlistment or selective reenlistment bonus for fiscal years 2000 through 2005. For example, in 2003, 278 out of all 625 active duty occupational specialties, or 44 percent, were overfilled, and servicemembers in these occupational specialties received enlistment or selective reenlistment bonuses. Component officials provided reasons why they offered enlistment and selective reenlistment bonuses to servicemembers in overfilled occupational specialties. Specifically, an Army official explained that the Army targets bonuses to servicemembers at specific pay grades that are actually underfilled, even if the occupational specialty as a whole is overfilled. Additionally, an official we spoke with stated that the requirement to meet legislatively mandated aggregate personnel levels results in, at times, offering bonuses to servicemembers who will serve in overfilled specialties, simply to meet their overall mandated personnel levels. Because enlistment and selective reenlistment bonuses generally range from a few thousand dollars up to $60,000, providing these bonuses to servicemembers in overfilled occupational specialties can be quite costly. While OUSD requires components to report incentives given to servicemembers in critical occupational specialties, it does not require the components to provide fully transparent rationales for incentives they provide to servicemembers in noncritical occupational specialties, which, as we have stated, make up at least 84 percent of the components’ total occupational specialties. Some of these noncritical occupational specialties have been consistently overfilled in the past 6 fiscal years. By not requiring the components to fully justify their rationale for providing incentives to servicemembers in the consistently over- and underfilled occupational specialties, OUSD lacks the information needed to provide assurance to the Secretary of Defense and Congress that the amount of funding spent on recruiting and retention efforts is appropriately and effectively targeted to occupational specialties for which the components have the greatest need. Although DOD has reported that the components have generally met overall recruiting and retention goals for the past several years, meeting these goals in the aggregate can disguise the true challenges behind the components’ ability to recruit and retain servicemembers in the occupational specialties needed to fulfill their mission requirements. The fact that several occupational specialties have been consistently overfilled raises critical questions about affordability and whether the department is using its recruiting and retention resources most effectively.
Similarly, DOD’s consistently underfilled occupational specialties raise concerns as to how the department can meet operational demands with what appear to be chronic shortages in certain occupational specialties. This latter issue is particularly relevant given the stresses on the force from prolonged operations in Iraq; the Global War on Terrorism; and most recently, significant disaster relief efforts in the Gulf Coast region. DOD is not in a position to develop a comprehensive recruiting and retention plan to address these and other issues because it lacks complete information from the 10 components on the occupational specialties that are over- or underfilled and the reasons why these conditions exist. Moreover, without this information, the department is not in a position to effectively communicate its true recruiting and retention challenges to Congress. Recently, Congress has provided increasing amounts of funding to assist DOD’s recruiting and retention efforts. While some components have used this funding to increase financial incentives to address both aggregate and occupational-specialty-specific recruiting and retention challenges, these increases in incentives are costly. Given the fiscally constrained environment we are facing now and in years to come, DOD can no longer afford to take a “business as usual” approach to managing its force. In some cases, DOD’s components have provided financial incentives to servicemembers in occupational specialties that are overfilled. While there may be valid reasons for providing these incentives to some servicemembers in these occupational specialties, DOD does not require the components to fully justify their decisions on financial incentives, which restricts the department’s ability to provide assurance to the Secretary of Defense, Congress, and the taxpayer that the increasing funding spent on recruiting and retention is appropriately and effectively targeted to occupational specialties for which components have the greatest need. To provide greater understanding of the recruiting and retention issues and improve the department’s oversight of these issues, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Personnel and Readiness, in concert with the Assistant Secretary of Defense for Reserve Affairs, to take the following two actions: Require the 10 components to report annually on all (not just critical) over- and underfilled occupational specialties; provide an analysis of why occupational specialties are over- and underfilled; and report annually on and justify their use of enlistment and reenlistment bonuses provided to servicemembers in occupational specialties that exceed their authorized personnel levels. Develop a management action plan that will help the components to identify and address the root causes of their recruiting and retention challenges. In written comments on a draft of this report, DOD partially concurred with our two recommendations. DOD’s comments are included in this report as appendix V. DOD partially concurred with our first recommendation to require the 10 components to report annually to the Office of the Under Secretary of Defense for Personnel and Readiness on all over- and underfilled occupational specialties; provide an analysis of why specific occupational specialties are over- and underfilled; and report annually on and justify their use of enlistment and reenlistment bonuses provided to servicemembers in occupational specialties that exceed their authorized personnel levels.
DOD stated that it already has visibility over occupational specialties deemed most critical for retention, which it captures through the Balanced Scorecard process. However, as we note in this report, the occupational specialties that the components report as critical for recruitment or retention account for only a small percentage of the total number of occupational specialties—at most only 16 percent of active duty occupational specialties are reported as critical. Therefore, the department does not have visibility over at least 84 percent of its occupational specialties. DOD asserted that our definition of over- and underfilled occupational specialties (those that were over- or under their authorized levels by one or more individuals) is unreasonably strict. However, as we note in this report, we established this definition because we found that DOD lacks common criteria that define thresholds for over- and underfilled occupational specialties. In prior work, we determined that it cost the federal government about $103,000 annually, on average, to compensate each enlisted active duty servicemember in fiscal year 2004; thus we believe each individual serving in an occupational specialty that is over the authorized personnel level represents a significant cost to the government. For example, we found about 8,400 active duty servicemembers serving in consistently overfilled occupational specialties in 2005. If we apply the 2004 average compensation amount to the 8,400 servicemembers, the additional cost to the taxpayer would be about $870 million. In addition, the taxpayer bears the unnecessary costs of supporting over 22,000 servicemembers in consistently overfilled occupational specialties in the reserve and National Guard components. DOD also provided in its response an example of an overfilled occupational specialty, along with its rationale that the specialty was overfilled as a function of ramping up to meet a future strength requirement. Our report acknowledges this and other reasons the components provided for over- and underfilled occupational specialties. We believe that this example underscores the need for our recommendation that the components provide an analysis of why occupational specialties are over- or underfilled. Without this type of analysis, OSD is not in a position to assess the extent to which all specialties are over- or underfilled, determine if the overfilled and underfilled positions are justified, and identify any needed corrective action. Also, DOD agreed with the need to closely manage financial incentive programs. However, we believe that DOD needs sufficient information from the components to determine if the reasons for offering bonuses to individuals in consistently overfilled occupational specialties are, indeed, justified. We continue to be concerned that not all financial incentives may be justified. Currently, the components are not required to provide fully transparent rationales for bonuses they provide to servicemembers in noncritical occupational specialties, which, as we have stated earlier, make up at least 84 percent of the components’ total occupational specialties. Without greater transparency over the use of financial incentives, the department cannot truly know if funding spent on recruiting and retention efforts is appropriately and effectively targeted to occupational specialties for which the components have the greatest need.
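The compensation arithmetic cited in our response can be reproduced directly from the figures in the text:

```python
# Check of the compensation arithmetic above (both figures are from the text).
avg_compensation = 103_000  # average annual cost per enlisted active duty member, FY 2004
active_overfill = 8_400     # active duty members in consistently overfilled specialties, 2005

print(f"${active_overfill * avg_compensation / 1e6:.0f} million")  # -> $865 million,
# which the report rounds to about $870 million per year
```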
DOD partially concurred with our second recommendation to develop a management action plan that will help the components to identify and address the root causes of their recruiting and retention challenges. In response, DOD noted that its Enlisted Personnel Management Plan (EPMP), formally established by DOD Directive 1304.20 in July 2005, substantially achieves this recommendation. We believe DOD is moving in the right direction in addressing recruiting and retention challenges and, in this report, we acknowledge that DOD issued Directive 1304.20 that requires the components to meet aggregate and occupational-specialty-specific authorized personnel levels. However, the plan cited in DOD’s comments did not exist at the time we conducted our audit work, and we believe DOD needs to continue to move forward to establish it. DOD also stated that it has already laid the framework for this type of plan through monthly reviews of other reports and quarterly reviews of its Balanced Scorecard reports. DOD claimed these omissions in our draft report result in an incomplete description of management controls. We disagree since, as stated earlier, the EPMP did not exist at the time we conducted our audit work; furthermore, we do acknowledge in this report that the department receives quarterly updates of critical occupational specialties through the Office of the Under Secretary of Defense’s personnel and readiness report. Nevertheless, we continue to believe that these updates do not provide OSD with full transparency and oversight since these quarterly updates do not include information on at least 84 percent of active duty occupational specialties or any of the combined 859 occupational specialties of the reserve and National Guard components. We are sending copies of this report to interested congressional members; the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Chiefs of the National Guard Bureau, the Army Reserve, the Army National Guard, the Air Force Reserve, the Air National Guard, the Navy Reserve, and the Marine Corps Reserve. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. Should you or your staff have any questions regarding this report, please contact me at (202) 512-5559 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix VI. To conduct this body of work, we examined Department of Defense (DOD) policies, regulations, and directives related to recruiting and retention of military servicemembers. We also reviewed recruiting and retention reports and briefings issued by GAO, DOD, the military services, the Congressional Research Service, the Congressional Budget Office, and research organizations such as RAND. Furthermore, we analyzed individual components’ databases containing recruiting and retention data on active, reserve, and National Guard servicemembers. In the course of our work, we contacted and visited the organizations and offices listed in table 11. To determine the extent to which the active duty, reserve, and National Guard components met their aggregate recruiting and retention goals, we compared accession and reenlistment goals to their actual figures for fiscal years 2000 through 2005.
Additionally, through interviews with agency officials, we obtained data on the extent to which the components have instituted stop loss in some units, recalled members of the Individual Ready Reserve, and recruited new enlistees into their delayed entry programs. To determine the extent to which the components have met their authorized personnel levels for specific occupational specialties, we obtained data from the components on the number of servicemembers authorized and assigned for each occupational specialty as of September 30 for fiscal years 2000 through 2004 and June 30 for fiscal year 2005. We calculated the fill rate for each occupational specialty by dividing the number of servicemembers assigned to the occupational specialty by the authorization. We then counted the number of years that each occupational specialty in each component was over- or underfilled. Because most of the components have not identified acceptable thresholds for over- and underfilled occupational specialties, we used the strictest interpretation of these terms. In our analysis, if a component had one person more than its authorization, we considered the occupational specialty to be overfilled. Similarly, if a component had one person fewer than its authorization, we considered the occupational specialty to be underfilled. If an occupational specialty was overfilled for at least 5 of the 6 years, we considered the occupational specialty “consistently” overfilled. Similarly, if an occupational specialty was underfilled for at least 5 of the 6 years, we considered the occupational specialty “consistently” underfilled. Some occupational specialties have changed over the 6-year period of our analyses. In these cases, we combined the original and new occupational specialties. For example, in the Army, the occupational specialty for Divers changed identifiers from 00B to 21D in 2004. We combined these two occupational specialties by summing the entries for each year and retaining the result in our data set as one entry. In those cases where we could not determine how to merge occupational specialties, we retained the original data in our data set. To analyze the steps DOD and the components have taken to address their recruiting and retention difficulties, we interviewed key DOD officials from each component, including headquarters and recruiting commands, to obtain an understanding of recruiter, advertising, and incentive programs as well as overall recruiting and retention difficulties. We determined, through a review of DOD budget justifications for fiscal years 2000 through 2006, the associated costs of these programs. We also interviewed Office of the Secretary of Defense officials to understand overall DOD policies and future direction for reviewing recruiting and retention goals. We obtained and reviewed various accession plans, incentive programs, and marketing initiatives. To determine the extent to which personnel in consistently overfilled occupational specialties received bonuses, we obtained data from the active components on the occupational specialties for which servicemembers were qualified to receive enlistment or reenlistment bonuses in fiscal years 2000 through 2004 and for fiscal year 2005 as of June 30. We compared the occupational specialties for which servicemembers received bonuses to those occupational specialties that we found to be consistently overfilled, as well as those that were overfilled in any year.
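The fill-rate methodology described above reduces to a few lines of arithmetic. The sketch below applies the stated classification rules (a single person over or under counts, and 5 of 6 years makes a specialty "consistently" over- or underfilled) to hypothetical data.

```python
# Sketch of the fill-rate methodology described above. The data are hypothetical;
# the classification rules follow the report's stated definitions.
def fill_rate(assigned, authorized):
    return assigned / authorized

def classify(yearly):
    """yearly: list of (assigned, authorized) pairs, one per fiscal year 2000-2005."""
    over = sum(1 for a, auth in yearly if a > auth)    # one person over counts
    under = sum(1 for a, auth in yearly if a < auth)   # one person under counts
    if over >= 5:
        return "consistently overfilled"
    if under >= 5:
        return "consistently underfilled"
    return "neither"

# Hypothetical specialty: overfilled in 5 of the 6 years examined.
history = [(105, 100), (103, 100), (110, 100), (99, 100), (101, 100), (104, 100)]
print(fill_rate(*history[0]))  # -> 1.05 for the first year
print(classify(history))       # -> consistently overfilled
```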
We limited our analysis to the active components because the reserve components tend to provide enlistment and reenlistment bonuses geographically. All dollar amounts were adjusted for inflation using the gross domestic product price index published by the Bureau of Economic Analysis. To determine the reliability of data obtained for this report, we interviewed personnel from each component knowledgeable about the data sources we used, inquiring about their methods for ensuring that the data were accurate. We reviewed available data for inconsistencies and, when applicable, followed up with personnel to assess data validity and reliability. We determined that the data were sufficiently reliable to answer our objectives. We conducted our work from January 2005 through October 2005 in accordance with generally accepted government auditing standards. Tables 12 and 13 show the active, reserve, and National Guard components’ recruiting achievements for fiscal years 2000 through 2005. The Air Force introduced new metrics—Average Career Length and Cumulative Continuation Rates—in July 2005 to more accurately measure enlisted retention patterns. The tables presented in appendix IV show each component’s consistently over- and underfilled occupational specialties for fiscal year 2000 through June 2005. Derek Stewart, (202) 512-5559 ([email protected]) In addition to the individual named above, David E. Moser, Assistant Director; Michael A. Brown II; Jonathan Clark; Alissa H. Czyz; James A. Driggins; Joseph J. Faley; Neil D. Feldman; Ronald L. Fleshman; Shvetal Khanna; Marie A. Mak; Maewanda L. Michael-Jackson; Christopher R. Miller; Hilary L. Murrish; Brian D. Pegram; James W. Pearce; and Terry L. Richardson made key contributions to this report. | The Department of Defense (DOD) must recruit and retain hundreds of thousands of servicemembers each year to carry out its missions, including providing support in connection with events such as Hurricanes Katrina and Rita. In addition to meeting legislatively mandated aggregate personnel levels, each military component must also meet its authorized personnel requirements for each occupational specialty. DOD reports that over half of today's youth cannot meet the military's entry standards for education, aptitude, health, moral character, or other requirements, making recruiting a significant challenge. GAO, under the Comptroller General's authority, (1) assessed the extent to which DOD's active, reserve, and National Guard components met their enlisted aggregate recruiting and retention goals; (2) assessed the extent to which the components met their authorized personnel levels for enlisted occupational specialties; and (3) analyzed the steps DOD has taken to address recruiting and retention challenges. DOD's active, reserve, and National Guard components met most aggregate recruiting and retention goals for enlisted personnel from fiscal years (FY) 2000-2004. However, for FY 2005, 5 of 10 components--the Army, Army Reserve, Army National Guard, Air National Guard, and Navy Reserve--missed their recruiting goals by 8 to 20 percent. Most of the components met their aggregate retention goals for FY 2000-2004, but the Navy experienced shortages in FY 2005 of up to 8 percent. Also, factors such as the shrinking numbers of new recruits in delayed entry programs and the use of stop loss, which delays servicemembers from leaving active duty, indicate that the components may experience future recruiting challenges.
All components exceeded authorized personnel levels for some occupational specialties and did not meet others. Specifically, GAO found that 19 percent of DOD's 1,484 occupational specialties were consistently overfilled and 41 percent were consistently underfilled from FY 2000-2005. While the components offered reasons why occupational specialties may be over- or underfilled, GAO believes that consistently over- and underfilled occupational specialties are a systemic problem for DOD that raises two critical questions. First, what is the cost to the taxpayer to retain thousands more personnel than necessary in consistently overfilled occupational specialties? Second, how can DOD components continue to effectively execute their mission with consistently underfilled occupational specialties? In FY 2005, almost 31,000 more servicemembers than authorized served in occupational specialties that have been consistently overfilled. GAO determined that it costs the federal government about $103,000 annually, on average, to compensate each enlisted active duty servicemember in FY 2004. In contrast, DOD was unable to fill over 112,000 positions in consistently underfilled occupational specialties, raising concerns about the validity of the authorized personnel levels. DOD requires the active components to report on critical occupational specialties for recruiting and retention, which amounts to at most 16 percent of their 625 specialties. However, DOD does not require them to report on their noncritical occupational specialties, and does not require the reserve or National Guard components to report on any of their 859 specialties. Consequently, DOD does not have the necessary information to develop an effective plan to address the root causes of the components' recruiting and retention challenges. DOD has taken steps to enhance recruiting and retention, but lacks information on financial incentives provided for certain occupational specialties. GAO found that the components offered financial incentives to servicemembers in consistently overfilled occupational specialties. However, because DOD only requires the components to provide minimal justification for their use of financial incentives, it lacks the information needed to provide assurance to the Secretary of Defense, Congress, and the taxpayer that the increasing amount of funding spent on recruiting and retention is appropriately and effectively targeted to occupational specialties for which the components have the greatest need. |
Remote barcoding is a part of the Service’s letter mail automation efforts that began in 1982. In the late 1980s, the Postal Service determined that it needed a system for barcoding the billions of letters containing addresses that cannot be read by the Service’s optical character readers. Remote barcoding entails making an electronic image of these letters. The images are electronically transmitted to remote barcoding sites where data entry operators enter enough address information into a computer to permit a barcode to be applied to the letter. The barcode allows automated equipment to sort letters at later stages in the processing and delivery chain. The Service made a decision in July 1991 to contract out remote barcoding based on a cost analysis that showed that contracting out would result in an expected savings of $4.3 billion over a 15-year period. The Service’s analysis was based on the pay levels and benefits that the Service expected to provide at that time, which exceed pay levels currently expected for in-house work. In November 1993, the Postal Service reversed its decision to contract out the remote barcoding function as a result of an arbitration award. The Service expected that agreeing to use postal employees for remote barcoding would improve its relations with APWU. In 1991, the Service had determined that contracting out was appropriate because (1) the remote barcoding workers would not touch the mail and security of the mail was not at risk, (2) much of the work would be part-time employment and result in lower overall costs, and (3) technological advances in optical character recognition would enable equipment to read this mail and eventually phase out the remote barcoding. As detailed in our earlier report on the Service’s automation program, the Postal Service’s plans for remote barcoding have since changed—it now anticipates increased use of the method with no phase-out date. On the basis of the expected total work load equivalent to 23 billion letter images per year and a processing rate of 750 images per console hour, we estimate that the Service will employ the equivalent of at least 17,000 operators for remote barcoding. This is a minimum based on console hours only and does not take into account such other time as supervision, management, and maintenance. In November 1990, the clerk and carrier unions filed national grievances challenging the Service’s plan to contract out remote barcoding services. Subsequent to its July 1991 decision, the Service awarded 2-year contracts (with an option to renew for a 2-year period) to 8 firms for remote barcoding services for 17 sites. In late 1992, additional remote barcoding deployment was put on hold pending the outcome of the grievances, which ultimately went to arbitration. On May 20, 1993, the arbitrator concluded that the Service failed to honor certain contractual rights of postal employees. The decision required the Service to first offer the jobs to those postal employees who were interested in and qualified for the jobs before contracting out for the remote barcoding service. The decision did not require that the jobs be offered to new postal hires, and Postal Service officials believed that an option such as specifying a few sites to be operated by postal employees and contracting out for the remaining ones would have complied with the arbitrator’s decision. On November 2, 1993, the Service agreed with APWU that remote barcoding jobs would be filled entirely by postal employees. 
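Before turning to the deployment history that follows, note that the staffing estimate above can be reproduced with one added assumption: the number of console hours a single operator works in a year. The roughly 1,800 hours used below approximates a full-time year and is our assumption, not a Postal Service figure.

```python
# Rough check of the 17,000-operator estimate. The console hours per operator-year
# figure is our assumption (about a full-time year), not a Postal Service number.
annual_images = 23_000_000_000
images_per_console_hour = 750
console_hours_per_operator_year = 1_800  # assumed

total_console_hours = annual_images / images_per_console_hour  # about 30.7 million hours
print(round(total_console_hours / console_hours_per_operator_year))  # -> about 17,000 operators
```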
In 1994, the Service resumed remote barcoding deployment, opening 14 remote barcoding sites where postal employees are to provide services for 22 mail processing plants. In September 1994, the Service converted two contractor sites serving two plants to in-house centers. It plans to convert the remaining sites by the end of 1996 and to eventually operate up to 75 centers that would serve 268 plants and process the equivalent of about 23 billion letters annually. Based on cost data provided by the Postal Service, we compared costs incurred during a 36-week period from July 23, 1994, through March 31, 1995, for remote barcoding at the 15 contractor facilities (17 until 2 were converted to in-house operation on September 6, 1994) and the Service’s 14 in-house facilities (16 after September 6, 1994). We estimated that the total direct cost of processing 1,000 images averaged $28.18 at the in-house centers compared to $26.61 at the contractor locations, a difference of 6 percent. The cost difference was the greatest at the beginning of the period when the in-house sites were getting started and stabilized at about a 6-percent difference during the last 3 accounting periods (12 weeks). About 2.8 billion images were processed in the Service’s centers during the 36-week period. We estimated that processing these images in the in-house facilities cost the Postal Service about $4.4 million, or 6 percent more than processing them in contractor-operated sites. The 6-percent difference will increase in the future as required changes in the mix of employees staffing the postal remote barcoding centers occur. The Service uses both career and transitional employees, who earn different wages and benefits. Transitional employees receive $9.74 an hour, Social Security benefits, and earn up to one-half day annual leave every 2 weeks. The career employees start at $11.44 an hour and receive health benefits, life insurance, retirement/Social Security benefits, a thrift savings plan, sick leave, and earn up to 1 day of annual leave every 2 weeks. For the postal remote barcoding sites we reviewed, 89 percent of the workhours were generated by transitional employees. By agreement with APWU, no more than 70 percent of the workhours in these centers are to be generated by transitional employees. The Service is working toward this level, and transitional employee workhours are declining while career workhours are increasing as the Service converts and replaces its transitional employees. We estimate that had the required 70/30 ratio of transitional to career employee workhours been achieved for our comparison period, the in-house cost would have been $30.33 per 1,000 images instead of $28.18, for a cost difference of about 14 percent instead of 6 percent. The Service projects that remote barcoding will eventually barcode about 31 billion letters annually. With the remote computer reader expected to reduce the need for keying by about 25 percent, we estimated that remote barcoding centers will eventually process the equivalent of about 23 billion letters annually. If the 6 percent cost differential and the current ratio of 89 percent transitional and 11 percent career workhours were continued, we estimated the in-house cost for this volume would be about $36 million more per year, not adjusted for inflation.
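The differentials reported above can be reproduced directly from the per-1,000-image rates. A minimal sketch, using only figures stated in the report:

```python
# Reconstruction of the reported cost comparison from the stated rates.
in_house = 28.18        # $ per 1,000 images, postal centers (89/11 mix)
in_house_70_30 = 30.33  # estimated rate at the required 70/30 mix
contract = 26.61        # $ per 1,000 images, contractor sites

print(f"{(in_house - contract) / contract:.1%}")        # -> 5.9% (about 6%)
print(f"{(in_house_70_30 - contract) / contract:.1%}")  # -> 14.0%

# Extra cost of the 2.8 billion images keyed in-house over the 36 weeks:
print(f"${2_800_000_000 / 1000 * (in_house - contract) / 1e6:.1f} million")
# -> $4.4 million

# Projected annual extra cost at 23 billion images, current 89/11 mix:
print(f"${23_000_000_000 / 1000 * (in_house - contract) / 1e6:.0f} million")
# -> $36 million
```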
If the cost differential we found continues, using postal employees would cost the Service about $86 million more per year, or 14 percent, not adjusted for inflation, when the required ratio of 70 percent transitional and 30 percent career workhours has been achieved. Benefits for transitional employees that are more comparable to those for career employees were at issue in the recent contract negotiations between the Service and APWU. It is reasonable to expect that wage and other cost increases may occur in the future for both in-house and contractor-operated sites. However, if the Service and APWU agree that transitional employees will receive additional benefits, the character of the jobs held by these employees will change, and the transitional employees will become more like career postal employees. Therefore, we also estimated the in-house and contract cost for remote barcoding if the cost of transitional employee benefits were the same as the cost of career employee benefits. On this basis, our estimate is that the differential would be about $174 million, or 28 percent, not adjusted for inflation. Using images per console hour as a measure, we determined that operator speed was similar between the contract sites and the in-house centers during the 36-week period. Contract keyers processed an average of 756 images per console hour, and postal employees processed 729 per hour. Figure 1 shows that differences in keying speed were greatest at the beginning of the period and that the two rates were more comparable by the end of it. The number of images per console hour was the best measure available to us for comparing the output of postal and contract employees. However, certain factors that are important to measuring performance were not similarly applied by the Postal Service and contractors. For example, contractors can receive a bonus for exceeding 650 images per hour and incur financial penalties for falling short of 640 images per hour. The Service requires its employees to maintain the standard of 650 images per hour, but no bonuses or penalties are involved. Accuracy standards are similar but involve financial penalties only for contractors. The program that measures errors at contractor sites was not used at postal sites at the time of our review. The Service and the unions are in negotiations over what methods will be used to monitor the accuracy of postal employee operators. Additionally, productivity data for both postal and contractor sites can be skewed if the mail processing plants served by the sites do not process enough mail to keep the operators busy while they continue to be paid. The plants can also make operational decisions affecting whether a full or partial barcode is required from the remote barcoding site. Although partial barcodes are quicker to enter, thus increasing productivity in a specific center, this partially barcoded mail will have to be sorted at a higher cost somewhere downstream. The Service did not have data to break out images processed in-house and by contractors by full and partial barcoding. In commenting on a draft of this report, APWU said that the period we used for our comparison was unfair to the postal-operated sites because they were just starting up, and productivity is typically lower during such periods. As shown in figure 1 above, postal images per hour were initially lower than the contractors’ images per hour. For this reason, we did not include data from any Service-operated center during its initial 12-week training period.
Figure 1 also shows that the postal employees’ images processed per hour exceeded the contractors’ in accounting period 4. The cost difference from accounting period 4 until the end of the period was smaller than during the entire period. However, the difference did not consistently decrease throughout the period. As indicated in table 1, the difference was greater in the last accounting period than the average for both the period we used and the period recommended by the union during which the images processed per hour had leveled off. We believe that our comparison of costs over the nine accounting periods is preferable because it minimizes the effects of one-time and short-term fluctuations in cost and performance. For example, we are aware that contractor costs in the data included nonrecurring, extraordinary payments by the Postal Service of $888,000 (or 0.87 percent of contractor costs) for workers’ compensation claims at two sites. The claims covered a period beginning before our 36-week comparison period, but the Postal Service recorded the full cost in the period paid. Time did not permit us to analyze the cost data to identify and allocate all such extraordinary costs to the appropriate accounting periods. The Service’s use of transitional employees substantially reduced the difference expected earlier between contract and in-house costs. In its original 1990 cost analysis of obtaining remote barcoding services, the Postal Service estimated that over a 15-year period it could save about $4.3 billion by using contract employees. That estimate was based on using existing career level 6 pay scale employees with full pay and benefits. Under the November 1993 agreement with APWU, only 30 percent of the workhours are to be generated by career employees. This mix of transitional and career employees at the level 4 pay scale makes the Postal Service’s cost closer to the cost of contracting out. The return on investment from contracting out was estimated at 35.7 percent. The Service’s cost comparison showed that the 70-30 mix of transitional and career workhours lowered the return on investment to 20.6 percent. Postal officials said this was still considered an acceptable return. The Service estimated that using level 4 pay scale career employees only would reduce the rate of return to 8 percent. In commenting on a draft of this report, APWU pointed out that an important reason for having postal employees do this work is that the remote barcoding program, originally considered temporary, is now a permanent part of mail processing operations, and thus eliminates a reason for having contractors do it. This same rationale could be put forth by APWU and/or the Service to eliminate the reason for having temporary or transitional employees do the barcoding. If this occurred, the cost of in-house barcoding would increase significantly. We estimate that if all of the in-house workhours had been generated by career employees at the pay and benefit level for the period under review, in-house keying costs would have exceeded contracting costs by 44 percent, or $267 million annually, based on a full production rate of 23 billion images per annum. Service and APWU officials we contacted believed that a principal advantage of bringing the remote barcoding in-house was anticipated improved working relationships. Contractor representatives we contacted believed there were a number of advantages to contracting out, including lower cost, higher productivity, and additional flexibility.
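The annual projections cited above all scale the per-1,000-image cost gap to the expected 23-billion-image workload. A rough cross-check, taking the reported annual dollar differentials as given and backing out the percentage over the contractor baseline (the scenario labels are ours):

```python
# Cross-check of the annual cost projections cited above. The contractor
# rate and workload are from the report; each scenario's reported annual
# extra cost implies its percentage differential.
contract_rate = 26.61                   # dollars per 1,000 images
annual_images = 23_000_000_000          # expected full-production workload
contract_annual = annual_images / 1000 * contract_rate   # ~$612 million

scenarios = {                           # reported annual extra cost, dollars
    "current 89/11 transitional/career mix": 36_000_000,
    "required 70/30 mix": 86_000_000,
    "70/30 mix with career-level benefits": 174_000_000,
    "career employees only": 267_000_000,
}
for label, extra in scenarios.items():
    print(f"{label}: about {extra / contract_annual:.0%} above contract cost")
# -> roughly 6%, 14%, 28%, and 44%, matching the report's figures
```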
The decision to bring the remote barcoding in-house was not primarily an economic one since the Postal Service recognized it would cost more than contracting out. Postal officials expected that using postal employees for remote barcoding would improve their relations with APWU. On November 2, 1993, when the Service decided to use postal employees for remote barcoding, the Service and APWU signed a memorandum on labor-management cooperation. This memorandum was in addition to an agreement signed by the Service’s Vice President for Labor Relations and the President of APWU the same day for the use of postal employees to do remote barcoding in full settlement of all Service-APWU issues relating to implementing remote barcoding. The cooperation memorandum included six principles (see app. I) of mutual commitment to improve Service-APWU relationships throughout the Postal Service. It specified that the parties “must establish a relationship built on mutual trust and a determination to explore and resolve issues jointly.” The Postal Service’s Vice President for Labor Relations and the President of APWU said that relations improved somewhat after the November 1993 agreements. The Vice President said that the decision to use postal employees for remote barcoding was “a very close call,” but the agreements seemed to have the effect of improving discussions during the contract negotiations that had begun with the Service in 1994. He also said that APWU initially made offers in contract negotiations that looked good to the Postal Service. Subsequent to the negotiations, however, the Vice President told us that he no longer believed that the experiment in cooperation with APWU was going to improve relations. According to the Vice President, APWU seemed to have disavowed the financial foundation for the remote barcoding agreement by proposing to (1) increase transitional employees’ wages by more than 32 percent over the life of the new contract and (2) provide health benefits for transitional employees. The Postal Service believes these actions would destroy the significance of the 70/30 employee workhour mix. Further, the Vice President said that APWU continues to be responsible for more than 75 percent of pending grievances and related arbitrations, which had increased substantially from the previous year. The President of APWU said that having the remote barcoding work done by postal employees was allowing the Service and the union to build new relations from the “ground up.” He said that the cooperation memorandum mentioned above was incidental to the more fundamental agreement of the same date for postal management and the union to establish and maintain remote barcoding sites, working together through joint committees of Service and union officials. Poor relations between postal management and APWU and NALC, including a strike, were a factor prompting Congress to pass the Postal Reorganization Act of 1970. We reported in September 1994 that relations between postal management and labor unions continued to be acrimonious. When negotiating new wage rates and employee benefits, the Service and the clerks and carriers have been able to reach agreement six out of nine times. However, for three of the last four times, the disputes proceeded to binding arbitration. Our September 1994 report detailed numerous problems on the workroom floor that management and the labor unions needed to address. 
We recommended that, as a starting point, the Service and all the unions and management associations negotiate a long-term framework agreement to demonstrate a commitment to improving working relations. Our follow-up work showed that the Postal Service and APWU are still having difficulty reaching bilateral agreements. Following the 1993 cooperation agreement, the Postal Service and APWU began negotiations for a new contract to replace the 4-year contract that expired in November 1994. No final and complete agreement could be reached on all subjects in the negotiations, and the parties mutually agreed to engage in a period of mediation. The Postal Service and APWU did not reach agreement for a new contract, and the dispute has now been referred to an arbitrator as provided for in the 1970 act. Further, the Postal Service and APWU, as well as two of the three other major unions, have been unable to agree to meet on an overall framework agreement that we recommended to deal with longstanding labor-management problems on the workroom floor detailed in our September 1994 report. In response to our report, the Postmaster General invited the leadership of all unions and management associations to a national summit to begin formulating such an agreement. APWU, NALC, and the National Postal Mailhandlers Union did not accept the invitation, saying that the negotiation of new contracts needed to be completed first. Service officials, union officials, and contractor representatives we contacted cited other advantages and disadvantages of using postal employees rather than contractors for remote barcoding. The Vice President for Labor Relations said that the mix of transitional and career employees may create some management problems. He said the different types of employees receiving different wage rates and benefits, but working side by side doing the same work at remote barcoding sites, may create employee morale problems. However, he also said that the career-transitional mix provided the Service with the advantage of offering transitional employees opportunities for career postal jobs. APWU officials said that remote barcoding is an integral part of mail processing and relies upon rapidly evolving technology, which they believed should not be separated into in-house and contractor operations because of a potential loss of management control and flexibility. They also said that the decision to use postal employees for remote barcoding was justified on the basis of cost studies by the Service showing a favorable return on investment. Contractor representatives cited a number of advantages to using contract employees. They said that, for a variety of reasons, contractor sites are less costly than postal sites. They believed that contract employees operate at higher productivity rates because contractors, unlike the Postal Service, can provide incentive pay that results in higher keying rates. They also said that contractors can exercise more flexibility in handling variations in mail volume levels because of procedures for adjusting staffing levels on 2-hour notice, as provided in the contracts. However, Service officials pointed out that under the 1993 agreement with APWU, transitional employees can be sent home without notice if work is not available, but the career employees cannot.
Our objectives were to (1) compare, insofar as postal data were available, the direct costs of contracting out remote barcoding with the direct costs of having the work done by postal employees; and (2) identify possible advantages and disadvantages of using postal employees rather than contractors to do the work. At Postal Service headquarters, we interviewed Service officials responsible for remote barcoding implementation and contracting, as well as those responsible for the Service’s labor relations and financial management. We met on two occasions with the President of the American Postal Workers Union and other union officials and with three representatives of remote barcoding contractors to obtain their views on the advantages and disadvantages of using postal employees for remote barcoding services. We visited two remote barcoding sites: the contractor site in Salem, VA, and the Lynchburg, VA, site, which recently converted to in-house operation. We also reviewed, but did not verify to underlying source records, Postal Service data on costs associated with remote barcoding done by contract and postal employees. Further, we confirmed our understanding of remote barcoding and verified some of our information by reviewing the results of related work done in March and April 1995 by the Postal Inspection Service. The Inspection Service did its work at five remote barcoding sites (three Service-operated, including one recently converted from contractor-operated, and two contractor-operated) to compare and contrast certain administration and management practices followed at the sites. Details on our cost comparison methodology are contained in appendix II. A draft of this report was provided to heads of the Postal Service, APWU, and the Contract Services Association of America for comment in April 1995. Subsequent to the initial distribution of the draft, the Postal Service provided us with revised cost data. We provided a revised draft to the three organizations prior to completion of the comment process, and the comments received were based on the second draft. We did our work from March through June 1995 in accordance with generally accepted government auditing standards. The Postal Service, APWU, and the Contract Services Association of America provided written comments on a draft of this report. The Postal Service concurred with the information contained in the report regarding the costs of remote barcoding in contractor and postal operated sites and the reasons for bringing the work in-house. The Service said that it had hoped that bringing the remote barcoding work in-house would foster better relations with APWU. The Service expressed disappointment that APWU continued to maintain an adversarial posture that hindered progress toward improving their relationship. (See app. III for the text of the Postal Service’s comments.) APWU characterized our draft report as being inaccurate and substantially biased. It also expressed the opinion that a report on this subject is premature because the data necessary for adequate evaluation are not yet available. More specifically, APWU said that the draft report (1) overstated the cost of in-house barcoding, (2) understated the costs of contracting out, (3) ignored important considerations that favor doing the work in-house, and (4) understated the significance of improvements in labor relations made possible by the APWU/Postal Service agreement to do remote barcoding in-house. 
APWU criticized the draft report as being premature because we used data from a period when postal remote barcoding facilities were just beginning operations, while contractor facilities represented mature operations, thereby overstating the cost of in-house operations. It said that this mature versus start-up comparison imparted a serious bias to our estimate of the cost differential. While we agree that a longer comparison period would have been preferable, a longer period did not exist for the comparison we were asked to perform. It is also important to note that we excluded from the 36-week time period we used for our cost comparison the initial 12-week training period that each in-house site experienced before becoming operational. In response to APWU’s comments, we clarified our text to more clearly convey that our comparison excluded the 12-week training period for the in-house sites. We also further analyzed the data to identify variances in costs during the 36-week period, especially the later part of the 36-week period, when in-house sites were more mature. This analysis showed that in-house operations were consistently more expensive than contractor operations. We noted that the in-house operations will become more expensive if the workforce mix changes to include more career employees and fewer transitional employees as is presently planned, and/or if the transitional employees receive increased benefits. We also qualified our estimates of future costs by pointing out that circumstances could change and discussing how that might happen. APWU asserted that the draft report understated the cost of contracting for remote barcoding because we ignored such potential costs as overruns by government contractors and future strikes by contract employees. We did not ignore the possibility of increased contractor costs. We limited our cost analysis to actual costs because we had no basis for assigning dollar values to possible future events, such as employee strikes and potential cost overruns by contractors. Instead, we provided a narrative discussion of such factors. We expanded our discussion of these factors in response to APWU’s comments. APWU also said that the draft report ignored important considerations favoring in-house operations, such as the importance to postal managers of maintaining full integration and control of the barcoding effort. APWU asserted that in-house operations are inherently preferable from a management point of view. We do not believe that this necessarily holds true. A broad body of work we have done in other areas shows some successes and economies that have resulted from contracting out certain activities by various federal, state, and local governments. APWU also said that the draft report understated the significance of improvements in labor relations made possible by the agreement between APWU and the Postal Service to perform remote barcoding in-house. APWU characterized the agreement as a cornerstone of the parties’ efforts to build a constructive and productive relationship and cited some examples that it considered to be representative of positive progress in efforts to improve the relationship between the parties. After receiving APWU’s comments, we revisited with Postal Service officials the issue of the effect of the agreement on labor-management relations to assure ourselves that we had correctly characterized the Postal Service’s position.
The officials confirmed that we had, explaining that while the Postal Service believed at the time that the agreement was reached it would have a positive effect, the Service now believes that its relationship with APWU has deteriorated since the 1993 agreement. We added language to further ensure that the final report presents a balanced discussion of the differing views of the affected parties. (See app. IV for the text of APWU’s comments and our detailed response to these comments.) The Contract Services Association of America believed we should have put more information into our report regarding what the Association said was a complete breakdown in the Postal Service’s labor-management relations. In view of our previous extensive work evaluating the state of labor-management relations in the Postal Service, we did not evaluate labor-management relations; but at various places in the report, we describe the various parties’ perceptions of the labor-management relationship. The Contract Services Association of America also offered other comments and technical clarifications, which we incorporated in the report where appropriate. (See app. V for the text of the Contract Services Association of America’s comments.) We are providing copies of this report to Senate and House postal oversight and appropriation committees, the Postmaster General, the Postal Service Board of Governors, the Postal Rate Commission, the American Postal Workers Union, and other interested parties. Major contributors to the report are listed in appendix VI. If you have any questions, please call me on (202) 512-8387. “1. The APWU and the Postal Service hereby reaffirm their commitment to and support for labor-management cooperation at all levels of the organization to ensure a productive labor relations climate which should result in a better working environment for employees and to ensure the continued viability and success of the Postal Service. “2. The parties recognize that this commitment and support shall be manifested by cooperative dealings between management and the Union leadership which serves as the spokesperson for the employees whom they represent. “3. The parties recognize that the Postal Service operates in a competitive environment and understand that each Postal Service product is subject to volume diversion. Therefore, it is imperative that management and the Union jointly pursue strategies which emphasize improving employee working conditions and satisfying the customer in terms of service and costs. A more cooperative approach in dealings between management and APWU officials is encouraged on all issues in order to build a more efficient Postal Service. “4. The Postal Service recognizes the value of Union involvement in the decision making process and respects the right of the APWU to represent bargaining unit employees. In this regard, the Postal Service will work with and through the national, regional, and local Union leadership, rather than directly with employees on issues which affect working conditions and will seek ways of improving customer service, increasing revenue, and reducing postal costs. Management also recognizes the value of union input and a cooperative approach on issues that will affect working conditions and Postal Service policies. The parties affirm their intent to jointly discuss such issues prior to the development of such plans or policies. “5. 
The APWU and the Postal Service approve the concept of joint meetings among all organizations on issues of interest to all employees, but which are not directly related to wages, hours or working conditions, such as customer service, the financial performance of the organization and community-related activities. In this regard, the APWU will participate in joint efforts with management and other employee organizations to address these and other similar issues of mutual interest. “6. On matters directly affecting wages, hours or working conditions, the Postal Service and the APWU recognize that separate labor-management meetings involving only the affected Union or Unions are necessary. The parties are encouraged to discuss, explore, and resolve these issues, provided neither party shall attempt to change or vary the terms or provisions of the National Agreement.” The Postal Service’s fiscal year is made up of 13 4-week accounting periods. The time period we selected for comparing the cost of contract and in-house remote barcoding included nine accounting periods (36 weeks) from July 23, 1994, through March 31, 1995. We selected the July 23, 1994, date because this was the first day of the first accounting period after the Service-operated remote barcoding centers completed the 12-week training period for the first system. We then included data on each in-house center for the first full accounting period following the period in which the 12-week training period was completed. We did not include two centers (Lumberton, NC, and Laredo, TX) for the accounting period in which they were converted to in-house sites. We determined direct costs incurred by the in-house centers as reflected by the Postal Service Financial Reporting System and contract records for the selected accounting periods. This included all significant costs, such as the pay and benefits for employees and on-site supervisors and managers (about 94 percent of the direct cost), equipment maintenance, communication lines, travel, training, rent, utilities, and supplies. To this we added factors for Service-wide employee compensation not charged directly to any postal operations. These included the Postal Service’s payments for certain retirement, health and life insurance, and workers’ compensation costs, and increases in accrued leave liability due to pay raises. According to Postal Service data, these additional compensation costs ranged between 1.3 and 8.9 percent of direct pay and benefits for transitional and career employees in 1994 and 1995. Except for contract administration personnel, we did not allocate any headquarters costs to the in-house or contractor sites. This was because these costs were unlikely to be significantly different regardless of whether the sites were contracted out or operated in-house. Postal Service area offices incurred some cost for remote barcoding. Some area offices had appointed remote barcoding system coordinators, who spent some time assisting and overseeing the postal sites. Their level of involvement in the centers varied from area to area, and data on the amount of involvement were not readily available centrally. We did not attempt to estimate this cost because of the lack of data and because we do not believe it would have been large enough to materially affect our results. For the contractor sites, we used the actual contract cost to the Postal Service, which included the full cost of the remote barcoding services, except for equipment maintenance.
We added the contract cost of maintenance for the equipment at the contractor sites, which was provided by the Postal Service to the contractors. We also added the cost of Postal Service personnel involved in administering the contracts, both at headquarters and at the facilities serviced by the coding centers. The estimate of this cost was provided by the Postal Service. The following are GAO’s comments on the letter dated July 14, 1995, from the American Postal Workers Union. 1. In light of APWU’s view that the 36-week period we used was not representative, we included an additional analysis in the report covering shorter and more recent time periods. This analysis shows that the cost difference varies depending on the period selected. Using the most recent 4-week period, the cost for in-house keying was greater than for the full 36-week period. However, because costs for any given period can contain extraordinary payments, we believe comparison periods should be as long as feasible to minimize the effects of those nonrecurring costs. 2. APWU suggested that our analysis failed to recognize some of the direct costs associated with the entire remote barcoding program, including capital costs. The total cost of the remote barcoding program was not the focus of our review. Our objective was to compare the direct cost of performing remote keying services in-house versus under contract. Where the cost to the Postal Service was the same whether the work was to be done in-house or by contract, we did not include such cost in our comparison. This methodology is consistent with the Service’s Guidelines for the Preparation of Cost Data for Comparison With Contracting Out Proposals. Using this approach, we did not include such costs as video display terminals, keyboards, and computers, for example, that were provided as government-furnished equipment to the contractors and also used at postal-operated sites. Our report discloses in appendix II the cost elements that we considered in our comparison and identifies cost elements not considered. 3. APWU asserted that the draft report understated the cost of contracting for remote barcoding because we ignored such potential costs as overruns by government contractors. It is true that we have reported on cost overruns incurred by government contractors. However, our reports citing contractor overruns were based on after-the-fact evaluations of actual contract costs compared to estimated contract costs. In addition, many instances of cost overruns occur when the scope of work is not well defined and deals with advanced technologies. This does not appear to be the case in remote barcoding where the scope of work is well defined. In addition, it would not be appropriate for us to speculate about the future cost that might be incurred by the Service’s remote keying contractors. 4. APWU said that our draft report ignored important reasons for having postal employees do remote barcoding, citing as one reason that the remote barcoding program is no longer considered temporary. While the point that the remote barcoding program is no longer considered a temporary program would be a valid consideration in a decision on whether to contract out, it was not cited by Postal Service officials in any records we reviewed or in our discussion with Service officials as a reason for having postal employees do the work. Rather, the reasons were related primarily to anticipated improvements in the Service’s relations with APWU. 
We estimate that if all of the in-house workhours had been generated by career employees at the pay and benefit level for the period under review, in-house keying costs would have exceeded contracting costs by 44 percent, or $267 million annually, based on a full production rate of 23 billion images per annum. 5. APWU said that our analysis did not take into consideration several contractor costs that could be passed on to the Postal Service. APWU said that it and several other unions were prepared to organize contractor employees and that even moderate organizing success would change the results of our cost analysis. As an example, APWU pointed to one contractor site where the contractors’ employees received health benefits. APWU apparently did not understand that we had in fact included these health benefit costs in our comparison. We agree that potential future costs could affect the cost differential if they occur; however, we have no basis for anticipating what the dollar value of such costs might be. Thus, we used actual cost data when available and discussed in narrative fashion possible changes in circumstances that might affect future costs. 6. APWU said that while our draft report observed that contract employees can receive a bonus for exceeding 650 images per hour, we did not estimate the cost impact of these potential bonuses. The costs for contracting out that we used in our estimates included the cost of actual bonuses paid to contractors for exceeding the standard of 650 images per hour and thus include the cost impact of this factor. We had no basis for estimating how bonuses may change in future periods. 7. APWU stated that the draft report failed to analyze barcoding error rates. The cost for contracting out that we used included penalties assessed against contractors for exceeding the maximum 3-percent error rate. We revised the text to clarify the reason that we could not compare error rates of postal employees and contract employees. 8. We recognize in the report that APWU believes that the agreement to bring the remote barcoding in-house has improved labor relations. However, the report also recognizes that this view does not agree with the Postal Service’s view. Moreover, the Postmaster General has recently said that it is clear that the collective bargaining process is broken. We deleted the word rarely and revised the text to reflect that the union has gone to interest arbitration three out of nine times. We made no judgments about the attitudes of postal employees. Rather, our report attributes to a Postal Service official the comment that a potential employee morale problem could result from the mix of transitional and career employees. 9. APWU said that the draft report was a biased document requested by a Subcommittee of the Committee on Appropriations for political reasons, including pressure to affect collective bargaining positions. The Subcommittee has not suggested to us in any way what the results of our analysis should be. We approached this assignment like all others, attempting to meet our customer’s legitimate oversight needs in an objective, independent, and timely manner. 10. APWU stated that our initial draft was flawed. As explained in our Objectives, Scope, and Methodology section of this report (see p. 12), subsequent to the initial distribution of a draft of this report, the Postal Service provided us with revised cost data. We provided a revised draft to APWU prior to completion of the comment process. 
We considered the comments of APWU in preparing this report. We received APWU comments in two meetings, both of which were attended by the APWU President, other APWU officials, and outside legal and economic advisers to APWU. APWU also provided written comments on a draft of this report, which are included in full. 11. APWU stated that the draft is still flawed, biased, and largely invalid. We believe that the data included in our report provide a fair (and best available) representation of the actual cost of operating remote barcoding sites by the Postal Service and by contractors for the periods indicated. As stated in the report, future cost differentials will depend on the circumstances at that time. 12. APWU believed that our use of a Postal Service analysis performed in prior years was misleading. We included the Service’s 1990 cost estimate because it led to the decision, followed until 1993, to use contractors for all remote barcoding services. We revised the text to reflect that the original Postal Service estimate was based on level 6 employees and that currently level 4 employees do the work at in-house sites. 13. In summary, APWU said that our draft report was inaccurate and substantially biased. APWU urged us to ensure that the final report is sufficiently balanced and appropriately qualified. We reviewed the draft report to further ensure that it presented the results of our analysis clearly and with a balanced tone. As discussed in our preceding comments, we added information and language where we thought it helped to clarify the report’s message or the positions of the affected parties. Major contributors to this report (appendix VI): James T. Campbell, Assistant Director; Anne M. Hilleary; Leonard G. Hoglan; Loretta K. Walch. | Pursuant to a congressional request, GAO compared the direct costs to the U.S. Postal Service of contracting out for remote barcoding services versus having the work done by postal employees, focusing on the advantages and disadvantages of using postal employees for these services.
GAO found that: (1) in-house barcoding would cost an estimated 6 percent more than using contractors, based on a mix of 89 percent transitional and 11 percent career employee workhours; (2) the cost differential is expected to increase to 14 percent, or about $86 million annually at a volume of 23 billion letters, under the union agreement requiring 70 percent transitional and 30 percent career employee workhours; (3) if transitional employees receive benefits similar to those of career employees, as the union has requested, the cost differential would increase to 28 percent, or $174 million annually; (4) the Postal Service expected that using postal employees for barcoding would improve its relations with the union; and (5) the postal union believes that using postal employees for barcoding provides the opportunity for the Postal Service and the union to cooperate in establishing and operating remote barcoding sites. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
From April 24 through September 11, 2000, the U.S. Census Bureau surveyed a sample of about 314,000 housing units (about 1.4 million census and A.C.E. records in various areas of the country, including Puerto Rico) to estimate the number of people and housing units missed or counted more than once in the census and to evaluate the final census counts. Temporary bureau staff conducted the surveys by telephone and in-person visits. The A.C.E. sample consisted of about 12,000 “clusters” or geographic areas that each contained about 20 to 30 housing units. The bureau selected sample clusters to be representative of the nation as a whole, relying on variables such as state, race and ethnicity, owner or renter, as well as the size of each cluster and whether the cluster was on an American Indian reservation. The bureau canvassed the A.C.E. sample area, developed an address list, and collected response data for persons living in the sample area on Census Day (April 1, 2000). Although the bureau’s A.C.E. data and address list were collected and maintained separately from the bureau’s census work, A.C.E. processes were similar to those of the census. After the census and A.C.E. data collection operations were completed, the bureau attempted to match each person counted by A.C.E. to the list of persons counted by the census in the sample areas to determine the number of persons who lived in the sample area on Census Day. The results of the matching process, together with the characteristics of each person compared, provided the basis for statistical estimates of the number and characteristics of the population missed or improperly counted by the census. Correctly matching A.C.E. persons with census persons is important because errors in even a small percentage of records can significantly affect the undercount or overcount estimate. Matching over 1.4 million census and A.C.E. records was a complex and often labor-intensive process. Although several key matching tasks were automated and used prespecified decision rules, other tasks were carried out by trained bureau staff who used their judgment to match and code records. The four phases of the person matching process were (1) computer matching, (2) clerical matching, (3) nationwide field follow-up on records requiring more information, and (4) a second phase of clerical matching after field follow-up. Each subsequent phase used additional information and matching rules in an attempt to match records that the previous phase could not link. Computer matching took pairs of census and A.C.E. records and compared various personal characteristics such as name, age, and gender. The computer then calculated a match score for the paired records based on the extent to which the personal characteristics were aligned. Experienced bureau staff reviewed the lists of paired records, sorted by their match scores, and judgmentally assigned cutoff scores. The cutoff scores were break points used to categorize the paired records into one of three groups so that the records could be coded as a “match,” “possible match,” or one of a number of codes that define them as not matched. Computer matching successfully assigned a match score to nearly 1 million of the more than 1.4 million records reviewed (about 66 percent). Bureau staff documented the cutoff scores for each of the match groups.
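The scoring-and-cutoff design described above is the standard two-threshold pattern in automated record linkage: pairs scoring above an upper cutoff are coded as matches, pairs below a lower cutoff as not matched, and the band in between goes to clerical review. The report does not describe the bureau's actual scoring algorithm or cutoff values, so the field weights and thresholds in this sketch are invented for illustration:

```python
# Illustrative two-cutoff record linkage in the style described above.
# Field weights and cutoff values are invented; the bureau's actual
# scoring rules are not given in the report.
def match_score(a: dict, b: dict) -> float:
    """Score an A.C.E./census record pair on shared personal characteristics."""
    score = 0.0
    if a["last"] == b["last"]:
        score += 4.0
    if a["first"] == b["first"]:
        score += 3.0
    if a["gender"] == b["gender"]:
        score += 1.0
    if abs(a["age"] - b["age"]) <= 1:   # tolerate a small age discrepancy
        score += 2.0
    return score

MATCH_CUTOFF = 8.0      # judgmentally assigned break points; pairs scoring
POSSIBLE_CUTOFF = 5.0   # in between go to clerical matching

def classify(score: float) -> str:
    if score >= MATCH_CUTOFF:
        return "match"
    if score >= POSSIBLE_CUTOFF:
        return "possible match"         # resolved later by clerical matchers
    return "not matched"

ace = {"first": "MARY", "last": "SMITH", "age": 34, "gender": "F"}
census = {"first": "MARY", "last": "SMITH", "age": 35, "gender": "F"}
print(classify(match_score(ace, census)))   # -> match
```

Where the cutoffs are placed trades computer-assigned links against clerical workload, a point the report turns to next.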
However, bureau staff did not document the criteria or rules used to determine cutoff scores, the logic of how they applied them, or examples of their application. As a result, the bureau may not benefit from the possible lessons learned on how to apply cutoff scores. When the computer links few records as possible matches, clerks will spend more time searching records and linking them. In contrast, when the computer links many records as possible matches, clerks will spend less time searching for records to link and more time unlinking them. Without documentation and knowledge of the effect of cutoff scores on clerical matching productivity, future bureau staff will be less able to determine whether to set cutoff scores to link few or many records together as possible matches. During clerical matching, three levels of matchers—including over 200 clerks, about 40 technicians, and 10 experienced analysts or “expert matchers”—applied their expertise and judgment to manually match and code records. A computer software system managed the workflow of the clerical matching stages. The system also provided access to additional information, such as electronic images of census questionnaires that could assist matchers in applying criteria to match records. According to a bureau official, a benefit of clerical matching was that records of entire households could be reviewed together, rather than just individually as in computer matching. During this phase, over a quarter million records (or about 19 percent) were assigned a final match code. The bureau taught clerks how to code records in situations in which the A.C.E. and census records differed because one record contained a nickname and the other contained the birth name. The bureau also taught clerks how to code records with abbreviations, spelling differences, middle names used as first names, and first and last names reversed. These criteria were well documented in both the bureau’s procedures and operations memorandums and clerical matchers’ training materials, but how the criteria were applied depended on the judgment of the matchers. The bureau trained clerks and technicians for this complex work using as examples some of the most challenging records from the 1998 Dress Rehearsal person matching operation. In addition, the analysts had extensive matching experience. For example, the 4 analysts that we interviewed had an average of 10 years of matching experience on other decennial census surveys and were directly involved in developing the training materials for the technicians and clerks. The bureau conducted a nationwide field follow-up on over 213,000 records (or about 15 percent) for which the bureau needed additional information before it could accurately assign a match code. For example, sometimes matchers needed additional information to verify that possibly matched records were actually records of the same person, that a housing unit was located in the sample area on Census Day, or that a person lived in the sample area on Census Day. Field follow-up questionnaires were printed at the National Processing Center and sent to the appropriate A.C.E. regional office. Field follow-up interviewers from the bureau’s regional offices were required to visit specified housing units and obtain information from a knowledgeable respondent. If the household member for the record in question still lived at the A.C.E.
address at the time of the interview and was not available to be interviewed after six attempts, field follow-up interviewers were allowed to obtain information from one or more knowledgeable proxy respondents, such as a landlord or neighbor. The second phase of clerical matching used the information obtained during field follow-up in an attempt to assign a final match code to records. As in the first phase of clerical matching, the criteria used to match and code records were well documented in both the bureau’s procedures and operations memorandums and clerical matchers’ training materials. Nevertheless, in applying those criteria, clerical matchers had to use their own judgment and expertise. This was particularly true when matching records that contained incomplete and inconsistent information, as noted in the following examples. Different household members provided conflicting information. The census counted one person—the field follow-up respondent. A.C.E. recorded four persons—including the respondent and her daughter. The respondent, during field follow-up, reported that all four persons recorded by A.C.E. lived at the housing unit on Census Day. During the field follow-up interview, the respondent’s daughter came to the house and disagreed with the respondent. The interviewer changed the answers on the field follow-up questionnaire to reflect what the daughter said—the respondent was the only person living at the household address on Census Day. The other three people were coded as not living at the household address on Census Day. According to bureau staff, the daughter’s response seemed more reliable. An interviewer’s notes on the field follow-up questionnaire conflicted with recorded information. The census counted 13 people—including the respondent and 2 people not matched to A.C.E. records. A.C.E. recorded 12 people—including the respondent, 10 other matched people, and the respondent’s daughter who was not matched to census records. The field follow-up interview attempted to resolve the unmatched census and A.C.E. people. Answers to questions on the field follow-up questionnaire verified that the daughter lived at the housing address on Census Day. However, the interviewer’s notes indicated that the daughter and the respondent were living in a shelter on Census Day. The daughter was coded as not living at the household address on Census Day, while the respondent remained coded as matched and living at the household address on Census Day. According to bureau staff, the respondent should also have been coded as a person that did not live at the household address on Census Day, based on the notes on the field follow-up questionnaire. A.C.E., census, or both counted people at the wrong address. The census counted two people—the respondent and her husband—twice; once in an apartment and once in a business office that the husband worked in, both in the same apartment building. The A.C.E. did not record anyone at either location, as the residential apartment was not in the A.C.E. interview sample. The respondent, during field follow-up, reported that they lived at their apartment on Census Day and not at the business office. The couple had responded to the census on a questionnaire delivered to the business office. A census enumerator, following up on the “nonresponse” from the couple’s apartment, had obtained census information from a neighbor about the couple.
The couple, as recorded by the census at the business office address, was coded as correctly counted in the census. The couple, as recorded by the census at the apartment address, was coded as living outside the sample block. According to bureau staff, the couple recorded at the business office address were correctly coded, but the couple recorded at the apartment should have been coded as duplicates. An uncooperative household respondent provided partial or no information. The census counted a family of four—the respondent, his wife, and two daughters. A.C.E. recorded a family of three—the same husband and wife, but a different daughter’s name, “Buffy.” The field follow-up interview covered the unmatched daughters—two from census and one from A.C.E. The respondent confirmed that the four people counted by the census were his family and that “Buffy” was a nickname for one of his two daughters, but he would not identify which one. The interviewer wrote in the notes that the respondent “was upset with the number of visits” to his house. “Buffy” was coded as a match to one of the daughters; the other daughter was coded as counted in the census but missed by A.C.E. According to bureau staff, since the respondent confirmed that “Buffy” was a match for one of his daughters—although not which one—and that four people lived at the household address on Census Day, they did not want one of the daughters coded so that she was possibly counted as a missed census person. Since each record had to have a code identifying whether it was a match by the end of the second clerical matching phase, records that did not contain enough information after field follow-up to be assigned any other code were coded as “unresolved.” The bureau later imputed the match code results for these records using statistical methods. While imputation for some situations may be unavoidable, it introduces uncertainty into estimates of census over- or undercount rates. The following are examples of situations that resulted in records coded as “unresolved.” Conflicting information was provided for the same household. The census counted four people—a woman, an “unmarried partner,” and two children. A.C.E. recorded three people—the same woman and two children. During field follow-up, the woman reported to the field follow-up interviewer that the “unmarried partner” did not really live at the household address, but just came around to baby-sit, and that she did not know where he lived on Census Day. According to bureau staff, probing questions during field follow-up determined that the “unmarried partner” should not have been coded as living at the housing unit on Census Day. Therefore, the “unmarried partner” was coded as “unresolved.” A proxy respondent provided conflicting or inaccurate information. The census counted one person—a female renter. A.C.E. did not record anyone. The apartment building manager, who was interviewed during field follow-up, reported that the woman had moved out of the household address sometime in February 2000, but the manager did not know the woman’s Census Day address. The same manager had responded to an enumerator questionnaire for the census in June 2000 and had reported that the woman did live at the household address on Census Day. The woman was coded as “unresolved.” The bureau employed a series of quality assurance procedures for each phase of person matching.
The bureau reported that person matching quality assurance was successful at minimizing errors because the quality assurance procedures found error rates of less than 1 percent. Clerks were to review all of the match results to ensure, among other things, that the records linked by the computer were not duplicates and contained valid and complete names. Moreover, according to bureau officials, the software used to link records had proven itself during a similar operation conducted for the 1990 Census. The bureau did not report separately on the quality of computer-matched records. Although there were no formal quality assurance results from computer matching, at our request the bureau tabulated the number of records that the computer had coded as “matched” that had subsequently been coded otherwise. According to the bureau, the subsequent matching process resulted in a different match code for about 0.6 percent of the almost 500,000 records initially coded as matched by the computer. Of those records having their codes changed by later matching phases, over half were eventually coded as duplicates and almost all of the remainder were rematched to someone else. Technicians reviewed the work of clerks and analysts reviewed the work of technicians primarily to find clerical errors that (1) would have prevented records from being sent to field follow-up, (2) could cause a record to be incorrectly coded as either properly or erroneously counted by the census, or (3) would cause a record to be incorrectly removed from the A.C.E. sample. Analysts’ work was not reviewed. Clerks and technicians with error rates of less than 4 percent had a random sample of about 25 percent of their work reviewed, while clerks and technicians exceeding the error threshold had 100 percent of their work reviewed. About 98 percent of clerks in the first phase of matching had only a sample of their work reviewed. According to bureau data, less than 1 percent of match decisions were revised during quality assurance reviews, leading the bureau to conclude that clerical matching quality assurance was successful. Under certain circumstances, technicians and analysts performed additional reviews of clerks’ and technicians’ work. For example, if during the first phase of clerical matching a technician had reviewed and changed more than half of a clerk’s match codes in a given geographic cluster, the cluster was flagged for an analyst to review all of the clerk and technician coding for that area. During the second phase, analysts were required to make similar reviews when only one of the records was flagged for their review. This is one of the reasons why, as illustrated in figure 2, these additional reviews made up a much more substantial part of the clerks’ and technicians’ workload that was subsequently reviewed by more senior matchers. The total percentage of workload reviewed ranged from about 20 to 60 percent across phases of clerical matching, far in excess of the 11-percent quality assurance level for the bureau’s person interviewing operation. The quality assurance plan for the field follow-up phase had two general purposes: (1) to ensure that questionnaires had been completed properly and legibly and (2) to detect falsification. Supervisors initially reviewed each questionnaire for legibility and completeness. These reviews also checked the responses for consistency. Office staff were to conduct similar reviews of each questionnaire.
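The error-rate-driven review sampling for clerks and technicians described above reduces to a simple rule. A minimal sketch, using the 4-percent threshold and the roughly 25-percent sample rate from the report; the names and the fixed seed are illustrative:

```python
# Sketch of the error-rate-driven QA sampling rule described above.
# The 4 percent threshold and ~25 percent sample rate are from the
# report; everything else is illustrative.
import random

ERROR_THRESHOLD = 0.04   # matchers at or above this error rate get full review
SAMPLE_RATE = 0.25       # low-error matchers get ~25 percent of work reviewed

def select_for_review(work_items: list, error_rate: float,
                      rng: random.Random) -> list:
    """Return the subset of a matcher's work that goes to QA review."""
    if error_rate >= ERROR_THRESHOLD:
        return list(work_items)                        # 100 percent review
    k = max(1, round(SAMPLE_RATE * len(work_items)))   # random ~25 percent
    return rng.sample(work_items, k)

rng = random.Random(2000)
clusters = [f"cluster-{i}" for i in range(40)]
print(len(select_for_review(clusters, 0.02, rng)))     # -> 10 (sampled)
print(len(select_for_review(clusters, 0.05, rng)))     # -> 40 (full review)
```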
To detect falsification, the bureau was to review and edit each questionnaire at least twice and recontact a random sample of 5 percent of the respondents. As shown in figure 3, all 12 of the A.C.E. regional offices exceeded the 5 percent requirement by selecting more than 7 percent of their workload for quality assurance review, and the national rate of quality assurance review was about 10 percent. At the local level, however, there was greater variation. There are many reasons why the quality assurance coverage can appear to vary locally. For example, a local census area could have a low quality assurance coverage rate because interviewers in that area had their work reviewed in other areas, or the area could have had an extremely small field follow-up workload, making the difference of just one quality assurance questionnaire constitute a large percentage of the local workload. Seventeen local census office areas (out of 520 nationally, including Puerto Rico) had 20 percent or more of field follow-up interviews covered by the quality assurance program, and, at the other extreme, 5 local census areas had 5 percent or less of the work covered by the quality assurance program. Less than 1 percent of the randomly selected questionnaires failed quality assurance nationally, leading the bureau to report this quality assurance operation as successful. When recontacting respondents to detect falsification by interviewers, quality assurance supervisors were to determine whether the household had been contacted by an interviewer, and if it had not, the record of that household failed quality assurance. According to bureau data, about 0.8 percent of the randomly selected quality assurance questionnaires failed quality assurance nationally. This percentage varied between 0 and about 3 percent across regions. The bureau carried out person matching as planned, with only a few procedural deviations. Although the bureau took action to address these deviations, it has not determined how matching results were affected. As shown in table 1, these deviations included (1) census files that were delivered late, (2) a programming error in the clerical matching software, (3) printing errors in field follow-up forms, (4) regional offices that sent back incomplete questionnaires, and (5) the need for additional time to complete the second phase of clerical matching. It is unknown what, if any, cumulative effect these procedural deviations may have had on the quality of matching for these records or on the resultant A.C.E. estimates of census undercounts. However, bureau officials believe that the effect of the deviations was small based on the timely responses taken to address them. The bureau conducted reinterviewing and re-matching studies on samples of the 2000 A.C.E. sample and concluded that matching quality in 2000 was improved over that in 1990, but that error introduced during matching operations remained and contributed to an overstatement of A.C.E. estimates of the census undercounts. The studies provided some categorical descriptions of the types of matching errors measured, but did not identify the procedural causes, if any, for those errors. Furthermore, despite the improvement in matching reported by the bureau, A.C.E. results were not used to adjust the census due to these errors as well as other remaining uncertainties. The bureau has reported that additional review and analysis on these remaining uncertainties would be necessary before any potential uses of these data can be considered. 
The computer matching phase started 3 days later than scheduled and finished 1 day late due to the delayed delivery of census files. In response, bureau employees who conducted computer matching worked overtime hours to make up lost time. Furthermore, A.C.E. regional offices did not receive clusters in the prioritized order that they had requested. The reason for prioritizing the clusters was to provide as much time as possible for field follow-up on clusters in the most difficult areas. Examples of areas that were expected to need extra time were those with staffing difficulties, larger workloads, or expected weather problems. Based on the bureau’s Master Activities Schedule, the delay did not affect the schedule of subsequent matching phases. Also, bureau officials stated that although clusters were not received in prioritized order, field follow-up was not greatly affected because the first clerical matching phase was well staffed and sent the work to regional offices quickly. On the first full day of clerical matching, the bureau identified a programming error in the quality assurance management system, which made some clerks and technicians who had not passed quality assurance reviews appear to have passed. In response, bureau officials manually overrode the system. Bureau officials said the programming error was fixed within a couple of days, but could not explain how the programming error occurred. They stated that the software system used for clerical matching was thoroughly tested, although it was not used in any prior censuses or census tests, including the Dress Rehearsal. As we have previously noted, programming errors that occur during the operation of a system raise questions about the development and acquisition processes used for that system. A programming error caused last names to be printed improperly on field follow-up forms for some households containing multiple last names. In situations in which regional office staff may not have caught the printing error and interviewers may have been unaware of the error—such as when those questionnaires were completed before the problem was discovered—interviews may have been conducted using the wrong last name, thus recording misleading information. According to bureau officials, in response, the bureau (1) stopped printing questionnaires on the date officials were notified about the misprinted questionnaires, (2) provided information to regional offices that listed all field follow-up housing units with multiple names that had been printed prior to the date the problem was resolved, and (3) developed procedures for clerical matchers to address any affected questionnaires being returned that had not been corrected by regional office staff. While the problem was being resolved, productivity in the A.C.E. regional offices was initially slowed for approximately 1 to 4 days, yet field follow-up was completed on time. Bureau officials inadvertently introduced this error when they addressed a separate programming problem in the software. Bureau officials stated that they tested this software system; however, the system was not given a trial run during the Census Dress Rehearsal in 1998. According to bureau officials, the problem did not affect data quality because it was caught early in the operation and follow-up forms were edited by regional staff. However, the bureau could not determine the exact day of printing for each questionnaire and thus did not know exactly which households had been affected by the problem.
According to bureau data, the problem could have potentially affected over 56,000 persons, or about 5 percent of the A.C.E. sample. In addition to the problem printing last names, the bureau experienced other printing problems. According to bureau staff, field follow-up received printed questionnaires that were (1) missing pages, (2) missing reference notes written by clerical matchers, and (3) missing names and/or having some names printed more than once for some households of about nine or more people. According to bureau officials, these problems were not resolved during the operation because they were reported after field follow-up had started and the bureau was constrained by deadlines. Bureau officials stated that they believed that these problems would not significantly affect the quality of data collected or match code results, although bureau officials were unable to provide data that would document either the extent, effect, or cause of these problems. The bureau’s regional offices submitted questionnaires containing an incomplete “geocoding” section. This section was to be used in instances when the bureau needed to verify whether a housing unit (1) existed on Census Day and (2) was correctly located in the A.C.E. sample area. Although the bureau returned 48 questionnaires during the first 6 days of the operation to the regional offices for completion, bureau officials stated that after that they no longer returned questionnaires to the regional offices because they did not want to delay the completion of field follow-up. A total of over 10,000 questionnaires with “geocoding” sections were initially sent to the regional offices. The bureau did not have data on the number, if any, of questionnaires that the regional offices submitted incomplete beyond the initial 48. The bureau would have coded as “unresolved” the persons covered by any incomplete questionnaires. As previously stated, the bureau later imputed the match code results for these records using statistical methods, which could introduce uncertainty into estimates of census over- or undercount rates. According to bureau officials, this problem was caused by (1) not printing a checklist of all sections that needed to be completed by interviewers, (2) no link from any other section of the questionnaire to refer interviewers to the “geocoding” section, and (3) field supervisors following the same instructions as interviewers to complete their reviews of field follow-up forms. However, bureau officials believed that the mistake should have been caught by regional office reviews before the questionnaires were sent back for processing. About a week after the second clerical matching phase began, officials requested and were granted a 5-day extension to complete the phase. According to bureau officials, the operation could have been completed by the November 30, 2000, deadline as planned, but they decided to take extra steps to improve data quality that required additional time. According to bureau officials, the delay in completing person matching had no effect on the final completion schedule, only the start of subsequent A.C.E. processing operations. Matching A.C.E. and census records was an inherently complex and labor-intensive process that often relied on the judgment of trained staff, and the bureau prepared itself accordingly.
For example, the bureau provided extensive training for its clerical matchers, generally provided thorough documentation of the process and criteria to be used in carrying out their work, and developed quality assurance procedures to cover its critical matching operations. As a result, our review identified few significant operational or procedural deviations from what the bureau planned, and the bureau took timely action to address them. Nevertheless, our work identified opportunities for improvement. These opportunities relate to the lack of written documentation showing how cutoff scores were determined and to programming errors in the clerical matching software and the software used to print field follow-up forms. Without written documentation, the bureau will be less likely to capture lessons learned on how cutoff scores should be applied and on their impact on clerical matching productivity. Moreover, the discovery of programming errors so late in the operation raises questions about the development and acquisition processes used for the affected A.C.E. computer systems. In addition, one lapse in procedures may have resulted in incomplete geocoding sections, which were to verify that the person being matched was in the geographic sample area. The collective effect that these deviations may have had on the accuracy of A.C.E. results is unknown. Although the bureau has concluded that A.C.E. matching quality improved compared to 1990, the bureau has reported that error introduced during matching operations remained and contributed to an overstatement of the A.C.E. estimate of census undercounts. To the extent that the bureau employs an operation similar to A.C.E. to measure the quality of the 2010 Census, it will be important for the bureau to determine the impact of the deviations and explore operational improvements, in addition to the research it might carry out on other uncertainties in the A.C.E. results. As the bureau documents its lessons learned from the 2000 Census and continues its planning efforts for 2010, we recommend that the secretary of commerce direct the bureau to take the following actions: 1. Document the criteria and the logic that bureau staff used during computer matching to determine the cutoff scores for matched, possibly matched, and unmatched record pairs. 2. Examine the bureau’s system development and acquisition processes to determine why the problems with A.C.E. computer systems were not discovered prior to deployment of these systems. 3. Determine the effect that the printing problems may have had on the quality of data collected for affected records, and thus the accuracy of A.C.E. estimates of the population. 4. Determine the effect that the incomplete geocoding section of the questionnaires may have had on the quality of data collected for affected records, and thus the accuracy of A.C.E. estimates of census undercounts. The secretary of commerce forwarded written comments from the U.S. Census Bureau on a draft of this report. (See appendix II.) The bureau had no comments on the text of the report and agreed with, and is taking action on, two of our four recommendations. In responding to our recommendation to document the criteria and the logic that bureau staff used during computer matching to determine cutoff scores, the bureau acknowledged that such documentation may be informative and that such documentation is under preparation. We look forward to reviewing the documentation when it is complete.
In responding to our recommendation to examine system development and acquisition processes to determine why problems with the A.C.E. computer systems were not discovered prior to deployment, the bureau responded that despite extensive testing of A.C.E. computer systems, a few problems may remain undetected. The bureau plans to review the process to avoid such problems in 2010, and we look forward to reviewing the results of their review. Finally, in response to our two recommendations to determine the effects that printing problems and incomplete questionnaires had on the quality of data collected and the accuracy of A.C.E. estimates, the bureau responded that it did not track the occurrence of these problems because the effects on the coding process and accuracy were considered to be minimal since all problems were identified early and corrective procedures were effectively implemented. In our draft report we recognized that the bureau took timely corrective action in response to these and other problems that arose during person matching. Yet we also reported that bureau studies of the 2000 matching process had concluded that matching error contributed to error in A.C.E. estimates without identifying procedural causes, if any. Again, to the extent that the bureau employs an operation similar to A.C.E. to measure the quality of the 2010 Census, it will be important for the bureau to determine the impact of the problems and explore operational improvements as we recommend. We are sending copies of this report to other interested congressional committees. Please contact me on (202) 512-6806 if you have any questions. Key contributors to this report are included in appendix III. To address our three objectives, we examined relevant bureau program specifications, training manuals, office manuals, memorandums, and other progress and research documents. We also interviewed bureau officials at bureau headquarters in Suitland, Md., and the bureau’s National Processing Center in Jeffersonville, Ind., which was responsible for the planning and implementation of the person matching operation. In addition, to review the process and criteria involved in making an A.C.E. and census person match, we observed the match clerk training at the National Processing Center and a field follow-up interviewer training session in Dallas, Tex. To identify the results of the quality assurance procedures used in key person matching phases, we analyzed operational data and reports provided to us by the bureau, as well as extracts from the bureau's management information system, which tracked the progress of quality assurance procedures. Other independent sources of the data were not available for us to use to test the data that we extracted, although we were able to corroborate data results with subsequent interviews of key staff. Finally, to examine how, if at all, the matching operation deviated from what was planned, we selected 11 locations in 7 of the 12 bureau census regions (Atlanta, Chicago, Dallas, Denver, Los Angeles, New York, and Seattle). At each location we interviewed A.C.E. workers from November through December 2000. The locations selected for field visits were chosen primarily for their geographic dispersion (i.e., urban or rural), variation in type of enumeration area (e.g., update/leave or list enumerate), and the progress of their field follow-up work. In addition, we reviewed the match code results and field follow-up questionnaires from 48 sample clusters. 
These clusters were chosen because they corresponded to the local census areas we visited and contained records reviewed during every phase of the person matching operation. The results of our field visits and our cluster review are not generalizable nationally to the person matching operation. We performed our audit work from September 2000 through September 2001 in accordance with generally accepted government auditing standards. In addition to those named above, Ty Mitchell, Lynn Wasielewski, Steven Boyles, Angela Pun, J. Christopher Mihm, and Richard Hung contributed to this report. | The U.S. Census Bureau conducted the Accuracy and Coverage Evaluation (ACE) survey to estimate the number of people missed, counted more than once, or otherwise improperly counted in the 2000 Census. On the basis of uncertainty in the ACE results, the Bureau's acting director decided that the 2000 Census tabulations should not be adjusted in order to redraw the boundaries of congressional districts or to distribute billions of dollars in federal funding. Although ACE was generally implemented as planned, the Bureau found that it overstated census undercounts because of an error introduced during matching operations and other uncertainties. The Bureau concluded that additional review and analysis of these uncertainties would be needed before the data could be used. Matching more than 1.4 million census and ACE records involved the following four phases, each with its own matching procedures and multiple layers of review: computer matching, clerical matching, field follow-up, and clerical matching. The Bureau applied quality assurance procedures to each phase of person matching. Because the quality assurance procedures had failure rates of less than one percent, the Bureau reported that person matching quality assurance was successful at minimizing errors. Overall, the Bureau carried out person matching as planned, with few procedural deviations.
GAO identified areas for improving future ACE efforts, including more complete documentation of computer matching decisions and better assurance that problems do not arise with the bureau's automated systems. |
The C-17 is being developed and produced by McDonnell Douglas. The Congress has authorized procurement of 40 C-17 aircraft through fiscal year 1996. As of October 1, 1995, McDonnell Douglas had delivered 22 production aircraft to the Air Force. In November 1995, the Department of Defense (DOD) announced plans to buy an additional 80 C-17 aircraft. In addition to procuring the aircraft, the Air Force is purchasing spare parts to support the C-17. The Air Force estimates the total cost for initial spares—the quantity of parts needed to support and maintain a weapon system for the initial period of operation—for the first 40 C-17s to be about $888 million. In January 1994, we reported that the Air Force had frequently ordered C-17 spare parts prematurely. We noted that premature ordering occurred because the Air Force used inaccurate and outdated information, bought higher quantities than justified, or did not follow regulations governing the process. As a result, DOD revised its guidance to limit the initial procurement of spares, and the Air Force canceled orders for millions of dollars of C-17 parts. Initial spares for the C-17 are being procured under two contracts. Some are being provided under the C-17 development contract through interim contractor support. That support, which started in mid-1993, involves providing spares and technical support for two C-17 squadrons through June 1996. As of May 31, 1995, the Air Force had spent about $198 million for interim contractor support. The remaining initial spares are being procured under contract F33657-81-C-2109 (referred to in this report as contract-2109). Under this contract, the Air Force, as of May 31, 1995, had obligated $120 million for initial spares, but negotiated prices for only about $29 million of the spares. The $91 million balance was the amount obligated for parts ordered on which prices had not been negotiated. McDonnell Douglas produces some spare parts in its facilities at the Transport Aircraft Division at Long Beach, California, where the C-17 is being produced, or at other locations, such as its Aerospace-East Division at St. Louis. It also subcontracts for the production of parts. The subcontractors may be responsible for all aspects of part production or McDonnell Douglas may furnish materials or complete required work. The Air Force paid higher prices for 33 spare parts than appears reasonable when compared to McDonnell Douglas’ historical costs. The 33 spare parts were ordered under contract-2109 and manufactured by McDonnell Douglas’ St. Louis Division. The Long Beach Division had previously purchased them from subcontractors for production aircraft at much lower costs. The St. Louis Division’s estimated costs were from 4 to 56 times greater than the prices that Long Beach had paid outside vendors several years earlier. The parts were in sections of the C-17 assembled by the Long Beach Division for the first four aircraft, but assembled by the St. Louis Division for subsequent aircraft. For 10 parts, McDonnell Douglas had previously purchased the complete part from a subcontractor. For the other 23 parts, it had furnished material to a subcontractor that manufactured the part. While our examination of price increases was limited to 33 spare parts, an Air Force-sponsored should-cost review identified potential savings of $94 million for the C-17 program if work is moved from McDonnell Douglas’ St. Louis Division to outside vendors or other McDonnell Douglas facilities. 
Air Force officials said that the $94 million savings related only to components for production aircraft. They said that the savings would be higher if spare parts were included. We identified 10 parts—7 hinges on the air inlet door to the C-17’s air conditioning system, 2 cargo door hooks, and a door handle on the C-17’s vertical stabilizer access door—that McDonnell Douglas had previously purchased complete from a subcontractor at much lower costs. Information on previous purchase costs, McDonnell Douglas’ manufacturing costs, and the price that the Air Force paid for each of these spare parts is included in appendix I. Details on one of the hinges follow. The Air Force paid $2,187 for one hinge on the air inlet door to the C-17’s air conditioning system. The hinge (see fig. 1) is aluminum, about 4 inches long, 2 inches wide, and ranges from about 1/16 of an inch to 1-3/8 inches thick. The Long Beach Division, which assembled the air conditioning inlet door for initial production, purchased 14 of these hinges from a subcontractor in 1988 for use on production aircraft at $30.60 each. It had also paid the vendor $541 for first article inspection and $2,730 for reusable special tooling. These costs, however, would not have been incurred on future orders. In 1992, McDonnell Douglas transferred the air conditioning inlet door assembly work to its St. Louis Division and that division made the hinge for production aircraft and for the spare part order. The estimated cost for the spare hinge was $1,745, and, with overhead, profit, and warranty factors, the Air Force paid $2,187 for it. The fact that the subcontractor had made the hinge from a special casting while the St. Louis Division machined the hinge from bar stock could be one cause of the higher price. We identified 23 parts—21 different cargo door hooks and 2 different hinge assemblies—where McDonnell Douglas had previously furnished material to a subcontractor who produced the parts at much lower costs. Information on previous purchase costs and McDonnell Douglas manufacturing costs is included in appendix II. Details on one of the door hooks follow. The Air Force paid $12,280 for one of the hooks. The hook (see fig. 2) is made of steel and is about 7 inches high, 3-1/2 inches wide, and about 4-1/2 inches thick. For the early production aircraft, the Long Beach Division had furnished material valued at $715 to an outside vendor in 1992 who manufactured this hook for $389 (exclusive of the material value). After initially using hooks for production aircraft provided from the Long Beach Division’s inventory, the St. Louis Division made them starting with production aircraft number 12. For the spares order under contract-2109, the St. Louis Division estimated “in-house” manufacturing costs (exclusive of material costs) at about $8,842. McDonnell Douglas officials said that the primary reason for moving various work from the Long Beach Division to the St. Louis Division was to recover from being behind schedule and that sufficient time was not available to procure parts from vendors. McDonnell Douglas officials also said that now that production deliveries are on schedule, they will be reviewing parts to identify the most affordable and effective manufacturing source and that 17 of the 33 parts have been identified as candidates to move out of St. Louis to achieve lower C-17 costs.
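The figures above permit a rough check of the reported cost-growth range. The following worked computation uses only values from the text and excludes the vendor's one-time charges for first article inspection and special tooling because, as noted above, those would not have recurred on future orders:

    # Hinge on the air conditioning inlet door
    hinge_vendor_price  = 30.60     # per unit, paid to the subcontractor in 1988
    hinge_in_house_cost = 1745.00   # St. Louis Division estimated cost for the spare
    hinge_price_paid    = 2187.00   # with overhead, profit, and warranty factors

    print(hinge_in_house_cost / hinge_vendor_price)    # ~57.0, i.e., about 56 times
                                                       # greater, the top of the 4-to-56 range
    print(hinge_price_paid / hinge_in_house_cost - 1)  # ~0.25 loading above estimated cost

    # Cargo door hook (labor only; material was furnished separately)
    hook_vendor_labor  = 389.00     # paid to the outside vendor in 1992
    hook_in_house_cost = 8842.00    # St. Louis Division estimate, exclusive of material

    print(hook_in_house_cost / hook_vendor_labor)      # ~22.7 times the vendor's charge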
DOD advised us that DPRO officials at McDonnell Douglas had estimated the cost difference between production by McDonnell Douglas versus subcontractors for the 33 parts to be $141,000 and, after further analysis, had determined that $65,000 was excessive. McDonnell Douglas refunded that amount in December 1995. Our review of the data submitted to support the pricing of selected spare parts orders showed that McDonnell Douglas’ St. Louis Division used outdated pricing information when proposing costs under intercompany work orders with the Long Beach Division for the C-17 spares. The St. Louis Division used labor variance factors based on the second quarter of 1992 for proposing labor hours required for items produced in 1994. Most of these orders were negotiated with DCMC in mid-1994. As of May 31, 1995, DCMC had negotiated prices for 95 contract items made by the St. Louis Division with a total negotiated value of about $966,000. We reviewed data for 37 of these items with a negotiated total value of $347,000. We reviewed only labor variance factors and did not address other rates and factors such as the miscellaneous production factor. We found that the selected items were overpriced by $117,000, or about 34 percent of the negotiated value of the items reviewed. For example, McDonnell Douglas, in developing the basic production labor hours estimate for a hinge assembly, multiplied machine shop “target” hours by a variance factor of 2.33 and sheet metal target hours by a variance factor of 2.5. Data for the first quarter of 1994 showed a conventional machine shop variance of 1.26 and a sheet metal variance of 1.60. Because most work for this item took place in the first half of 1994 and the prices were negotiated in June 1994, the 1994 variance rates should have been used for pricing the item. Instead, McDonnell Douglas used rates based on the second quarter of 1992, which were higher. A price of $42,587 was negotiated based on the 1992 data. Using the data for the first quarter of 1994, the price would have been $26,458, a difference of $16,129, or about 38 percent lower than the negotiated price (see the worked check following this passage). After we brought these issues to the attention of DOD officials, they acknowledged that more current labor variance data should have been used and sought a refund. McDonnell Douglas made a refund of $117,000 in December 1995. Our review indicated that the profits awarded for some orders under contract-2109 appear higher than warranted. DFARs requires the use of a structured approach for developing a government profit objective for negotiating a profit rate with a contractor. The weighted guidelines approach involves three components of profit: contract type risk, performance risk, and facilities capital employed. The contracting officer is required to assess the risk to the contractor under each of the components and, based on DFARs guidelines, calculate a profit objective for each one and, thus, an overall profit objective. As a general matter, the greater the degree of risk to the contractor, the higher the profit objective. For example, the profit objective for a fixed-price contract normally would be higher than that for a cost-type contract because the cost risk to the contractor is greater under the former. Consequently, in its subsequent price negotiations, the government normally will accept a higher profit rate when a contractor is accepting higher risks. The prices of spare orders under contract-2109 were to be negotiated individually.
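The worked check referenced above reproduces the percentages cited for the hinge assembly and for the 37 items reviewed, using only figures from the text:

    negotiated_price = 42587   # based on second-quarter 1992 labor variance factors
    repriced_1994    = 26458   # using first-quarter 1994 labor variance factors

    difference = negotiated_price - repriced_1994
    print(difference)                      # 16129
    print(difference / negotiated_price)   # ~0.379, the "about 38 percent" cited above

    # Across the 37 St. Louis Division items reviewed:
    print(117000 / 347000)                 # ~0.337, the "about 34 percent" overpricing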
However, rather than calculate separate profit objectives and negotiate profit rates for individual orders, DPRO and McDonnell Douglas negotiated two predetermined profit rates, documented in a memorandum of agreement, that would apply to subsequent pricing actions. The profit rates were 10 percent for parts that McDonnell Douglas purchased from subcontractors, and 15 percent for spare parts that McDonnell Douglas manufactured. Our review indicates that the use of these rates for many later-priced spares resulted in higher profits for the contractor than would have been awarded had objectives been calculated and rates negotiated when the orders actually were priced. Based on profit rates of 6 percent for purchased parts and 13 percent for parts made in-house, both of which could have been justified according to our calculations, McDonnell Douglas would have received less profit. For example, applying these lower profit rates to the $29 million of negotiated spare part orders as of May 31, 1995, would have reduced the company’s profit by $860,000. After we presented our information in October 1995, DCMC directed that the memorandum of agreement, which was scheduled to either expire or be extended on November 1, 1995, be allowed to expire and that future profit objectives be established on an order-by-order basis. DOD officials agreed that a single profit analysis should not be used for C-17 spare parts. In developing a profit objective for contract-2109, the contracting officer assigned a value for contract type risk based on firm, fixed-price contracts. However, negotiations of prices for spare part orders were conducted, in many cases, after the vendor or McDonnell Douglas had incurred all costs and delivered the spares. These conditions lowered the contractor’s risk for those parts far below what normally would be expected for a firm, fixed-price contract. The risks were more like those that exist for cost-type contracts, for which the weighted guidelines provide lower profit objective values. Of the 40 parts made in-house that we reviewed, McDonnell Douglas had delivered 25 (63 percent) of the parts at the time of price negotiations with the government. Five of the remaining 15 items were delivered during the month of price negotiations, and all were delivered within 3-1/2 months of price negotiations. Of the 55 “buy” spare parts we reviewed, McDonnell Douglas had established prices with its vendor for 45 (82 percent) of the parts. Using one order as an example, McDonnell Douglas (1) negotiated spare parts prices with its subcontractor on January 25, 1993; (2) negotiated prices with the government on April 11, 1994; and (3) scheduled the parts for delivery on May 27, 1994. Thus, for both make and buy items, a substantial portion of the contractor’s costs had been known at the time of the price negotiations. Section 217.7404-6 of DFARs requires that profit allowed under unpriced contracts reflect the reduced risk associated with contract performance prior to negotiations. Consistent with this requirement, the weighted guidelines section (215.971-3) requires the contracting officer to assess the extent to which costs have been incurred prior to definitization of a contract action and assure profit is consistent with contractor risk. In fact, the guidelines provide that if a substantial portion of the costs has been incurred prior to definitization, the contracting officer may assign a contract type risk value as low as zero, regardless of contract type. 
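The $860,000 figure cited above depends on how the $29 million of negotiated orders was split between purchased parts and parts made in-house, a split the report does not state. The following sketch shows one split that is consistent with the total; the dollar amounts assigned to each category are assumptions for illustration only:

    buy_value  = 14_000_000   # assumed portion of the $29 million for purchased parts
    make_value = 15_000_000   # assumed portion for parts made in-house

    # Rate reductions: 10 to 6 percent on purchased parts, 15 to 13 percent on made parts
    reduction = (0.10 - 0.06) * buy_value + (0.15 - 0.13) * make_value
    print(reduction)   # 860000.0, matching the profit reduction cited above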
A DPRO representative said that, in negotiating the memorandum of understanding, DPRO knew that the two profit rates for later application would not be perfect in every case. He said, however, that they were expected to be off in one direction as often as in the other, creating an overall fair agreement. The representative noted, for example, that while deliveries for the orders we reviewed were near the negotiation dates, the memorandum’s rates also would apply to orders with deliveries more than 2 years in the future, where minimal costs have been incurred. In addition, the representative stated that a significant number of parts would be undergoing design changes because a baseline configuration for the C-17 did not exist. The representative explained that McDonnell Douglas is responsible for replacing spares affected by design changes until 90 days after reliability, maintainability, and availability testing, which was completed on August 5, 1995, and that any additional cost for such replacements would have to be absorbed by McDonnell Douglas. Finally, the representative noted that the minimal cost history on C-17 spares would indicate a higher than normal contract type risk. We have no evidence to support the DPRO official’s view that profits based on the rates in the memorandum of agreement would balance out over time. In fact, DCMC let the agreement lapse and will calculate profit objectives and negotiate profit rates on an order-by-order basis. In addition, we noted that McDonnell Douglas initially received a 2-percent warranty fee on contract-2109 orders to cover both the risk of design changes and provide a standard 180-day commercial warranty. Furthermore, the profit agreement stated that McDonnell Douglas could submit additional warranty substantiation at any time and, if the data supported a different percent for warranty, the government would consider adjusting the percentage. Thus, the warranty fee is the contract mechanism the parties agreed to use to address the risks of replacement parts because of design changes. The contracting officer, in developing a profit objective for buy orders (complete spare parts purchased from an outside vendor) under contract-2109, used a higher rate for performance risks than was warranted. The DFARs’ weighted guidelines provide both standard and alternate ranges for the contracting officer to use in calculating performance risk, which is the component of profit objective that addresses the contractor’s degree of risk in fulfilling the contract requirements. The standard range applies to most contracts, whereas the higher alternate is for research and development and service contracts that involve low capital investment in buildings and equipment. The guidelines provide that if the alternate range is used, the contracting officer should not give any profit for the remaining component, facilities capital employed, which focuses on encouraging and rewarding aggressive capital investment in facilities that benefit DOD. DCMC officials said that the alternate range was used in calculating the performance risk component on contract-2109 because McDonnell Douglas’ system could not provide an estimate to be used for purposes of calculating the facilities capital component. DPRO officials said that since the negotiation, McDonnell Douglas has developed the means to estimate facilities capital employed on its spares proposals. 
They said that using the standard range for performance risk and including facilities capital employed for spares orders yields a profit objective that is substantially the same as the profit objective calculated using the alternate range for performance risk. DOD concurred that DPRO should not have utilized the alternate range for performance risk, but repeated the DPRO’s assertion that using the standard range and including facilities capital employed yields essentially the same results. We reviewed DCMC’s data and found that using the alternate range for the performance risk component does not result in a substantially similar profit objective to that calculated by applying a factor for facilities capital employed. The contracting officer’s use of the alternate range for performance risk, combined with the use of a fixed-price value for contract type, led to the negotiation of a profit rate of 10 percent for the buy orders; in contrast, we calculated that using a cost-type contract risk factor, the standard range for performance risk, and McDonnell Douglas’ estimate of facilities capital employed would have resulted in an overall profit objective of 6 percent for the buy orders. In commenting on a draft of this report, DOD said that it had taken appropriate action to address our finding of overpricing. In addition to recovering $182,000, DOD indicated that DPRO at McDonnell Douglas will now screen all spares orders containing items to be made in-house to (1) look for possible conversion to buy items and (2) ensure that labor data is correct for all items made in the St. Louis Division. Moreover, DOD stated that DPRO no longer relies on a single profit analysis and, by completing a separate analysis for each order, DPRO will address the contract risk associated with each order. DOD acknowledged that it is possible to take issue with the contracting officer’s selection of risk factors and that DPRO should not have used the alternative range for performance risk in its profit analysis. However, DOD asserts that it would be misleading to infer that unjustified profits were paid to the contractor. We do not infer that the contractor received $860,000 in unjustified profits. Determining the appropriate amount of profits is a matter to be negotiated between DPRO and the contractor. However, we noted that (1) lower rates were justified under the weighted guidelines and (2) rates of 6 percent for purchased parts and 13 percent for parts made in-house could be justified. While the results of our review cannot be projected to all C-17 spare parts, using the lower profit rates for the $29 million of negotiated spare parts orders as of May 31, 1995, would have reduced the company’s profit by $860,000. Our subsequent analysis raises some questions about the DOD statement that DPRO, by making a separate profit analysis for each order, will address the contract type risk associated with each order. Our review of an order negotiated in January 1996 based on a separate profit analysis indicated that the DPRO’s profit analysis continues not to reflect the reduced risk when most costs have been incurred prior to price negotiations. While the negotiated profit rate was 8.6 percent, or 1.4 percentage points lower than the previously negotiated rate, the amount of profit allowed for contract type risk continues to appear higher than justified by the weighted guidelines and DFARs.
In this regard, DPRO noted that McDonnell Douglas’ cost “amounts to only 46 hundredths of one percent” and “you are being paid all your costs and the parts have already been shipped, thereby reducing your risk to a very low degree.” However, the contract risk factors were at the midpoint range and higher for a firm, fixed-price contract. The stated reason for this was that the design could change, necessitating a recall. While DPRO discontinued using the memorandum of understanding profit rates, we remain concerned that the negotiated profit rates may not reflect the reduced contract type risk when essentially all costs have been incurred. DOD’s comments are reprinted in their entirety in appendix III. To select spare parts for our review, we analyzed reports developed by McDonnell Douglas’ data system that included historical and current information on spare parts orders—for example, the negotiation date, negotiation amount, and delivery date on current/previous orders. For our review, we only considered spare parts orders for which prices had been negotiated as of May 31, 1995. As of that date, prices for orders involving 696 spare parts had been negotiated, with a value of about $29 million. We selected spare parts for a more detailed review based on current/previous cost, intrinsic value, and nomenclature. Our selection of parts was judgmental and our results cannot be projected to the universe of C-17 parts. We reviewed the contractor’s and the DPRO’s contract and pricing files, and discussed the pricing issues with selected contractor and DCMC officials. As a result of rather significant cost increases for a number of spare parts that had the manufacturing/assembly effort transferred to the contractor’s plant in St. Louis, we obtained additional documentation from the contractor’s plant in St. Louis and DPRO. We reviewed the DFARs guidance relating to the use of weighted guidelines in establishing a profit objective. We also reviewed the memorandum of agreement that was negotiated by DPRO for contract-2109 and discussed the basis for the negotiated profits with DOD and DPRO officials. In assessing the value assigned to contract type risk, we reviewed data on 95 spare parts with a total negotiated price of about $3 million out of 696 spare parts with a total negotiated price of about $29 million, or about 14 percent of the parts. Our review of selected spare parts cannot be projected to all C-17 spare parts. However, to illustrate the potential effect of lower profit rates, we calculated a potential reduction using spare parts orders negotiated as of May 31, 1995. We conducted our review between November 1994 and September 1995 in accordance with generally accepted government auditing standards. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies to the Secretaries of Defense and the Air Force; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others upon request. If you or your staff have any questions about this report, please contact me on (202) 512-4841. The major contributors were David Childress, Larry Aldrich, Kenneth Roberts, and Larry Thomas.
| Pursuant to a congressional request, GAO reviewed the pricing of certain spare parts for the C-17 aircraft, focusing on those spare parts that experienced significant price increases when McDonnell Douglas decided to produce them in-house rather than purchase them from outside vendors. GAO found that: (1) GAO's review indicates that the Air Force paid higher prices for spare parts than is justified; (2) for 33 selected spare parts formerly procured under subcontracts, costs are from 4 to 56 times higher after McDonnell Douglas moved the work in-house; (3) for example, McDonnell Douglas paid an outside vendor $389 to machine a door hook that it subsequently machined in-house at its St. Louis Division at an estimated cost of $8,842; (4) costs for some spare parts are higher than justified because McDonnell Douglas used outdated pricing data that overstated its proposed prices; (5) in developing the proposed costs of selected spare parts, McDonnell Douglas used outdated labor variance factors, which resulted in prices being overstated by 34 percent ($117,000) for 37 parts; (6) the profits awarded on some orders under contract-2109 appear higher than warranted; (7) the contracting officer used Defense Federal Acquisition Regulation Supplement guidelines to calculate profit objectives and negotiate profit rates with the contractor that are documented in a memorandum of agreement; (8) the contracting officer developed the government's objectives based on the risks of a fixed-price contract; (9) however, most costs were known when the order prices were negotiated; therefore, the contractor's risks were lower than in a fixed-price environment; (10) also, the contracting officer used a higher performance risk factor than appears appropriate when McDonnell Douglas is buying spare parts from subcontractors; and (11) based on profit rates that GAO's calculations suggest could have been justified, McDonnell Douglas would have received less profit. GAO also found that: (1) as GAO discussed its findings with Department of Defense (DOD) officials during GAO's review, they began taking actions to address those findings; (2) for example, the Defense Contract Management Command's Defense Plant Representative Office at McDonnell Douglas calculated that the overpricing of spare parts was $182,000 and recovered that amount from McDonnell Douglas in December 1995; and (3) DOD stated that other actions are being taken to prevent these overpricing problems on other spare parts.
Unlike conventional, or subtractive, manufacturing processes—such as drilling or milling—that create a part or product by cutting away material from a larger piece, additive manufacturing builds a finished piece in successive layers, generally without the use of molds, casts, or patterns. Additive manufacturing can potentially lead to less waste material in the manufacturing process, as shown in figure 1. ASTM International, an international standards development organization, has identified seven categories of additive manufacturing processes to group the different types of technologies used, as shown in table 1. According to DOD officials, the first six of the categories described are the ones of greatest use to DOD. In August 2012, as part of a presidential initiative focused on advanced manufacturing, America Makes—the National Additive Manufacturing Innovation Institute—was established as a public-private partnership between federal government agencies (including DOD), private industry, and universities to collaboratively address additive manufacturing challenges; accelerate the research, development, and demonstration of additive manufacturing; and transition that technology to the U.S. manufacturing sector. According to the government program manager of America Makes, funding to establish America Makes consisted of a federal government investment of $55 million (fiscal years 2012 through 2017), and it is managed by the U.S. Air Force Research Laboratory. The official also stated that America Makes receives additional funding through publicly and privately funded projects. Multiple DOD components—at the OSD, military department (Army, Navy, and Air Force), Defense Logistics Agency, and Defense Advanced Research Projects Agency levels—are involved in additive manufacturing efforts. At the OSD-level, the Office of the Assistant Secretary of Defense for Research and Engineering develops policy and provides guidance for all DOD activities on the strategic direction for defense research, development, and engineering priorities and coordinates with the Office of the Deputy Assistant Secretary of Defense for Manufacturing and Industrial Base Policy to leverage independent research and development activities, such as additive manufacturing research activities. The Defense Advanced Research Projects Agency’s Defense Sciences Office and the military departments—through the U.S. Army Research, Development and Engineering Command (RDECOM); the Office of Naval Research; and the U.S. Air Force Research Laboratory—have laboratories to conduct additive manufacturing research activities. According to Navy officials, the military depots use additive manufacturing for a variety of applications using various material types. These efforts largely include polymer, metal, and ceramic-based additive manufacturing processes for rapid prototyping, tooling, repair, and development of non- critical parts. The DOD components lead and conduct activities related to several types of technology research and development and advancements. Additive manufacturing is one of these activities, and the components are involved to the extent that some of the broader activities include additive manufacturing. See appendix II for a more detailed description of the key DOD components involved with additive manufacturing. 
In October 2014, with the assistance of the National Academies, we convened a forum of officials from federal government agencies, including DOD; private-sector organizations; academia; and non-governmental organizations to discuss the use of additive manufacturing for producing functional parts, including opportunities, key challenges, and key considerations for any policy actions that could affect the future use of additive manufacturing for producing such parts. In June 2015 we issued a report summarizing the results of that forum. During the forum, participants noted that the use of additive manufacturing has produced benefits such as reduced time to design and produce functional parts; the ability to produce complex parts that cannot be made with conventional manufacturing processes; the ability to use alternative materials with better performance characteristics; and the ability to create highly customized, low-volume parts. Furthermore, forum participants identified as a key challenge the need to ensure the quality of functional parts—for example, ensuring that manufacturers can repeatedly make the same part and meet precision and consistency performance standards on both the same machine and different machines. During the forum, participants also indicated that before a product can be certified, manufacturers must qualify the materials and processes used to make the part, which involves manufacturers conducting tests and collecting data under very controlled conditions. For example, DOD requires that parts it purchases, such as aircraft engine parts, meet specific standards or performance criteria. Manufacturers might need to have these parts certified to meet DOD’s standards. According to participants in the forum, the National Institute of Standards and Technology is funding research to provide greater assurance with regard to the quality of parts produced using additive manufacturing. It is also leading efforts on additive manufacturing standards through ASTM International’s committee on additive manufacturing, which was formed in 2009. Participants also identified some future applications for additive manufacturing, including constructing tooling for conventional manufacturing lines, enhancing education, and enhancing supply chain management. DOD, in its May 2014 briefing document on additive manufacturing, addressed the three directed elements: (1) potential benefits and constraints of additive manufacturing; (2) how the additive manufacturing process could or could not contribute to DOD missions; and (3) what technologies being developed at America Makes are being transitioned for DOD use. In summary, we found the following: First, the briefing document noted potential benefits and constraints. For example, DOD noted a potential benefit to be derived in some cases from additive manufacturing yielding lighter parts for use in aircraft, thereby potentially lowering fuel costs. DOD also noted a potential constraint reflected in the fact that DOD has yet to establish qualification and certification protocols for additively manufactured parts. Second, the briefing document noted potential contributions to DOD’s mission. For example, DOD noted that additive manufacturing may be capable of producing equivalent replacements for obsolete parts. Third, the briefing document identified America Makes projects that DOD anticipated would be transitioned for DOD use.
For example, DOD noted a collaborative effort involving Pennsylvania State University’s Applied Research Lab, Pratt & Whitney, Lockheed Martin, and General Electric Aviation on thermal imaging for process monitoring and control of additive manufacturing. DOD noted that this project would help enable DOD to ensure process and part repeatability, and would reduce the costs and time for post-process inspection. As shown in table 2, the DOD briefing document noted additional examples of potential benefits and constraints; potential contributions to DOD’s mission; and some other America Makes projects that DOD anticipates will be transitioned for its own use. DOD has taken steps to implement additive manufacturing to improve performance and combat capability, as well as achieve associated cost savings. We obtained information on multiple efforts being conducted across DOD components. For example, the Army used additive manufacturing, instead of conventional manufacturing, to prototype aspects of a Joint Service Aircrew Mask to test a design change, and it reported thousands of dollars saved in design development and potential combat capability improvements. According to a senior Navy official, to improve performance, the Navy additively manufactured circuit card clips for servers on submarines, as needed, because the original equipment manufacturer no longer produced these items. This official also stated that the Navy is researching ways to produce a flight critical part by 2017. According to a senior Air Force official, the Air Force is researching potential performance improvements that may be achieved by embedding devices such as antennas within helmets through additive manufacturing that could enable improved communications. According to Defense Logistics Agency officials, they have taken steps to implement the technology by additively manufacturing the casting cores for blades and vanes used on gas turbine engines. According to a senior Walter Reed National Military Medical Center official, the Center has used additive manufacturing to produce cranial implants for patients. See additional information on DOD’s additive manufacturing efforts below, listed by component. DOD uses additive manufacturing for design and prototyping and for some production—for example, parts for medical applications—and it is conducting research to determine how to use the technology for new applications, such as printing electronic components for circuitry and antennas. DOD is also considering ways in which it can use additive manufacturing in supply chain management, including for repair of equipment and production of parts in the field so as to reduce the need to store parts; for production of discontinued or temporary parts as needed for use until a permanent part can be obtained; and for quickly building parts to meet mission requirements. According to DOD officials, such usage will enable personnel in the field to repair equipment, reduce equipment down-time, and execute their missions more quickly. The U.S. Army RDECOM Armament Research, Development and Engineering Center, according to Army officials, plans to achieve performance improvements by developing an additively manufactured material solution for high-demand items such as nuts and bolts, providing the engineering analysis and qualification data required to make these parts by means of additive manufacturing capability at the point of need in theater.
These officials stated that this solution could potentially reduce the logistics burden on a unit and improve its mission readiness, thus enabling enhanced performance. The U.S. Army RDECOM Armament Research, Development and Engineering Center, in conjunction with the Defense Logistics Agency, evaluated high-demand parts in the Afghanistan Theater of Operations and determined that nuts and bolts were high-demand parts that were often unavailable due to the logistical challenges of shipping parts. According to Army officials, additive manufacturing offers customers the opportunity to enhance value when the lead time needed to manufacture and acquire a part can be reduced. According to these officials, in military logistics operations in theater, reducing the lead time to acquire a part is of paramount importance. As of August 2015 the Center had additively manufactured several nuts and bolts to demonstrate that they can be used in equipment (see figure 2), and it plans to fabricate more of these components for functional testing and qualification. The officials also stated that this testing will verify that the additively manufactured components can withstand the rigors of their intended applications.

The U.S. Army RDECOM Edgewood Chemical Biological Center prototyped aspects or parts of a Joint Service Aircrew Mask (as shown in figure 3) via additive manufacturing to test a design change, which officials stated resulted in thousands of dollars saved and potential combat capability improvements. A new mask ensemble was built using these parts and was worn by pilots to evaluate comfort and range of vision. Once the design was confirmed, the parts were produced using conventional manufacturing. Because this example was in a prototyping phase, only low quantities were needed for developmental testing, and additive manufacturing combined with vacuum silicone/urethane casting allowed the Army to obtain a quantity of parts that was near production level. According to Army officials, if conventional production-level tools (also called injection molds) had been developed and used in this prototyping phase, costs might have ranged from $30,000 to $50,000, with a 3- to 6-month turnaround. These officials stated that additive manufacturing and urethane casting cost a fraction of that amount—approximately $7,000 to $10,000—with a 2- to 3-week turnaround. Had the Army alternatively developed a production tool at this proof-of-concept phase, time and financial investment might have been wasted if the concept had to be changed or started over from the beginning of the design phase, according to the officials.
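Taking the officials' reported ranges at face value, a rough comparison—our illustration using the figures above, not a calculation the Army provided—shows the scale of the potential savings for this prototyping phase:

\[
\begin{aligned}
\text{savings at the low end} &= \$30{,}000 - \$10{,}000 = \$20{,}000 \quad (\approx 67\ \text{percent of}\ \$30{,}000)\\
\text{savings at the high end} &= \$50{,}000 - \$7{,}000 = \$43{,}000 \quad (= 86\ \text{percent of}\ \$50{,}000)
\end{aligned}
\]

On the same basis, turnaround would fall from roughly 13 to 26 weeks (3 to 6 months) to 2 to 3 weeks—on the order of a four- to thirteen-fold reduction.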
The U.S. Army RDECOM Edgewood Chemical Biological Center also achieved combat capability improvements by designing holders (as shown in figure 4), through additive manufacturing, to carry pieces of sensor equipment in the field, according to Army officials. The Center coordinated with the U.S. Army Research Laboratory to develop the holder to carry a heavy hand-held improvised explosive device detection sensor. According to Army officials, the lab wanted a holder that would cradle the handle so as to distribute more weight to the soldier's vest and back rather than confining it to the soldier's forearm. Officials at the Center stated that they had additively manufactured many prototypes that were tested by soldiers at various locations around the country within 1 to 2 weeks. According to Army officials, after achieving positive testing results the Center used additive manufacturing to produce molds that would otherwise have taken weeks or months longer to create via conventional manufacturing. The final products—10,000 plastic holders—were then produced at the Center through conventional manufacturing.

The Army Rapid Equipping Force achieved combat capability improvements by using additive manufacturing, as part of its expeditionary lab capability, to design valve stem covers for a military vehicle, according to Army officials. An Army unit had experienced frequent failures due to tire pressure issues on its Mine-Resistant Ambush Protected vehicles caused by exposed valve stems; for example, during missions, the tires would deflate when the valve stem was damaged by rocks or fixed objects. The interim solution was developed in just over 2 weeks because the additive manufacturing process allowed officials to prototype it quickly, according to Army Rapid Equipping Force officials. As shown in figure 5, the Army additively manufactured prototypes for versions 1 through 4 of the covers before a final part was produced in version 5 through conventional manufacturing processes.

The Army Rapid Equipping Force also achieved combat capability improvements, through its expeditionary lab, by producing prototypes of mounting brackets using additive manufacturing, according to Army officials. Army soldiers using mine detection equipment required illumination around the sensor sweep area during low-visibility conditions in order to avoid striking unseen objects and damaging the sensor. Using additive manufacturing, a mounting bracket for attaching flashlights to mine detectors was prototyped in several versions, as shown in figure 6. According to Army officials, because requests exceeded the expeditionary lab's production capability, the Army coordinated with a U.S. manufacturer to additively manufacture 100 mounting brackets at one-fourth the normal cost.

Tobyhanna Army Depot achieved performance improvements by using additive manufacturing to produce dust caps for radios, according to Army officials, as shown in figure 7. These officials stated that a shortage of these caps had been delaying the delivery of radios to customers. Getting the part from a vendor would have taken several weeks, but the depot additively manufactured 600 dust caps in 16 hours. According to the depot officials, the dollar savings achieved were less important than the fact that they were able to meet their schedule.

The Navy is increasingly focused on leveraging additive manufacturing for the production of replacement parts to improve performance, according to Navy officials. When an original equipment manufacturer was no longer producing parts, the Navy used additive manufacturing to create a supply of replacement parts to keep the fleet ready. This was the case for the Naval Undersea Warfare Center-Keyport, which used additive manufacturing to replace a legacy circuit card clip for servers installed on submarines, as needed (see figure 8). The Navy also installed a 3D printer aboard the USS Essex to demonstrate the ability to additively develop and produce shipboard items such as oil reservoir caps, drain covers, training aids, and tools to achieve performance improvements, according to a senior Navy official (see figure 9).
According to Navy officials, additive manufacturing is an emerging technology, and shipboard humidity, vibration, and motion may create variances in the prints. Navy officials also stated that while there is not a structured plan to install printers on all ships, the Navy's vision is to eventually have the capability across the fleet. These officials stated that the Navy plans to install 3D printers on two additional ships.

The U.S. Air Force Research Laboratory, according to a senior Air Force official, is researching potential performance improvements that may be achieved by (1) additive manufacturing of antennas and electronic components; and (2) embedding devices (such as antennas) within helmets and other structures through additive manufacturing, as shown in figure 10, thereby potentially enabling improved communication. The laboratory has a six-axis printing system that has demonstrated the printing of antennas on helmets and other curved surfaces, according to the official. The official also stated that the laboratory conducts research and development in materials and manufacturing in order to advance additive manufacturing technology such that it can be used affordably and confidently for Air Force and DOD systems. Additionally, according to Air Force officials, Air Force sustainment organizations use additive manufacturing for tooling and prototyping.

According to the December 2014 DOD Manufacturing Technology document, the Defense Logistics Agency projected cost savings of 33 to 50 percent for additively manufacturing casting core tooling, as shown in figure 11. The Defense Logistics Agency—working with industry, including Honeywell, and leveraging the work of military research labs—helped refine a process to additively manufacture the casting cores for engine airfoils (blades and vanes) used on gas turbine engines, according to Defense Logistics Agency officials. According to these officials, printing these casting cores will help reduce the cost and production lead times of engine airfoils, especially when tooling has been lost or scrapped or when there are low-quantity orders for legacy weapon systems.

The Walter Reed National Military Medical Center achieved performance improvements by additively manufacturing items that include customized cranial plate implants, medical tooling, and surgical guides, according to a senior official within the Center. According to the official, additive manufacturing offers a more flexible and applicable solution to aid surgeons and provide benefits to patients. Since 2003, according to the official, the Center has additively manufactured more than 7,000 medical models, more than 300 cranial plates, and more than 50 custom prosthetic and rehabilitation devices and attachments, as well as simulation and training models. The official stated that additive manufacturing enables each part to be made specifically for the individual patient's anatomy, which results in a better fit and an implant that remains structurally sound for a longer period of time, leading in turn to better medical outcomes with fewer side effects. Furthermore, the official stated that additive manufacturing has been used to produce patient-specific parts, such as cranial implants, in 1 to 5 days, and these parts are being used in patients. See figure 12.

DOD uses various mechanisms to coordinate on additive manufacturing efforts, but it does not systematically track components' efforts department-wide.
DOD components share information regarding additive manufacturing through mechanisms such as working groups and conferences that, according to DOD officials, provide opportunities to discuss challenges experienced in implementing additive manufacturing—for example, qualifying materials and certifying parts. However, DOD does not systematically track additive manufacturing efforts, including (1) all projects (henceforth referred to as activities) performed and resources expended by DOD; and (2) the results of these activities, including actual and potential performance and combat capability improvements, cost savings, and lessons learned. DOD has not designated a lead or focal point at the OSD level to systematically track and disseminate the results of these efforts, including activities and lessons learned, department-wide. Without designating a lead to track information on additive manufacturing efforts—a step that would be consistent with federal internal control standards—DOD officials may not obtain the information they need to leverage ongoing efforts.

DOD components use various mechanisms—including coordination groups, DOD collaboration websites (such as milSuite), conferences, and informal meetings—to coordinate information on the successes and challenges of additive manufacturing, along with other aspects of the technology. Some of these groups or meetings focus on broad issues, such as manufacturing technologies in general (in which additive manufacturing may be included), and others focus solely on additive manufacturing. Participants in these groups have included officials from OSD, the military departments, other governmental agencies, private industry, and universities that support the research and development and operational use of additive manufacturing. DOD officials explained that these groups and conferences provide opportunities to discuss challenges experienced by the components in implementing additive manufacturing, such as the challenges of qualifying materials and certifying parts, and to discuss the efforts they are making to address these challenges, as well as other aspects of additive manufacturing. See table 3 for examples of eight coordination groups we identified that meet to discuss ongoing additive manufacturing efforts, including ways to address technical challenges.

Furthermore, DOD components participate in defense manufacturing conferences and defense additive manufacturing symposiums; informal meetings; and America Makes discussions, known as program management reviews. We observed the September 2014 America Makes program management review, during which representatives from the government, private industry, and academia discussed the status of the America Makes research projects and their additive manufacturing efforts. We also observed an additive manufacturing meeting that included participants from OSD, the Army, the Navy, and the Defense Logistics Agency to discuss the status of their ongoing additive manufacturing efforts and collaboration opportunities. For example, the Navy and the Defense Logistics Agency discussed their efforts to survey existing parts that would be candidates for additive manufacturing. The officials stated that they are willing to share information but are focusing on their service-specific efforts.
Additionally, DOD participates in the Government Organization for Additive Manufacturing (GO Additive), an informal, government-wide group with voluntary participation. The purpose of the group is, among other things, to facilitate collaboration among individuals from federal government organizations, such as DOD, that have an interest in additive manufacturing. According to Air Force officials, the group may develop a list of qualified materials and certified parts.

Although DOD components use various mechanisms to coordinate information on additive manufacturing, DOD does not systematically track the components' additive manufacturing efforts department-wide. Specifically, DOD does not systematically track additive manufacturing efforts, including (1) all activities performed and resources expended by DOD, including equipment and funding amounts; and (2) the results of these activities, including actual and potential performance and combat capability improvements, cost savings, and lessons learned. Standards for Internal Control in the Federal Government state that it is important for organizations to have complete, accurate, and consistent data to inform policy, document performance, and support decision making. The standards also call for management to track major agency achievements and to communicate the results of this information. In addition, our past work has identified practices for enhancing and sustaining agency coordination efforts that include, among other things, designating leadership, which is a necessary element for a collaborative working relationship.

However, DOD officials whom we interviewed could not identify a specific DOD entity that systematically tracked all activities or resources across the department, including equipment and funding amounts, related to additive manufacturing. Further, while Army, Navy, and Air Force Manufacturing Technology program officials provided us a list of their respective additive manufacturing activities and some funding information, variances in the types of information provided meant that the information was not comparable across the services. Since no one DOD entity, such as OSD, systematically tracks all aspects of additive manufacturing, DOD officials could not readily tell us the activities underway or the amount of funding being used for DOD's additive manufacturing efforts. According to an OSD official within the Office of Manufacturing and Industrial Base Policy, the department does not identify investments in additive manufacturing in its budget exhibits at this level of detail. The official stated that the department identifies overall manufacturing technology investments but does not specifically break out additive manufacturing. In addition to the research and development efforts, the official stated that DOD has ongoing additive manufacturing activities within the operational communities, such as military depots and arsenals, and it does not systematically track these activities either.

Additionally, while DOD components share information on the additive manufacturing equipment they own, DOD does not systematically track these machines to ensure that the components are aware of each other's additive manufacturing equipment. DOD has additive manufacturing machines whose costs range from a few thousand dollars to millions of dollars. In a constrained budget environment, it is also important to leverage these resources. According to officials within the U.S.
Army RDECOM, coordination groups such as that command's community of practice allow officials to share and understand each other's equipment and capabilities. In addition, according to these officials, the Navy and Air Force have provided information to the Army regarding their respective departments' equipment. According to Army and Navy officials, the Army and Navy also have equipment lists posted on a DOD collaboration website called milSuite. According to an official at the U.S. Air Force Research Laboratory, the Air Force does not have an official inventory listing of additive manufacturing equipment. However, the official added that a team recently completed a tasking, including visits to the Air Logistics complexes, to determine the equipment and capabilities available and in use.

Furthermore, DOD does not systematically track actual or potential performance and combat capability improvements, cost savings, or lessons learned. DOD component officials we interviewed have shared—within their respective components and, to a lesser degree, with other components—information on their individual performance and combat capability improvements, as well as on some cost savings attributable to additive manufacturing. For example, according to Army Rapid Equipping Force officials, they participate in a community of practice to share their lessons learned so that others can be informed about the needs of end users when developing their research priorities. The various DOD components are at different stages of research and implementation. However, DOD does not have a systematic process to obtain and disseminate the results and lessons learned across the components. Without this information, the department may not be able to leverage the components' respective experiences.

U.S. Army RDECOM officials agreed that it is important to improve cross-communication among the services and agencies, to avoid having to reinvent advances while they continue to expand the implementation of these technologies and capabilities. The officials added that the Materials and Manufacturing Processes Community of Interest already reports to the Office of the Assistant Secretary of Defense for Research and Engineering and to the DOD Science and Technology executives on the science and technology funding associated with materials and manufacturing. Therefore, the Army believes that DOD already has oversight and awareness. According to its chairperson, the Materials and Manufacturing Processes Community of Interest (a group that comprises eight technical teams) performs some level of activity in additive manufacturing, but it does not have a team that focuses solely on additive manufacturing. The chairperson added that this community of interest does not systematically track all aspects of additive manufacturing, such as medical applications, and that the information that is tracked and communicated to OSD is rolled up to a high level and submitted to the Office of the Assistant Secretary of Defense for Research and Engineering. An official within that office agreed that additional coordination of additive manufacturing efforts across the department would be helpful. The official stated that the office does not track all aspects of additive manufacturing.

DOD does not systematically track all department-wide additive manufacturing efforts because the department has not designated a lead or focal point at a senior level, such as OSD, to oversee the development and implementation of an approach to department-wide coordination.
Specifically, the department has not established a lead to develop and implement an approach for systematically (1) tracking department-wide activities and resources, including funding and an inventory of additive manufacturing equipment, as well as the results of these activities, such as additive manufacturing performance and combat capability improvements and cost savings, along with lessons learned; and (2) disseminating the results of these activities and an inventory of additive manufacturing equipment. A senior official within the Office of Manufacturing and Industrial Base Policy was aware of the various coordination groups. The official also saw value in collecting certain types of additive manufacturing information.

We recognize that while additive manufacturing has been in existence since the 1980s, it is still in its early stages as compared with conventional manufacturing techniques, especially with respect to producing critical parts such as those for aircraft. As the technology evolves, it is important for OSD to systematically track and disseminate the results of these additive manufacturing efforts department-wide. Without designating a lead or focal point responsible for developing an approach for systematically (1) tracking department-wide activities and resources, and the results of these activities; and (2) disseminating, department-wide, the results of these activities and an inventory of additive manufacturing equipment, DOD officials may not obtain the information they need to leverage resources and the ongoing experiences of the various components.

Additive manufacturing has been in existence since the 1980s, and DOD has begun looking toward using it to make existing product supply chains more efficient—by enabling on-demand production, which could reduce the need to maintain large product inventories and spare parts, and by enabling the production of parts and products closer to the location of their consumers, thereby helping DOD to achieve its missions. The technology is in its relative infancy, and it may be years or decades before it can achieve levels of confidence comparable to those available from conventional manufacturing processes. Across the department the various DOD components are engaged in activities and are expending resources in their respective efforts to determine how to use additive manufacturing to produce critical products. However, DOD does not systematically track and disseminate the results of additive manufacturing efforts department-wide, nor has it designated a lead to coordinate these efforts. As a result, DOD may not have the information it needs to leverage resources and lessons learned from additive manufacturing efforts and thereby facilitate the adoption of the technology across the department.

To help ensure that DOD systematically tracks and disseminates the results of additive manufacturing efforts department-wide, we recommend that the Secretary of Defense direct the following action: Designate a lead or focal point, at the OSD level, responsible for developing and facilitating the implementation of an approach for systematically tracking and disseminating information.
The lead or focal point should, among other things, track department-wide activities and resources, including funding and an inventory of additive manufacturing equipment, as well as the results of these activities—such as additive manufacturing performance and combat capability improvements and cost savings, along with lessons learned—and disseminate the results of these activities and an inventory of additive manufacturing equipment.

We provided a draft of this report to DOD for review and comment; the department provided technical comments that we considered and incorporated as appropriate. DOD also provided written comments on our recommendation, which are reprinted in appendix III. In commenting on this draft, DOD concurred with our recommendation that DOD designate an OSD lead or focal point to be responsible for developing and implementing an approach for systematically tracking department-wide activities and resources, and the results of these activities; and disseminating these results, and an inventory of additive manufacturing equipment, to facilitate adoption of the technology across the department. In response to this recommendation, DOD stated that within 90 days the department will make a determination and designation of the appropriate lead or focal point within OSD to be responsible for developing and facilitating the implementation of an approach for systematically tracking and disseminating information on additive manufacturing within the department.

We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps; the directors of the Defense Logistics Agency and the Defense Advanced Research Projects Agency; the Assistant Secretaries of Defense for Research and Engineering, and Health Affairs; the Deputy Assistant Secretaries of Defense for Manufacturing and Industrial Base Policy, and Maintenance Policy and Programs; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-5257 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix IV.

The Department of Defense (DOD) provided its briefing document to GAO on July 30, 2014. To determine the extent to which the briefing document to the Senate Armed Services Committee (henceforth referred to as "the Committee") addresses the three directed elements, two GAO analysts concurrently assessed DOD's May 2014 briefing document to determine whether it included the following Committee-directed elements: (1) potential benefits and constraints of additive manufacturing, (2) how the additive manufacturing process could or could not contribute to DOD missions, and (3) what technologies being developed at America Makes are being transitioned for DOD use. The analysts were consistent in their respective assessments of whether the briefing included the elements, and therefore it was not necessary for a third analyst to resolve any differences. We assessed the briefing document with the recognition that it was not meant to be a stand-alone document, but rather accompanied an oral briefing.
We met with officials from the Office of Manufacturing and Industrial Base Policy, America Makes, and the military services to determine the extent to which they were involved in creating the briefing document and to obtain additional information about additive manufacturing. We also shared with the DOD officials, including Office of Manufacturing and Industrial Base Policy officials, our preliminary assessment of DOD's briefing document to obtain their comments.

To determine the extent to which DOD has taken steps to implement additive manufacturing to improve performance, improve combat capability, and achieve cost savings, we reviewed DOD planning documents, such as the December 2014 DOD Manufacturing Technology Program report and briefing reports documenting the status of DOD's additive manufacturing efforts, as well as examples of any actual or potential performance and combat capability improvements, and examples of actual or potential cost savings. We also interviewed officials within the military services, Defense Logistics Agency, and Walter Reed National Military Medical Center to further discuss any current and potential applications of additive manufacturing, and any improvements and cost savings associated with using the technology. We did not review efforts related to additive manufacturing conducted by contractors for DOD.

To determine the extent to which DOD uses mechanisms to coordinate and systematically track additive manufacturing efforts across the department, we reviewed DOD coordination-related documents, such as charters and briefing slides, summarizing the purpose and results of any current DOD efforts related to advancing the department's use of additive manufacturing—that is, efforts by the Office of the Secretary of Defense (OSD), Defense Logistics Agency, Defense Advanced Research Projects Agency, and the services. We reviewed GAO's key considerations for implementing interagency collaborative mechanisms, such as designating leadership, which is a necessary element for a collaborative working relationship. We identified examples of coordination groups that DOD participates in to discuss ongoing additive manufacturing efforts. While we did not assess these groups to determine whether there were any coordination deficiencies, we made some observations based on GAO's reported collaborative mechanisms and practices for enhancing and sustaining these efforts. We also reviewed the Standards for Internal Control in the Federal Government, which emphasizes the importance of top-level management tracking the various components' achievements, to assess the extent to which DOD systematically tracks additive manufacturing efforts department-wide. Additionally, we discussed with OSD, Army, Navy, Air Force, Defense Logistics Agency, and Defense Advanced Research Projects Agency officials (1) any actions that have been taken for coordinating additive manufacturing efforts across the department, and (2) the extent to which DOD systematically tracks additive manufacturing efforts. Tables 4 and 5 present the DOD and non-DOD organizations we met with during our review.

We conducted this performance audit from July 2014 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives.
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The Office of the Secretary of Defense (OSD) Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, reporting to the Secretary of Defense, is responsible for all matters relating to departmental acquisition systems, as well as research and development, advanced technology, and developmental test and evaluation, among other things.

The OSD Office of the Assistant Secretary of Defense for Research and Engineering, reporting to the Under Secretary of Defense for Acquisition, Technology and Logistics, is responsible for providing science and engineering integrity leadership throughout DOD and facilitating the sharing of best practices to promote the integrity of DOD scientific and engineering activities. According to DOD senior officials, the Materials and Manufacturing Processes community of interest is one of 17 department-wide coordination groups organized by the Office of the Assistant Secretary of Defense for Research and Engineering to provide broad oversight of the DOD components' efforts in the science and technology areas for which the department has responsibilities. The senior officials added that this community of interest does not track all aspects of additive manufacturing and that the information that is tracked and communicated to the Office of the Assistant Secretary of Defense for Research and Engineering is rolled up to a high level.

The OSD Office of the Deputy Assistant Secretary of Defense for Maintenance Policy and Programs provides the functional expertise for centralized maintenance policy and management oversight for all weapon systems and military equipment maintenance programs and related resources within DOD.

The OSD Office of the Deputy Assistant Secretary of Defense for Manufacturing and Industrial Base Policy, reporting to the Under Secretary of Defense for Acquisition, Technology and Logistics, develops DOD policy and provides guidance, oversight, and technical assistance on assessing or investing in defense industrial capabilities. It has oversight responsibility for, among other programs, the Manufacturing Technology program, which develops technologies and processes that ensure the affordable and timely production and sustainment of defense systems, including additive manufacturing. In addition, OSD manages the Defense-wide Manufacturing Science and Technology program, which seeks to address cross-cutting initiatives that are beyond the scope of any one military service or defense agency.

The Army, the Navy, the Air Force, and the Defense Logistics Agency each have their own manufacturing technology programs, which select and execute activities such as additive manufacturing research. The Army, the Navy, and the Air Force have research and development laboratories—that is, the U.S. Army Research, Development and Engineering Command; the Office of Naval Research; and the U.S. Air Force Research Laboratory—for projects on the use of new materials, processes, and applications for additive manufacturing. Army, Navy, and Air Force depots and arsenals use additive manufacturing to produce plastic parts and prototypes for tooling and repairs, such as dust caps for radios, to reduce costs and turnaround time.

The Army Rapid Equipping Force will be reporting to the U.S. Army Training and Doctrine Command in October 2015, according to Army officials.
It uses additive manufacturing to produce prototypes for repairs, such as tooling and fixtures, to reduce costs and turnaround time.

Navy components—including the Office of the Chief of Naval Operations, Navy Business Office; the Naval Air Systems Command; and the Naval Sea Systems Command—plan to use additive manufacturing to enable a dominant, adaptive, and innovative naval force that is ready, able, and sustainable. According to Navy officials, in November 2013 the Chief of Naval Operations directed the Deputy Chief of Naval Operations for Fleet Readiness and Logistics to develop, de-conflict, and manage additive manufacturing efforts across the Navy. That office has since developed the Navy's 20-year additive manufacturing vision, according to Navy officials.

The Defense Advanced Research Projects Agency Defense Sciences Office identifies and pursues high-risk, high-payoff fundamental research initiatives across a broad spectrum of science and engineering disciplines, and transforms these initiatives into radically new, game-changing technologies for U.S. national security. According to a senior Defense Advanced Research Projects Agency official, the agency has initiated the Open Manufacturing program, which allows officials to capture and understand additive manufacturing concepts so that they can rapidly predict with high confidence how a finished part will perform. The program has two facilities—one at Pennsylvania State University and the other at the U.S. Army Research Laboratory—establishing permanent reference repositories and serving as testing centers to demonstrate applications of the technology being developed and as a catalyst to accelerate adoption of the technology.

The Defense Logistics Agency procures parts for the military services and is developing a framework to determine how to use additive manufacturing, according to Defense Logistics Agency officials.

The Walter Reed National Military Medical Center 3D Medical Applications Center is a military treatment facility that provides, among other things, computer-aided design and computer-aided manufacturing for producing medical models and custom implants through additive manufacturing. The Walter Reed National Military Medical Center falls within the National Capital Region Medical Directorate and is controlled by the Defense Health Agency, which in turn reports to the Assistant Secretary of Defense for Health Affairs.

In addition to the contact named above, Marilyn Wasleski, Assistant Director; Dawn Godfrey; Richard Hung; Carol Petersen; Andrew Stavisky; Amie Steele; Sabrina Streagle; Sarah Veale; Angela Watson; Cheryl Weissman; and Alexander Welsh made key contributions to this report.
As of July 2014, about 60,000 community retail pharmacies in the United States dispensed prescription drugs, of which approximately 66 percent were chain retail pharmacies and the remaining 34 percent were independent pharmacies, according to an industry study. In 2015, retail pharmacies dispensed about 4 billion prescriptions, while mail order pharmacies dispensed over 200 million prescriptions, according to one study.

Pharmacies' prescription drug container labeling practices may be affected by several types of entities:

PBMs (pharmacy benefit managers) that help many third-party payers—such as health plans—manage their prescription drug benefits by operating mail order pharmacies, assembling retail pharmacy networks that include both chain and independent pharmacies, and providing other services. PBMs issue corporate policies that govern their mail order pharmacy operations and enter into contracts with retail pharmacies in their networks that set forth the terms and conditions for dispensing prescriptions to health plan enrollees.

Chain pharmacy companies that operate chain retail pharmacies with four or more locations. These companies issue corporate policies that govern their retail pharmacy operations.

PSAOs (pharmacy services administrative organizations) that provide a broad range of administrative services to networks of retail pharmacies, including contract negotiation with third-party payers. To establish these networks, PSAOs enter into contracts with retail pharmacies—generally independent pharmacies—that set forth the duties and obligations of the PSAO and each pharmacy.

State pharmacy regulating bodies that oversee the practice of pharmacy through activities such as licensing pharmacies and issuing regulations. According to the National Association of Boards of Pharmacy, which represents state boards of pharmacy, as of February 2016, only one state—Massachusetts—requires pharmacies to provide large-print labels to individuals who are visually impaired and elderly upon request.

Pharmacy accreditation organizations that certify that pharmacies meet a predetermined set of standards for pharmacy care or functions, which may include elements for providing services to individuals who are blind or visually impaired.

Other entities may also develop or disseminate guidance on prescription drug container labels that may affect pharmacies' labeling practices. For example, standard-setting organizations may develop prescription drug container labeling standards, and entities such as state pharmacy regulating bodies can incorporate these standards into their pharmacy labeling requirements. Industry groups representing pharmacies or pharmacists and advocacy groups for individuals who are blind or visually impaired also may develop guidance, including prescription drug container labeling guidance, or use tools, such as newsletters or website postings, to disseminate guidance or other information to their members.

Accessible labels can make information on prescription drug container labels more easily available to individuals who are blind or visually impaired. Pharmacies can purchase hardware and software from private technology vendors to produce labels in audible, braille, and large print formats. Audible labels allow individuals to hear prescription drug container label information. Technologies for audible labels include talking pill bottles, which allow pharmacists to create a voice or digital recording of label information, and tags that can be encoded with label information, affixed to prescription drug containers, and read out by a separate device.
Braille labels allow individuals who are blind or visually impaired to read prescription drug container label information by touch, and large print labels enhance the size of label text for easier viewing. Pharmacists can produce hard copy braille or large print labels and affix them to the prescription drug container. See figure 1 for examples of accessible labels.

In 2012, the U.S. Access Board convened an 18-member working group to develop best practices to make prescription drug container label information accessible to individuals who are blind or visually impaired. This working group included representatives from mail order pharmacies; chain pharmacy companies; advocacy groups for individuals who are blind or visually impaired; and industry groups representing pharmacies and pharmacists. The working group's July 2013 report identified 34 best practices. These best practices offer guidance to pharmacists on how to deliver and provide accessible labels, and their adoption is voluntary. The best practices include those that promote access to prescription drug container label information in all accessible label formats as well as those specific to audible, braille, and large print formats. For example, one best practice that applies to all accessible label formats is for pharmacies not to impose an extra fee on individuals to cover the cost of providing accessible labels or equipment dedicated for prescription drug container label access.

The mail order pharmacies operated by the 4 PBMs, some retail pharmacies operated by the 9 chain pharmacy companies, and some of the 18 individual chain and independent retail pharmacy locations that we contacted for this review said they can provide accessible labels as of March 31, 2016. For example, officials from the 4 PBMs reported their mail order pharmacies generally can provide accessible labels, including audible, braille, and large print labels. Similarly, officials from 6 of the 9 chain pharmacy companies reported their retail pharmacies can provide accessible labels. Additionally, officials from 8 of the 18 randomly selected individual chain and independent retail pharmacy locations reported they can provide accessible labels. Of these 8 individual retail pharmacy locations, more were chain pharmacies (7) than independent pharmacies (1). Furthermore, officials from the PBMs more often reported that their mail order pharmacies can provide audible and braille labels, while officials from the chain pharmacy companies and individual retail pharmacy locations more often reported that their retail pharmacies can provide audible labels. (See table 1.)

The four PBMs that can provide accessible labels through their mail order pharmacies dispensed prescriptions with these labels from a central location and delivered them directly to customers. These PBMs used the same technologies to provide audible and braille labels, but differed in how they can provide large print labels through their mail order pharmacies. See table 2 for more information on how these PBMs can provide accessible labels through their mail order pharmacies.

The six chain pharmacy companies that can provide accessible labels through their retail pharmacies varied in terms of the accessible label formats they can provide, the number of retail locations that can provide them, and timeframes for providing prescriptions with these labels.
For example, officials from one chain pharmacy company reported to us that their retail locations can provide accessible labels in all formats, while others reported to us that their retail locations can provide accessible labels in one or two formats. Also, officials from five companies reported to us that they can provide accessible labels in all retail locations, while officials from one company reported they can provide accessible labels in one retail location. Further, some of these companies can provide prescriptions with accessible labels available for same-day pickup, while others delivered them directly to customers. Officials from the three chain pharmacy companies that cannot provide accessible labels reported that they can make other accommodations, such as providing information on a separate piece of paper in large print. See table 3 for more information on how selected chain pharmacy companies can provide accessible labels.

Officials from the four PBMs and three of the six chain pharmacy companies that can provide accessible labels through their pharmacies reported that the percent of prescriptions dispensed with such labels was generally low—less than 1 percent. For example, officials from one PBM stated their mail order pharmacy dispensed an average of about 21,000 prescriptions with accessible labels out of about 11.5 million total prescriptions dispensed each month during the first quarter of calendar year 2016. Officials from another PBM stated that they dispensed about 1,200 prescriptions with accessible labels out of about 3 million total prescriptions dispensed each month during the first quarter of 2016. Similarly, officials from one chain pharmacy company stated that their retail pharmacy locations dispensed an average of about 240 prescriptions with accessible labels out of about 6.5 million total prescriptions dispensed each month during the first quarter of 2016. Officials from the three remaining chain pharmacy companies could not provide us with the percent of prescriptions dispensed with accessible labels. However, officials from one of these companies stated that one of their retail locations dispensed prescriptions with accessible labels to 6 to 10 individuals who are blind or visually impaired each month and dispensed between 3,200 and 5,600 total prescriptions each month during the first quarter of 2016.
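As a rough illustration—our calculation using the approximate monthly counts reported above, not figures the officials themselves computed—the implied shares of prescriptions dispensed with accessible labels are:

\[
\frac{21{,}000}{11{,}500{,}000} \approx 0.18\%, \qquad \frac{1{,}200}{3{,}000{,}000} = 0.04\%, \qquad \frac{240}{6{,}500{,}000} \approx 0.004\%,
\]

all well below the 1 percent figure cited.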
Officials from the four PBMs, six chain pharmacy companies, and eight individual retail pharmacy locations that we contacted and that can provide accessible labels reported that their mail order and retail pharmacies have generally implemented most of the 34 best practices for these labels. Of these 34 best practices, 14 apply to all accessible labels (henceforth referred to as all-format best practices), 3 apply to audible labels, 7 apply to braille labels, and 10 apply to large print labels.

Officials from the four PBMs, four of the six chain pharmacy companies, and eight individual retail pharmacy locations generally reported that their mail order and retail pharmacies have implemented most of the 14 all-format best practices for accessible labels. These all-format best practices include specific recommendations to promote access to prescription drug container label information in all available formats—including audible, braille, and large print labels—and include practices such as pharmacists encouraging patients and their representatives to communicate their needs to the pharmacist. All selected PBMs, chain pharmacy companies, and individual retail pharmacy locations that provide accessible labels implemented practices such as making prescription drug container labels available in various accessible formats, as well as using the same quality control processes for prescription drug container labels in accessible formats as for print prescription drug container labels. See table 4 for further detail on all-format best practices implemented in pharmacies by selected PBMs, chain pharmacy companies, and individual retail pharmacy locations.

Officials from the four PBMs, five of the six chain pharmacy companies, and eight individual retail pharmacy locations told us that their mail order and retail pharmacies implemented most of the applicable format-specific best practices for audible, braille, and large print labels. These format-specific best practices include specific recommendations on how to provide these labels, and some of these practices apply only under certain circumstances. For example, six of the seven format-specific best practices for braille prescription drug container labels apply only to hard copy braille labels. The most commonly implemented applicable format-specific best practices across the PBMs, chain pharmacy companies, and individual retail pharmacy locations included speaking in a clear voice when recording an audible label, using transparent materials when embossing braille labels, and printing text in the highest possible contrast for large print labels. See tables 5 through 7 for further detail on the audible, braille, and large print format-specific best practices implemented by PBMs, chain pharmacy companies, and individual retail pharmacy locations.

Stakeholders we contacted most often identified three key barriers that individuals who are blind or visually impaired continue to face in accessing prescription drug container label information even after the publication of the best practices in 2013. Some of these stakeholders told us that the best practices have reduced some barriers to accessing prescription drug container label information for individuals who are blind or visually impaired by increasing pharmacies' awareness of these barriers or encouraging more pharmacies to provide accessible labels. However, other stakeholders told us that the types of barriers that individuals who are blind or visually impaired face have not changed.

Inability to identify medications independently. Stakeholders told us that individuals who are blind or visually impaired continue to face barriers identifying medications independently. Without accessible labels, individuals who are blind or visually impaired need to rely on a pharmacist or caregiver to help them identify medications. For example, some stakeholders said that pharmacists may offer medication counseling, such as allowing individuals who are blind or visually impaired to feel the size, shape, and weight of their medication and answering questions about dosage or side effects. Pharmacists may also place rubber bands on some prescription drug containers or use differently sized containers to help individuals who are blind or visually impaired identify different medications by their containers. However, according to some stakeholders, these alternative methods may not be reliable; for example, a rubber band may be removed from a prescription drug container, or caregivers may not understand medication directions.
Further, stakeholders stated that if accessible labels are not securely affixed to the prescription drug containers, then they can fall off and get mixed up, which could increase individuals' risk of medication errors.

Inability to identify pharmacies that can provide accessible labels. Stakeholders told us that individuals who are blind or visually impaired generally do not know which pharmacies can provide accessible labels. Many stakeholders stated that this inability stems from limited or no efforts to advertise accessible labels in pharmacies and from the lack of a centralized database with information on pharmacies that can provide these labels. Officials from the four selected PBMs reported taking steps to inform individuals who are blind or visually impaired about the accessible labels their mail order pharmacies can provide; for example, two PBMs reported training customer service representatives to ask specific questions to identify individuals who could benefit from prescriptions with accessible labels and to help them identify the accessible label that would best fit their needs. In contrast, other selected stakeholders that operate pharmacies told us that they do not advertise the accessible labels their pharmacies can provide. Specifically, officials from 2 of our 9 selected chain pharmacy companies and 4 of 18 individual retail pharmacy locations that submitted questionnaire responses reported to us that they generally do not advertise the accessible labels their pharmacy can provide, or that customers need to ask pharmacists about these labels in order to have them included on the prescription containers. Officials from an advocacy group also reported that individuals who are blind or visually impaired continue to be unable to identify pharmacies that can provide accessible labels because there is no centralized database that provides information on which pharmacies can provide such labels. Officials from the two technology vendors told us that they compiled information on retail pharmacies that can provide the accessible labels sold by their companies; however, their databases are limited to locations that can provide their specific products and do not include retail pharmacy locations that can provide accessible labels made by other technology vendors.

Inability to obtain prescriptions with accessible labels on the same day as requested. Stakeholders told us that individuals who are blind or visually impaired may be unable to obtain prescriptions with accessible labels on the same day as requested. For example, officials from two chain pharmacy companies stated that individuals who are blind or visually impaired can work with retail pharmacy staff to order prescriptions with accessible labels through mail order pharmacies and have these accessible prescriptions sent directly to them at a later date. Further, officials from these chain pharmacy companies reported that it may take up to 72 hours from the time an individual requests a prescription with an accessible label to the time the individual receives that prescription. Officials from one advocacy group raised concerns that this delay in obtaining prescriptions through the mail order pharmacy is unreasonable for certain time-sensitive prescriptions that must be dispensed immediately, such as antibiotics to treat an infection.
Stakeholders most often identified four key challenges that pharmacies had in providing accessible labels or implementing the best practices and identified steps that could address some of these challenges.

Lack of awareness of the best practices. Stakeholders identified lack of awareness of the best practices by pharmacies and others as a key challenge:

Pharmacies (including pharmacists and pharmacy staff). Federal agencies, advocacy groups, technology vendors, and an accreditation organization told us that pharmacies were not aware of the best practices. Further, officials from 7 of 18 individual retail pharmacy locations stated that they first learned about the best practices when we contacted them. Additionally, some stakeholders told us that individuals who are blind or visually impaired are generally unaware of the best practices and, as a result, may not request accessible labels at their pharmacies.

Other stakeholders. Other stakeholders that could affect pharmacies' labeling practices or provide medical services to individuals who are blind or visually impaired were unaware of the best practices. For example, the four states and an industry group representing physicians told us that they were unaware of the best practices prior to our contact with them. After our outreach, one state published an article about the best practices in its newsletter and discussed these practices with pharmacists, pharmacy staff, and the public at two public meetings in May and July 2016.

Of those stakeholders who identified this challenge, many stated that greater dissemination of information on the best practices could increase awareness of them. Additionally, NCD officials told us that they would continue to disseminate information on the best practices as long as stakeholders remained unaware of them.

Low demand and high costs for providing accessible labels. Another challenge that stakeholders identified is that pharmacies had low demand and incurred high costs to provide accessible labels. Officials from five chain pharmacy companies and four individual retail pharmacy locations told us that they have had relatively few or no customer requests for accessible labels. Some stakeholders reported that the demand for these labels does not justify the costs to provide them. These costs include staff costs—such as training or the time needed to produce these labels—as well as the costs associated with the technology required to produce the labels—such as purchasing software, printers, or labels. Two stakeholders told us that the initial costs to purchase this technology may range from a few hundred to a few thousand dollars for each individual retail pharmacy location. Further, these pharmacy locations may incur ongoing costs, such as annual fees of up to a few hundred dollars to cover technical assistance and other services or fees of up to a few dollars to purchase additional accessible labels. Additionally, many stakeholders stated that it may be costly for larger chain pharmacy companies to implement technology and train staff in many locations, while smaller independent pharmacies may have difficulty absorbing the costs of purchasing the new technology they need to produce accessible labels. Of those stakeholders who identified this challenge, some stated that financial support for pharmacies, such as third-party reimbursement, could address the high costs that pharmacies incur to provide accessible labels that meet the best practices.
These stakeholders stated that there is currently no direct financial support for providing these labels and that these labels are provided free of charge to customers. Officials from four chain pharmacy companies told us that pharmacies may be willing to provide accessible labels that meet the best practices if third parties, such as health plans, were willing to reimburse or share in the costs of producing these labels. Additionally, officials from one industry group representing pharmacists stated that pharmacies may be more willing to provide accessible labels that meet the best practices if grant money were available to cover costs for producing these labels.

Technical challenges for providing accessible labels. Stakeholders identified some technical challenges for providing accessible labels that meet the best practices. For example, officials from one state and four chain pharmacy companies told us that pharmacies face challenges fitting all the required prescription label information in large print formats on small prescription drug containers. Officials from one technology vendor stated that printing the large print labels in a booklet form, which can then be affixed to the prescription drug container, could address this challenge. Additionally, officials from a chain pharmacy company, a state regulating body, and a federal agency told us that pharmacists typically cannot independently verify information on braille labels to ensure their accuracy. Specifically, three stakeholders expressed concern that pharmacists who cannot read braille cannot determine if the braille translation is accurate and therefore must rely on the accuracy of the braille technology to translate prescription label information to braille.

Absence of requirements to implement the best practices. Stakeholders told us that some pharmacies are not implementing the best practices, given an absence of requirements to do so by applicable corporate policies, contracts, state regulations, or accreditation standards.

Corporate pharmacy policies. Officials from all four PBMs and four of the nine chain pharmacy companies told us that they incorporated some, but not all, of the best practices into their corporate policies that pharmacies must follow. However, officials from three chain pharmacy companies told us that their corporate policies do not include any of the best practices and their retail pharmacies cannot offer any services for individuals who are blind or visually impaired other than what has been approved at the corporate level.

Contracts with retail pharmacies. Officials from all four PBMs and all three PSAOs told us that their contracts with retail pharmacies in their networks do not require pharmacies to implement the best practices.

Pharmacy accreditation standards. Officials from two accreditation organizations told us that their pharmacy standards do not incorporate the best practices. Pharmacies must comply with standards for the accreditation processes they choose to undergo. Two accreditation organizations reported that they have standards that address services for individuals with disabilities, but these standards are not specific to drug labeling for the visually impaired and do not incorporate the best practices.

State regulations. Officials from all four states told us that their state's regulations do not incorporate the best practices.
They also stated that they did not have any plans to update their current regulations to incorporate the best practices; however, officials from one state told us that they may consider doing so in the future. Massachusetts does have a law requiring the provision of large print labels to the visually impaired and elderly upon request, but the font size requirement differs from that of the best practices.

Of those stakeholders who identified this challenge, most told us that more pharmacies may implement the best practices if corporate pharmacy policies or pharmacy accreditation standards incorporated them. For example, officials from three chain pharmacy companies, one advocacy group, one industry group, and one technology vendor told us that pharmacies could implement the best practices if corporate pharmacy policies included them. Further, officials from two individual retail pharmacy locations stated that they require corporate approval to implement any technologies to produce accessible labels that meet the best practices. Additionally, officials from one PBM and one technology vendor told us that more pharmacies would implement the best practices if pharmacy accreditation standards incorporated them.

We found that NCD conducted limited campaign activities from July 2013 through August 2016 to inform and educate pharmacies (including pharmacists and pharmacy staff), individuals who are blind or visually impaired, and the public about the best practices. For example, prior to the publication of the U.S. Access Board working group's report, NCD sent emails to members of the working group to solicit ideas on how the agency could coordinate with working group members to disseminate information on the best practices once they were published. From July 2013 through February 2016, NCD issued an agency statement and two press releases through its website, listserv, and online social media about the best practices and pharmacies' agreements with advocacy groups to provide accessible labels; hosted a conference call with three advocacy groups to discuss how they could conduct outreach as part of NCD's campaign; and published a blog post on accessible labels. However, the agency did not conduct any campaign activities in 2015. From June through August 2016, NCD developed a brochure on some of the best practices, disseminated the brochure through its website, and coordinated with the U.S. Access Board, one industry group representing pharmacists, and one chain pharmacy company to disseminate this brochure. See table 8 for a timeline of NCD's campaign activities.

Most of the selected stakeholders we spoke with—including PBMs, chain pharmacy companies, states, and advocacy groups—have not had any communication with NCD about its campaign, and, as previously discussed, some were unaware of the best practices. When we first interviewed NCD officials in February 2016, they could not provide us with a fully developed and documented plan for conducting and evaluating the agency's campaign, nor did they provide one in subsequent follow-up through August 2016. However, in September 2016, during a meeting to review NCD's campaign activities, officials told us they had developed a plan in December 2013 for conducting campaign activities that were to occur throughout 2014. These activities consisted of developing a virtual toolkit for stakeholders to use for planning their own outreach, according to documentation NCD provided.
However, we determined that NCD did not conduct most of these activities. Subsequent to our September 2016 meeting, officials provided us with a corrective action plan with timeframes for conducting future campaign activities through fiscal year 2017, including some of the activities that NCD did not conduct in 2014. The development of this corrective action plan is a positive step toward conducting campaign activities. However, neither the original plan nor the corrective action plan assigned responsibilities for campaign activities. This is inconsistent with federal internal control standards, which indicate that an agency should assign responsibilities to achieve its objectives. Given that most of the activities NCD originally planned for 2014 never occurred, this lack of specificity regarding responsibilities is concerning because it does not provide assurance that the agency will conduct future campaign activities as planned. Further, officials could not provide us any plans for how they will evaluate the agency's campaign activities. NCD officials stated that the agency has not evaluated, and has no plans to evaluate, its campaign activities, other than tracking the number of likes or retweets on its social media posts. Federal internal control standards indicate that an agency should design and execute a plan to evaluate its activities, document evaluation results, and identify corrective actions to address identified deficiencies. In the absence of a formal evaluation plan, NCD officials will be unable to determine the effectiveness of their campaign activities and make adjustments, as needed.

The U.S. Access Board published best practices to make information on prescription drug container labels accessible to the approximately 7.4 million Americans who are blind or visually impaired. However, there continues to be a lack of awareness among a variety of stakeholders that these best practices exist. NCD, the agency charged with conducting a campaign to inform and educate stakeholders of these practices, has not had an effective plan to conduct its campaign and, consequently, conducted limited activities from July 2013 through August 2016. For example, the agency did not conduct most of its planned campaign activities in 2014 and conducted no activities in 2015. Although NCD now has a corrective action plan for activities it intends to conduct through fiscal year 2017, it has not assigned responsibilities for these activities and has not developed an evaluation strategy for its activities, which is inconsistent with federal internal control standards. Without ensuring these elements are in place, NCD will be unable to adjust its corrective action plan and assess whether the information it is providing on the best practices is effectively reaching its target audience.

The Executive Director of NCD should assign responsibilities for conducting future campaign activities and develop an evaluation plan for its activities.

We provided a draft of this report to the U.S. Access Board and NCD for comment. Both agencies provided written comments, which we have reprinted in appendixes II and III, respectively. The U.S. Access Board said that it found our report to be complete and accurate. In its written comments, NCD did not specifically state whether it agreed with our recommendation, but signaled its intention to revise its corrective action plan for conducting campaign activities through fiscal year 2017.
NCD stated that it has reassessed its plan and is taking action to ensure ongoing compliance with federal internal control standards. NCD also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Executive Director of the U.S. Access Board, the Executive Director of the National Council on Disability, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

We developed a web-based questionnaire that included questions on (1) the extent to which pharmacies can provide accessible labels and have implemented the specific U.S. Access Board best practices for making information on prescription drug container labels accessible to individuals who are blind or visually impaired (henceforth referred to as best practices); (2) barriers that individuals who are blind or visually impaired face in accessing information on prescription drug container labels; and (3) factors that affect pharmacies' implementation of the best practices and steps that could address any implementation challenges.

We sent this questionnaire to pharmacy benefit managers (PBM) that operate mail order pharmacies that centrally fill prescriptions and send them directly to individuals; chain pharmacy companies that operate retail pharmacies in traditional pharmacy locations, supermarkets, and mass merchandise stores; and individual retail pharmacy locations—including chain pharmacies (those with four or more locations under common ownership) and independent pharmacies (those with three or fewer retail locations under common ownership):

Four PBMs that manage prescription drug benefits for the four largest private insurers that sponsor Medicare Part D plans as of March 2016. To select PBMs, we analyzed Medicare Part D contract and enrollment data as of March 2016 from the Centers for Medicare & Medicaid Services, which were the most recent available data at the time we began our work. Using these data, we identified the four private insurers that sponsor Medicare Part D plans with the largest percentage of Medicare Part D enrollment as of March 2016—representing a total of about 60 percent of Medicare Part D enrollees—and selected the four PBMs that managed the prescription drug benefits for these private insurers.

Nine of the 10 largest chain pharmacy companies as of March 2016. To select these companies, we obtained data from the National Council of Prescription Drug Programs on the 16 chain pharmacy companies with the most retail pharmacy locations as of March 2016, which were the most recent available data at the time we began our data collection. We compared this list of pharmacies to data from the National Association of Chain Drug Stores on their members as of March 2016 and reconciled any differences between these two lists of data. We contacted the 10 largest chain pharmacy companies based on their number of retail pharmacy locations, which ranged from about 480 to over 9,700, and 9 agreed to participate in our study.
Thirty-eight individual retail pharmacy locations that included both chain and independent pharmacies in metropolitan and non-metropolitan areas in the four states for which we interviewed the state pharmacy regulating bodies—California, Florida, Illinois, and Massachusetts. To make our selection, we obtained data as of May and June 2016 on the active licensed retail pharmacies in each of the four states—including pharmacy name, the county in which the pharmacies were located, and pharmacy license number. Then, using the U.S. Department of Agriculture's 2013 Rural-Urban Continuum Codes data, which classifies counties by their population size and degree of urbanization, we coded pharmacies by the county in which they were located. We used the coded data to create two randomized lists—one for pharmacies in metropolitan counties and a second for pharmacies in non-metropolitan counties—to use for selection. Using these lists, we targeted two independent and two chain pharmacies from metropolitan counties, since most pharmacies were located in metropolitan areas, and one independent and one chain pharmacy from non-metropolitan counties.

During the development of our questionnaire, we pretested it with two randomly selected individual retail pharmacy locations (one chain and one independent pharmacy) to ensure that our questions and response choices were clear, appropriate, and answerable. We then made changes to the content of the questionnaire based on feedback obtained from the pretests. We administered the web-based questionnaire from July 2016 through September 1, 2016, and received responses from the 4 selected PBMs, 7 of the 9 selected chain pharmacy companies, and 18 of 38 randomly selected individual retail pharmacy locations. The 18 individual retail pharmacy locations represented 10 chain and 8 independent pharmacies in both metropolitan and non-metropolitan areas in all four of our selected states.

In addition to the contact named above, individuals making key contributions to this report include Rashmi Agarwal, Assistant Director; Kristin Ekelund, Analyst-in-Charge; Melissa Duong; and John Lalomio. Also contributing were George Bogart, Carolyn Fitzgerald, Laurie Pachter, and Vikki Porter.
GAO found that some pharmacies can provide accessible prescription drug labels, which include labels in audible, braille, and large print formats and are affixed to prescription drug containers. Mail order pharmacies: Four pharmacy benefit managers (PBMs) used by large insurers that GAO contacted reported that they can provide accessible labels through their mail order pharmacies. Retail pharmacies: Six of the 9 largest chain pharmacy companies and 8 of the 18 selected individual retail pharmacy locations GAO contacted also reported that they can provide accessible labels through their store-based retail pharmacies. The percentage of prescriptions dispensed with accessible labels was generally low—less than one percent of all prescriptions dispensed—according to some PBMs and chain pharmacy companies that GAO contacted.

With regard to best practices, a working group convened by the U.S. Access Board—a federal agency that promotes accessibility for individuals with disabilities—developed and published 34 best practices for accessible labels. Four PBMs, six chain pharmacy companies, and eight individual retail pharmacy locations GAO contacted reported that they have generally implemented most of the 34 best practices for accessible labels. However, stakeholders GAO contacted said that individuals who are blind or visually impaired continue to face barriers accessing drug label information, including identifying pharmacies that can provide accessible labels.

Stakeholders GAO contacted identified four key challenges that pharmacies faced in providing accessible labels or implementing the best practices: (1) lack of awareness of the best practices; (2) low demand and high costs for providing accessible labels; (3) technical challenges for providing these labels; and (4) an absence of requirements to implement the best practices. Many stakeholders identified greater dissemination of the best practices as a step, among others, that could help address some of these challenges.

The National Council on Disability (NCD)—the federal agency responsible for conducting an informational campaign on the best practices, as required by the Food and Drug Administration Safety and Innovation Act (FDASIA)—has conducted limited campaign activities. Primarily in 2013 and 2014, NCD used its website and social media to disseminate an agency statement and press releases on the best practices. However, most stakeholders GAO spoke with said they had no communication with NCD about its campaign, and some said they were unaware of the best practices. Agency officials provided GAO with an original plan for conducting campaign activities through 2014, but most activities were not conducted. During the course of GAO's review, NCD developed a corrective action plan for conducting future campaign activities. However, neither plan assigned responsibilities for conducting these activities, nor does the agency have plans to evaluate them, which is inconsistent with federal internal control standards. Without assigning responsibilities and developing an evaluation plan, NCD will be unable to adjust its action plan and assess whether the information on the best practices is effectively reaching its target audience. NCD should assign responsibility for conducting campaign activities and evaluate these activities. NCD neither agreed nor disagreed with the recommendation but indicated that it is taking steps to address this issue.
BIA's irrigation program was initiated in the late 1800s, as part of the federal government's Indian assimilation policy, and it was originally designed to provide economic development opportunities for Indians through agriculture. The Act of July 4, 1884, provided the Secretary of the Interior $50,000 for the general development of irrigation on Indian lands. Over the years, the Congress continued to pass additional legislation authorizing and funding irrigation facilities on Indian lands. BIA's irrigation program includes over 100 "irrigation systems" and "irrigation projects" that irrigate approximately 1 million acres primarily across the West. BIA's irrigation systems are non-revenue-generating facilities that are primarily used for subsistence gardening, and they are operated and maintained through a collaborative effort that generally involves other BIA programs, tribes, and water users. In contrast, BIA's 16 irrigation projects charge their water users an annual operations and maintenance fee to fund the cost of operating and maintaining the project. Most of BIA's irrigation projects are considered self-supporting through these operations and maintenance fees. The 16 irrigation projects are located on Indian reservations across the agency's Rocky Mountain, Northwest, Southwest, and Western regions (see fig. 1). BIA's management of the 16 irrigation projects is decentralized, with regional and local BIA offices responsible for day-to-day operations and maintenance. Table 1 provides the tribe or tribes served by each of the 16 irrigation projects along with the year each project was originally authorized.

The irrigation facilities constructed by BIA included a range of structures for storing and delivering water for agricultural purposes. Figure 2 highlights an example of the key structural features found on BIA's irrigation projects. The beneficiaries of BIA's projects have evolved over time and at present are quite diverse. Over the years, non-Indians have bought or leased a significant portion of the land served by BIA's irrigation program. As a result, current water users on BIA's projects include the tribes, individual Indian landowners, non-Indian landowners, and non-Indian lessees of Indian lands. The extent of non-Indian landownership and leasing ranges significantly across BIA's irrigation projects (see table 2). For example, 100 percent of the land served by the Colorado River Irrigation Project is Indian owned, while only about 10 percent of the land served by the Flathead Irrigation Project is Indian owned.

Federal regulations and internal BIA guidance require that BIA collaborate with water users, both Indian and non-Indian, in managing the irrigation projects. For example, federal regulations state that close cooperation between BIA and water users is necessary and that the BIA official in charge of each project is responsible for consulting with all water users in setting program priorities.
In addition, BIA's manual requires that BIA "provide opportunities for water user participation in matters relating to irrigation project operations" and that BIA's officer-in-charge "meet regularly with water users to discuss proposed [operation and maintenance] assessment rates … general operations and maintenance." Although BIA guidance does not define "regularly," BIA's Irrigation Handbook explicitly recommends that project staff meet at least twice annually to discuss work performed over the course of the year and allow for water user feedback and suggestions for the coming year. Furthermore, BIA's Irrigation Handbook states that, at a minimum, BIA should discuss annual project budgets and work plans with water users.

Since their inception, BIA's 16 irrigation projects have been plagued by maintenance concerns. Construction of the projects was never fully completed, resulting in structural deficiencies that have continually hindered project operations and efficiency. In addition, water users and BIA have reported that operations and maintenance fees provide insufficient funding for project operations. Due to insufficient funding, project maintenance has been consistently postponed, resulting in an extensive and costly list of deferred maintenance items. Such deferred maintenance ranges from repairing or replacing dilapidated irrigation structures to clearing weeds from irrigation ditches.

In addition, concerns regarding BIA's management of the projects have been raised for years, particularly in regard to its financial management practices. For example, problems concerning BIA's billing practices for its operations and maintenance fees have been raised by many, prompting independent review on more than one occasion. We and the Department of the Interior's Inspector General have both identified serious problems with the land use records BIA has used to develop its annual operations and maintenance bills. In response, BIA instituted a new financial management system called the National Irrigation Information Management System, which has begun to address some of the billing errors. However, concerns still exist regarding the accuracy of the data in the billing system. The accuracy of some of the information in the irrigation billing system is dependent on the irrigation program receiving accurate and timely information from other BIA programs, such as land ownership and leasing information from BIA's Real Estate Services program. In 2001, the Yakama tribe and individual tribal members filed appeals challenging the Wapato Irrigation Project's operation and maintenance fees for the pre-2000 and year 2000 bills. Furthermore, the Wapato Irrigation Project agreed not to send any bills to the tribe or its members, and it has not done so since 2001. Although a settlement is under discussion, in the interim the Wapato Irrigation Project has not been able to collect about $2 million annually of its expected revenue.

According to BIA's latest estimate, it will cost about $850 million to complete the deferred maintenance on all of its 16 irrigation projects; but this estimate is still being refined. BIA initially estimated its deferred maintenance costs at over $1 billion in fiscal year 2004, but acknowledged that this estimate was preliminary and would need to be revised largely because it incorrectly included new construction items and was developed by non-engineers. BIA revised this estimate downward in fiscal year 2005 based on the implementation of a new facilities management system.
However, BIA plans to further refine this estimate since some projects continued to incorrectly count new construction items as deferred maintenance.

As part of its ongoing effort to identify the needs and costs of deferred maintenance on its 16 irrigation projects, BIA estimated in fiscal year 2004 that it would cost approximately $1.2 billion to complete all deferred maintenance. This initial estimate was based, in part, on preliminary condition assessments of irrigation structures and equipment for each of BIA's 16 irrigation projects. These preliminary condition assessments generally consisted of visual inspections to classify each project's structures and equipment using a scale of good, fair, poor, critical, and abandoned based on the apparent level of disrepair. BIA staff then estimated how much it would cost to repair each item based on its condition classification. BIA generally defines deferred maintenance as upkeep that is postponed until some future time. Deferred maintenance varies from project to project and ranges from clearing weeds and trees that divert water from irrigation ditches, to repairing leaky or crumbling check gates designed to regulate water flow, to resloping eroded canal banks to optimize water flow. Figure 3 shows examples of deferred maintenance on some of the irrigation projects we visited (clockwise from the upper left, figure 3 shows (1) a defunct check gate and overgrown irrigation ditch at the Fort Belknap Irrigation Project, (2) a cattle crossing eroding a canal bank and impairing water flow at the Wind River Irrigation Project, (3) a crumbling irrigation structure at the Crow Irrigation Project, and (4) a check gate leaking water at the Colorado River Irrigation Project). For detailed information on key maintenance issues for each of the nine projects we visited, see appendix II.

BIA officials acknowledged that their fiscal year 2004 deferred maintenance estimate was only a starting point and that it needed to be revised for three key reasons: (1) the individuals who conducted the assessments were not knowledgeable about irrigation projects or infrastructure; (2) not all projects used the same methodology to develop their deferred maintenance cost estimates; and (3) some projects incorrectly counted new construction items as deferred maintenance.

BIA's preliminary condition assessments were conducted by computer specialists, rather than by people with the expertise in irrigation or engineering needed to accurately assess project infrastructure. BIA contracted with geographic information system experts primarily to catalogue the structures on each project. These geographic information system experts also observed the condition of the structures they catalogued and classified the condition of each structure, based on the level of apparent disrepair, as part of the overall effort to inventory and map key structures on each project. Consequently, some items identified as being in "poor" condition may in fact be structurally sound but simply appear cosmetically dilapidated, whereas other structures classified as being in "good" condition may in fact be structurally dilapidated but appear cosmetically sound. For example, according to BIA staff at the Colorado River Irrigation Project, the recent repainting of certain check gates disguised severe rust and structural deterioration of key metal parts.

BIA staff used inconsistent methodologies to develop the cost estimates for deferred maintenance.
According to BIA staff, the deferred maintenance cost estimates were developed by different people, sometimes using different or unknown methodologies for assigning cost values to deferred maintenance items. For example, some projects developed their own cost estimates and sent them to BIA's central office for inclusion in its overall figures, while BIA regional staff developed cost estimates for other projects based, in part, on information from BIA's preliminary condition assessments.

Some projects incorrectly included new construction items as deferred maintenance. According to BIA, work that would expand a project or its facilities should not be categorized as deferred maintenance. Therefore, expanding an existing water delivery system or constructing a new building is not deferred maintenance. However, some projects incorrectly counted new construction items as deferred maintenance. For example, the Fort Hall Irrigation Project included increasing the capacity of its main canal for about $15.3 million, the Duck Valley Irrigation Project included building new canals for about $1.3 million, and the Flathead Irrigation Project included building a new warehouse for about $147,000.

To improve the accuracy of its deferred maintenance estimate in 2005 and to help staff develop, track, and continuously update deferred maintenance lists and cost estimates, BIA implemented MAXIMO—a facilities management system linked to the geographic information system mapping inventory developed from its preliminary condition assessments. Using data from MAXIMO, BIA revised its total deferred maintenance estimate for the irrigation projects downward to about $850 million for fiscal year 2005. Figure 4 shows the current deferred maintenance cost estimate for each of the 16 projects. In the summer of 2005, BIA technical experts from the central irrigation office conducted training for BIA irrigation projects on how to use MAXIMO to enter information on maintenance needs, and how to correctly define deferred maintenance. Projects used this system to revise their lists of deferred maintenance items and associated cost estimates in fiscal year 2005. While MAXIMO is still being tailored to the needs of the irrigation program, its implementation generally standardized the process for identifying and calculating deferred maintenance among projects.

Despite the implementation of MAXIMO, BIA's fiscal year 2005 estimate of deferred maintenance is still inaccurate for the following reasons:

Some projects continued to incorrectly count certain items as deferred maintenance. Despite training, some projects continued to incorrectly count certain items, such as new construction items and vehicles, as deferred maintenance. For example, the Fort Hall Irrigation Project included the installation of permanent diversion structures for about $2.1 million, the Wapato Irrigation Project included constructing reservoirs for about $640,000, and the San Carlos Indian Works Irrigation Project included building a new office for about $286,000. In addition, some projects included the cost of repairing vehicles or buying new ones in their deferred maintenance estimates, despite BIA's new guidance that such items are not deferred maintenance. According to BIA officials, while projects can consider the weed clearing postponed due to broken vehicles as deferred maintenance, the delayed repair of the vehicle itself is not deferred maintenance.
For example, the Wind River Irrigation Project included an excavator vehicle for about $500,000 and the Crow Irrigation Project included dump trucks for about $430,000.

Some projects provided BIA with incomplete information. According to BIA officials, some projects did not do thorough assessments of their deferred maintenance needs, and some may not be including legitimate deferred maintenance items, such as resloping canal banks that have been eroded by crossing cattle or overgrown vegetation. Moreover, both the Walker River and the Uintah Irrigation Projects failed to provide information detailing their deferred maintenance costs, and several projects lumped items together as "other" with little or no explanatory information other than "miscellaneous"—accounting for almost one-third of BIA's total deferred maintenance cost estimate for its irrigation projects (see fig. 5).

BIA made errors when compiling the total deferred maintenance cost estimates. For example, BIA inadvertently double-counted the estimate provided by the Colorado River Irrigation Project when compiling the overall cost estimate, according to BIA officials. Additionally, BIA officials erroneously estimated costs for all structures, such as flumes and check gates, based on the full replacement values even when items were in good or fair condition and needed only repairs. These structures account for over one-third of BIA's total deferred maintenance estimate (see fig. 5).

While the inclusion of incorrect items and calculation errors likely overestimate BIA's total deferred maintenance costs, the incomplete information provided by some projects may underestimate total costs. To further refine its cost estimate and to develop more comprehensive deferred maintenance lists, BIA plans to hire experts in engineering and irrigation to periodically conduct thorough condition assessments of all 16 irrigation projects to identify deferred maintenance needs and costs. According to BIA officials, these thorough condition assessments are expected to more accurately reflect each project's actual deferred maintenance, in part because experts in engineering and irrigation who can differentiate between structural and cosmetic problems will conduct them. These assessments will also help BIA prioritize the allocation of potential funds to complete deferred maintenance items because they will assign a prioritization rating to each deferred maintenance item based on the estimated repair or replacement cost as well as the overall importance to the project. The first such assessment was completed for the Flathead Irrigation Project in July 2005, and BIA plans to reassess the condition of each project at least once every 5 years, with the first round of such condition assessments completed by the end of 2010.
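The classification and compilation rules described above lend themselves to a simple illustration. The following Python sketch is entirely hypothetical—the record fields, condition ratings, and repair fractions are illustrative assumptions, not drawn from MAXIMO or any BIA data—but it shows how a compiled estimate changes when new construction and vehicles are excluded, duplicate submissions are dropped, and good- or fair-condition structures are costed as repairs rather than full replacements.

```python
# Hypothetical sketch (not BIA's actual MAXIMO logic): applying the
# report's classification rules to a list of maintenance records.
from dataclasses import dataclass

# Assumed repair fractions: good/fair items are costed as partial
# repairs rather than full replacement; poor/critical cost full value.
REPAIR_FRACTION = {"good": 0.10, "fair": 0.35, "poor": 1.0, "critical": 1.0}
EXCLUDED_CATEGORIES = {"new construction", "vehicle"}  # not deferred maintenance

@dataclass(frozen=True)
class MaintenanceItem:
    project: str             # e.g., "Wind River"
    description: str         # e.g., "reslope eroded canal bank"
    category: str            # e.g., "canal", "check gate", "vehicle"
    condition: str           # good / fair / poor / critical
    replacement_cost: float  # full replacement value in dollars

def deferred_maintenance_total(items: list[MaintenanceItem]) -> float:
    """Sum estimated deferred maintenance, applying the report's rules:
    drop new construction and vehicles, skip duplicate records, and
    cost good/fair items as repairs instead of full replacement."""
    seen: set[tuple[str, str]] = set()
    total = 0.0
    for item in items:
        if item.category in EXCLUDED_CATEGORIES:
            continue  # expansion work and equipment purchases are excluded
        key = (item.project, item.description)
        if key in seen:
            continue  # guard against double-counting a project's submission
        seen.add(key)
        total += item.replacement_cost * REPAIR_FRACTION[item.condition]
    return total

items = [
    MaintenanceItem("Colorado River", "repair leaking check gate", "check gate", "poor", 120_000),
    MaintenanceItem("Colorado River", "repair leaking check gate", "check gate", "poor", 120_000),  # duplicate
    MaintenanceItem("Wind River", "replace excavator", "vehicle", "critical", 500_000),  # excluded
    MaintenanceItem("Crow", "reslope eroded canal bank", "canal", "fair", 80_000),
]
print(f"${deferred_maintenance_total(items):,.0f}")  # $148,000, not $820,000
```

Under these illustrative rules, the naive sum of $820,000 falls to $148,000: the duplicate and the vehicle drop out, and the fair-condition canal is costed as a repair. The same mechanics, in the other direction, explain why incomplete project submissions can push a compiled total too low.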
(See appendix II for detailed information on key management concerns at each of the nine projects we visited.)

Under BIA's organizational structure, in many cases, officials with the authority to oversee project managers' decisionmaking lack the expertise needed to do so effectively, while the staff who do have the expertise lack the necessary authority to oversee project managers' decisionmaking. BIA regional directors, agency superintendents, and agency deputy superintendents who oversee the projects do not generally have engineering or irrigation expertise, and they rely heavily on the project managers to run the projects. (See fig. 6 for an organizational chart showing the lines of authority for providing oversight of a typical BIA irrigation project.) Of the nine projects we visited, only two had managers at the regional or agency levels who are experts in irrigation or engineering. At the same time, BIA staff with the irrigation and engineering expertise—regional irrigation engineers and central irrigation office staff—have no authority over the 16 projects under BIA's current organizational structure. Consequently, key technical decisions about project operations and maintenance, such as when or how to repair critical water delivery infrastructure, do not necessarily get the technical oversight or scrutiny needed.

This organizational structure and reliance on the project managers breaks down when the person managing the project lacks the expertise required for the position—that is, in cases in which BIA has had difficulty filling project manager vacancies and has, as a result, hired less qualified people or has the agency deputy superintendent temporarily serving in the project manager position. Of the nine projects we visited, four lacked project managers for all or part of the 2005 irrigation season and only five project managers were experts in engineering or irrigation. The GAO Internal Control Management and Evaluation Tool recommends that federal agencies analyze the knowledge and skills needed to perform jobs appropriately and provides guidance on organizational structure and identification of potential risks to the agency in that structure. Specifically, it recommends that adequate mechanisms exist to address risks—such as the risks associated with staff vacancies or hiring less qualified staff.

When the project manager is under-qualified and unchecked by managers who rely heavily on his or her decisionmaking, the potential for adverse impacts on the operations and maintenance of an irrigation project increases. For example, at the Crow Irrigation Project in 2002, a project manager with insufficient expertise decided to repair a minor leak in a key water delivery structure by dismantling it and replacing it with a different type of structure. The new structure was subsequently deemed inadequate by BIA's irrigation experts, and the required reconstruction delayed water delivery by about a month. In addition, at the Blackfeet Irrigation Project in 2000, the accidental flooding and subsequent erosion of a farmer's land were inadequately addressed: project and agency management decided to use a short-term solution over the objections of the regional irrigation engineer, who lacked the authority to override the technical decision of the project manager and agency superintendent, despite their lack of expertise. At the time of this report, the regional irrigation engineer continues to negotiate the implementation of a long-term and technically sound solution.
Furthermore, BIA lacks protocols to ensure that project managers consult with, or get input from, BIA's technical experts before implementing technically complex decisions about project operations and maintenance, further exacerbating problems and undermining management accountability. For example, in the 2002 incident at the Crow Irrigation Project discussed above, the project manager was not required to consult with, notify, or get approval from either the regional irrigation engineer or central irrigation office staff, despite his lack of expertise and the complexity of the flume replacement project he undertook. According to BIA officials, if the project manager had consulted an engineer, his plan to replace the flume with two small culverts would have been rejected before work began because it was technically insufficient and would not have been completed before the start of the approaching irrigation season.

A second serious management shortcoming is the extent to which some projects involve water users in decisionmaking. Federal regulations, as well as BIA guidance, call for involving project stakeholders—that is, tribal representatives as well as both Indian and non-Indian water users—in the operations and maintenance of each project. Specifically, federal regulations state that BIA is responsible for consulting with all water users in setting program priorities; BIA's manual requires that BIA provide regular opportunities for project water users to participate in project operations; and BIA's Irrigation Handbook recommends that BIA meet at least twice a year with project water users to discuss project budgets and desired work. Despite such requirements and recommendations, BIA has not consistently provided the opportunities or information necessary for water users to participate in such decisionmaking about project operations and maintenance.

The frequency of meetings between BIA and its project water users varied considerably on the nine projects we visited, from rarely (generally zero meetings per year), to periodically (generally more than one meeting per year), to regularly (generally more than three meetings per year), as shown in figure 9. For example, both the Blackfeet and Colorado River Irrigation Projects hold regular meetings with both tribal and individual water users, with meetings held quarterly at the Blackfeet Irrigation Project and monthly at the Colorado River Irrigation Project. In contrast, BIA officials on the Pine River Irrigation Project do not meet with any non-tribal water users, and BIA officials at the Fort Belknap Irrigation Project have held few water user meetings in recent years. There was no meeting with water users at the Fort Belknap Irrigation Project to kick off the 2005 irrigation season because the project manager position was vacant, worsening an already adversarial relationship between water users and BIA, according to water users and a local government official. Also, BIA officials on the Crow Irrigation Project have no regularly scheduled meetings with either the tribe or individual water users and, in fact, failed to send a single representative to a meeting called in 2005 for water users to voice their concerns about project management and operations.

In addition to a lack of regular meetings with all project water users, BIA has not consistently shared the type of information about project operations and finances that water users need to meaningfully participate in project decisionmaking.
Although BIA officials at the Colorado River Irrigation Project share information on their budgets with water users and work collaboratively with water users to develop annual work priorities in accordance with BIA's Irrigation Handbook, not all projects we visited provide or solicit this type of information. For example, BIA staff at the Wapato Irrigation Project do not solicit water users' input on project priorities or share information on the project's budget, according to water users we spoke with, and BIA officials at the Crow Irrigation Project do not share this type of critical information. However, some of the projects we visited have recently begun to share information on project spending and involve project water users in developing project priorities, despite not doing so historically. For example, the project management at the Blackfeet Irrigation Project began sharing budget information with its water users during the 2005 season, and the new project management at the Fort Belknap Irrigation Project stated that they plan on involving project water users in setting project priorities in the 2006 season.

Moreover, while water users on some of the projects we visited said that project managers and their staff are approachable and responsive on an individual basis, water users on other projects stated that project management was generally inaccessible and non-responsive. For example, BIA officials acknowledged that a former project manager at the Blackfeet Irrigation Project told water users to sue BIA to get information on project decisionmaking. In addition, some expressed concerns that BIA is less responsive to non-Indians because BIA's mission does not specifically include non-Indians. Consequently, some non-Indian water users have opted to go directly to their congressional representatives to raise their concerns. For example, non-Indian water users at the Wapato Irrigation Project have sought congressional intervention on several occasions to help compel BIA staff to disclose information about project finances, such as information related to proposed operations and maintenance fee debts and data on project land not being billed for operations and maintenance. In addition, Senator Conrad Burns and Congressman Dennis Rehberg of Montana co-sponsored a town hall meeting in 2003 to provide local water users an opportunity to voice project concerns to BIA officials. Requests by non-Indian water users for project management and regional staff to address the lack of water delivery at the Crow Irrigation Project during the month of August 2005 went largely unanswered by BIA, resulting in congressional intervention. Such lack of access and communication about project operations limits the ability of water users to have an impact on project decisions as well as the ability of BIA to benefit from this input.

The long-term direction of BIA's irrigation program depends on the resolution of several larger issues. Of most importance, BIA does not know the extent to which its irrigation projects are capable of financially sustaining themselves, which hinders its ability to address long-standing concerns regarding inadequate funding. The future of BIA's irrigation program also depends on the resolution of how the deferred maintenance will be funded. BIA currently has no plans for how it will obtain funding to fix the deferred maintenance items, and obtaining this funding presents a significant challenge in times of tight budgets and competing priorities.
Finally, it might be more appropriate for other entities, including other federal agencies, tribes, and water users, to manage some or all of the projects.

BIA does not know the extent to which Indian irrigation projects are capable of sustaining themselves. Reclamation law and associated policy require the Department of the Interior's Bureau of Reclamation to test the financial feasibility of proposed projects by comparing estimated reimbursable project costs with anticipated revenues. The Bureau of Reclamation then uses these reimbursable cost estimates to negotiate repayment contracts with water users, where appropriate. In contrast, Indian irrigation projects were authorized to support Indian populations residing on reservations without regard to whether the projects could be financially self-sustaining. As a result, neither the Congress nor project stakeholders have any assurance that these projects can sustain themselves. For example, a comprehensive 1930 study of BIA's irrigation program concluded that the Blackfeet and Fort Peck Irrigation Projects should be abandoned. Specifically, the report noted, "[A]fter a very careful study of all the available data relating to these projects, including a field examination, we are firmly convinced that any further attempts to rehabilitate and to operate and maintain these projects … can result only in increasing the loss that must be accepted and sustained by the Government. Adequate preliminary investigations and studies to which every proposed project should be subjected, in our opinion, would have condemned … these … projects as unfeasible."

Despite this lack of information on the overall financial situation for each of the projects, in the early 1960s BIA classified more than half of its 16 projects as fully self-supporting, on the basis of annual operations and maintenance fees they collected from water users. These self-supporting projects do not receive any ongoing appropriated funds. These projects are subject to full cost recovery despite the absence of financial information to demonstrate that the water users could sustain this financial burden. The Blackfeet and Fort Peck Irrigation Projects were two of the projects classified as fully self-supporting. While the specific financial situations for the Blackfeet and Fort Peck Irrigation Projects have likely changed since the 1920s, BIA does not know if these projects, or any of the other Indian irrigation projects, are financially self-supporting.

The heavy reliance on water users to sustain these projects has created ongoing tension between the water users and BIA. Some water users have complained to BIA that they cannot afford the operations and maintenance fees, and they pressure BIA to keep the fees as low as possible. The Bureau of Reclamation recently conducted a study of the Pine River Irrigation Project and concluded that some of the water users could not conduct a profitable farming operation with the 2005 operations and maintenance fee of $8.50 per acre. BIA has not responded to the Bureau of Reclamation study, and in October 2005 BIA proposed doubling the rate to $17.00 per acre for the 2006 irrigation season even though water users claim that they cannot afford to pay a higher fee. The operations and maintenance fee has been set at $8.50 at the Pine River Irrigation Project since 1992 and, according to BIA officials, the collections do not provide adequate funds to properly operate and maintain the project. A minimal illustration of the kind of revenue-versus-cost comparison involved in such feasibility assessments appears below.
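The sketch below makes the feasibility comparison concrete. It is a hypothetical illustration, not BIA or Bureau of Reclamation methodology: the acreage, annual cost, and collection-rate figures are invented for the example, and only the $8.50 and $17.00 per-acre fees come from the report's Pine River discussion.

```python
# Hypothetical sketch of a revenue-versus-cost feasibility comparison:
# does anticipated fee revenue cover estimated annual operations and
# maintenance costs? All inputs below are illustrative assumptions.

def annual_shortfall(assessed_acres: float, fee_per_acre: float,
                     annual_om_cost: float, collection_rate: float = 1.0) -> float:
    """Positive result means costs exceed anticipated revenue (a shortfall)."""
    revenue = assessed_acres * fee_per_acre * collection_rate
    return annual_om_cost - revenue

# Example: an assumed 12,000 assessed acres, an assumed $250,000 annual
# O&M cost, and 90 percent of billed fees actually collected.
for fee in (8.50, 17.00):  # the 1992 rate and the proposed 2006 rate
    gap = annual_shortfall(assessed_acres=12_000, fee_per_acre=fee,
                           annual_om_cost=250_000, collection_rate=0.9)
    print(f"fee ${fee:>5.2f}/acre -> shortfall ${gap:,.0f}")
# fee $ 8.50/acre -> shortfall $158,200
# fee $17.00/acre -> shortfall $66,400
```

Under these invented numbers, even doubling the fee leaves an annual gap, which is the pattern the report describes: shortfalls compound year after year into the deferred maintenance backlog.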
As a result of this persistent underfunding, BIA estimates that the deferred maintenance at the project has grown to over $20 million. Without definitive information on the financial situation of each project, BIA cannot determine what portion of project operations and maintenance costs can be reasonably borne by the water users and to what extent alternative sources of financing, such as congressional appropriations, should be pursued.

Despite the estimated $850 million in deferred maintenance and the degree to which it impedes ongoing operations and maintenance at BIA's irrigation projects, BIA currently has no plan for funding the list of deferred maintenance items. Funding deferred maintenance costs in the hundreds of millions of dollars will be a significant challenge in times of tight budgets and competing priorities. Nonetheless, officials stated that the agency has made little effort to identify options for funding the deferred maintenance. BIA acknowledges that income from ongoing operations and maintenance fees would likely be inadequate to cover the deferred maintenance, yet the agency has done little to identify alternative means of funding. According to officials, BIA has not asked the Congress for supplemental funding to cover the deferred maintenance. For example, water users report that the $7.5 million appropriated for BIA's irrigation projects for fiscal year 2006 resulted from lobbying by concerned water users, not from BIA's efforts. To date, BIA has primarily focused on developing and refining an accurate estimate of the cost to fix the deferred maintenance items. While developing an estimate of the projected cost is important, BIA officials believe that the agency also needs to develop a plan for ultimately funding the deferred maintenance.

Developing a plan for funding the deferred maintenance is hindered by competing priorities and a crisis-oriented management style that works against preventative maintenance, according to BIA officials. The current state of disrepair of most of the irrigation projects results in frequent emergency situations concerning project operations and maintenance. As a result, BIA irrigation staff spend a significant amount of their time addressing emergency maintenance situations, to the detriment of other maintenance needs that are essential to sustaining the projects over the long term. As a result of this "crisis-style" management, BIA has limited time to devote to non-emergency issues such as the list of deferred maintenance items. Furthermore, this "crisis-style" management prevents BIA from devoting adequate time to preventative maintenance. For example, irrigation staff at the Wind River Irrigation Project stated that making "band-aid" emergency repairs on a regular basis prevents them from addressing long-standing deferred maintenance needs, as well as from conducting strategic improvements that would help sustain the project over the long term.

It may be beneficial to consider whether other groups for whom irrigation is a priority or an area of expertise could better manage some of the irrigation projects, including other federal agencies, Indian tribes, and water users. BIA must balance its irrigation management responsibilities with its many other missions in support of Indian communities. As the federal agency charged with supporting Indian communities in the United States, BIA's responsibility is to administer and manage land and natural resources held in trust for Indians by the U.S. government.
Administration and management of these trust lands and resources involve a wide variety of responsibilities, including law enforcement, social services, economic development, education, and natural resource management. Given the multitude of responsibilities that BIA must balance, there are inherent limits on the resources and knowledge that BIA is able to devote to any one program. As a result of these limitations and competing demands, officials report that irrigation management is not a priority for BIA. The fact that many water users on the irrigation projects are now non-Indian may further encourage BIA to prioritize and devote more resources to other programs before irrigation management. Successful management of the irrigation projects by other groups would depend on the unique characteristics of each project and its water users. Potential groups that may be able to assume management of some irrigation projects, or portions of some projects, include the following:

- The Bureau of Reclamation. As the federal agency charged with managing water in the western United States, the Bureau of Reclamation has extensive technical experience in managing irrigation projects and has served in a technical or advisory capacity to BIA's irrigation staff. Furthermore, efforts have been made in the past to turn over some BIA irrigation projects to the Bureau of Reclamation, and the Fort Yuma Irrigation Project is currently operated by the Bureau of Reclamation. In addition, the Bureau of Reclamation uses management practices for its irrigation projects that maximize information sharing and collaboration with water users. For example, in contrast to BIA, the Bureau of Reclamation delegates responsibility for much of the day-to-day operations and maintenance on its irrigation projects to irrigation districts, which are organized groups of water users.
- Indian tribes. Officials report that some of the tribes have staff with extensive knowledge of irrigation and water management, as well as technical training. Some tribes stated that they have a vested interest in seeing their respective projects succeed and that they would like to assume direct responsibility for their reservation's irrigation project, assuming the deferred maintenance items are fixed before the turnover occurs. Turning over some of the BIA projects to Indian tribes would be an option where tribes have the management and technical capability to assume responsibility for an irrigation project.
- Water users. Water users have extensive familiarity with the day-to-day management of the projects and in some cases already handle many day-to-day operations and maintenance activities. For example, the Crowheart Water Users Association, a group of water users at the Wind River Irrigation Project, has successfully assumed responsibility for most of the maintenance needs on its portion of the project. In exchange for these efforts, BIA refunds to the association 50 percent of its annual operations and maintenance fees. Through this arrangement, the association believes it has been able to address maintenance needs more effectively and increase project efficiency. Turning over some of the BIA projects to water users would be an option where water users share similar interests and have positive working relationships, as well as the desire to organize an irrigation district or association.
Any successful alternative management option would have to consider the sometimes disparate interests and priorities among water users. In some cases, a combination of the various alternative management options may be beneficial and feasible. This type of arrangement is currently being considered for the Flathead Irrigation Project, where BIA is in the process of turning over the operation and management of the project to a collaborative management group that may include the tribe, individual Indian water users, and non-Indian water users. However, regardless of the alternative management option, water users and tribal officials repeatedly stated that they would not be willing or able to take over project operations and maintenance unless the deferred maintenance had already been addressed or adequate funding was available to address the deferred maintenance needs. Since BIA historically has not had adequate funds to operate and maintain the projects, the projects are in a serious state of disrepair. BIA is in the process of implementing its plan to develop an accurate list and estimate of the deferred maintenance needs for each project. However, some of the projects also have day-to-day management shortcomings regarding technical support and stakeholder involvement that need to be addressed. BIA's decentralized organizational structure, combined with the difficulty of attracting and retaining highly qualified project managers at remote Indian reservations, has led to poor decisionmaking at some of the projects. It is critically important that project managers, especially those with less than desirable qualifications, have the necessary level of technical support to prevent poor decisions from being made in the future. A lack of adequate stakeholder involvement at some projects has also seriously undermined project accountability. Unlike most other BIA programs, the operations and maintenance of the irrigation projects are funded almost entirely by the project beneficiaries—the water users, many of whom are non-Indian. Consequently, BIA is accountable to these water users, who expect to have an active voice in project operations and maintenance. Some projects have not fulfilled their obligations to regularly meet with project stakeholders, creating an adversarial environment in which BIA and project water users do not trust each other. This failure to involve stakeholders in the management of their own projects means that BIA does not benefit from water user expertise; it has also created widespread feelings that BIA is nonresponsive and evasive, alienating many water users who feel disenfranchised. Moreover, this failure has limited the ability of stakeholders to hold BIA accountable for its decisions and actions. In addition to these shortcomings in BIA's ongoing day-to-day management of some of the projects, we also found that information on the financial sustainability of the projects is needed to help address the long-term direction of BIA's irrigation program. BIA's 16 irrigation projects were generally built in the late 1800s and early 1900s to further the federal government's Indian policy of assimilation. The government made the decision to build these projects to support and encourage Indians to become farmers. This decision was generally not based on a thorough analysis designed to ensure that only cost-effective projects were built.
As a result, the financial sustainability of some of the projects has always been questionable, ultimately creating tension between BIA and its water users. BIA is under constant pressure to raise annual operations and maintenance fees to collect adequate funds to maintain the projects, while many water users contend that they do not have the ability to pay higher fees. Without a clear understanding of the financial sustainability of the projects, BIA does not know whether it is practical to raise operations and maintenance fees or whether alternative sources of financing should be pursued. Information on financial sustainability and accurate deferred maintenance information are the two critical pieces of information needed for a debate on the long-term direction of BIA's irrigation program. Once this information is available, the Congress and interested parties will be able to address how the deferred maintenance will be funded and whether entities other than BIA could more appropriately manage some or all of the projects. We recommend that the Secretary of the Interior take the following three actions. To improve the ongoing management of the projects in the short term, we recommend that the Secretary direct the Assistant Secretary for Indian Affairs to:

- provide the necessary level of technical support for project managers who have less than the desired level of engineering qualifications, by putting these projects under the direct supervision of regional or central irrigation office staff or by implementing more stringent protocols for engineer review and approval of actions taken at the projects; and
- require, at a minimum, that irrigation project management meet twice annually with all project stakeholders—once at the end of a season and once before the next season—to provide information on project operations, including budget plans and actual annual expenditures, and to obtain feedback and input.

To obtain information on the long-term financial sustainability of each of the projects, we recommend that the Secretary direct the Assistant Secretary for Indian Affairs to conduct studies to determine both how much it would cost to financially sustain each project and the extent to which water users on each project have the ability to pay these costs. This information will be useful to congressional decisionmakers and other interested parties in debating the long-term direction of BIA's irrigation program. We provided the Department of the Interior with a draft of this report for review and comment. However, no comments were provided in time to be included as part of this report. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior and the Assistant Secretary for Indian Affairs, as well as to appropriate congressional committees and other interested Members of Congress. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. If you or your staff have questions about this report, please contact me at (202) 512-3841 or [email protected]. Key contributors to this report are listed in appendix III.
We were asked to address several issues concerning the Department of the Interior's Bureau of Indian Affairs' (BIA) management of its 16 irrigation projects. Specifically, we were asked to examine (1) BIA's estimated deferred maintenance cost for its 16 irrigation projects; (2) what shortcomings, if any, exist in BIA's current management of its irrigation projects; and (3) any issues that need to be addressed to determine the long-term direction of BIA's irrigation program. For all three objectives, we collected documentation on BIA's 16 irrigation projects from officials in each of BIA's central Irrigation, Power, and Safety of Dams offices (central irrigation offices) located in Washington, D.C., and other locations in the western United States. We also visited and collected information from each of BIA's four regional offices that oversee the 16 irrigation projects, including the Rocky Mountain, Northwest, Western, and Southwest regions. In addition, we visited 9 of the 16 projects located across all 4 regions. Specifically, we visited: (1) the Blackfeet Irrigation Project, (2) the Colorado River Irrigation Project, (3) the Crow Irrigation Project, (4) the Fort Belknap Irrigation Project, (5) the Pine River Irrigation Project, (6) the San Carlos Indian Works Irrigation Project, (7) the San Carlos Joint Works Irrigation Project, (8) the Wapato Irrigation Project, and (9) the Wind River Irrigation Project. We selected these projects based on a combination of factors aimed at maximizing our total coverage (over 50 percent of the projects), visiting at least one project in each of the regions where irrigation projects are located, visiting the project with the highest deferred maintenance cost estimate in each region using BIA's fiscal year 2004 data, and visiting what BIA considered to be the three best projects and the five worst projects. During the site visits, we collected project-specific information from BIA officials and project stakeholders including tribes and water users. We also met with and collected documentation from the Department of the Interior's Bureau of Reclamation, the federal agency charged with managing water in the western United States, for comparative purposes. To examine BIA's estimated deferred maintenance cost for its 16 irrigation projects, we toured each of the 9 projects we visited to see examples of deferred maintenance and their impact, and we reviewed BIA's lists of deferred maintenance items and associated cost estimates for both fiscal years 2004 and 2005. We also reviewed the methodology BIA used to develop these lists and estimates and interviewed BIA staff involved in developing these lists and estimates to identify major deficiencies. Although we analyzed the cost estimates provided by BIA, we did not develop our own estimate of deferred maintenance. To assess the reliability of data we received from BIA on deferred maintenance, we interviewed officials most knowledgeable about the collection and management of these data. We reviewed the relevant controls and found them adequate. We also conducted tests of the reliability of the computerized data. On the basis of these interviews, tests, and reviews, we concluded that BIA's estimates of deferred maintenance were sufficiently reliable for the purposes of this report.
To examine what shortcomings, if any, exist in BIA's current management of its irrigation projects, we reviewed relevant federal regulations and agency guidance, and analyzed BIA-wide and project-specific management protocols and systems for the nine projects we visited. We also reviewed general guidance on internal control standards, including risk assessment, monitoring, and information and communication. We interviewed BIA officials from the central irrigation offices in Washington, D.C., Colorado, Oregon, Arizona, and Montana, and we interviewed BIA regional officials as well as agency and project officials associated with each of the nine projects we visited for information on key shortcomings in BIA's management of its irrigation projects. In addition, we interviewed a variety of project stakeholders—including tribal representatives, individual Indian water users, and non-Indian water users—at each of the nine projects we visited for information on key shortcomings in BIA's management. Finally, to examine any issues that need to be addressed to determine the long-term direction of BIA's irrigation program, we reviewed previous studies highlighting key issues affecting the future of BIA's irrigation program. This included reviewing previous studies conducted by GAO, the Department of the Interior's Office of Inspector General, and the Bureau of Reclamation, as well as other studies conducted at the request of the Congress. We also reviewed relevant federal regulations and agency guidance, as well as historical information relevant to BIA's management of the irrigation program, including budget information and agency memos. We then interviewed BIA officials from the central irrigation office, regional offices, and the nine projects we visited for information on the key challenges affecting the long-term direction of the program, and we interviewed project stakeholders—including tribal representatives and water users—at the nine projects we visited for information on the key issues affecting the future direction of BIA's irrigation program. We performed our work between March 2005 and February 2006 in accordance with generally accepted government auditing standards. This appendix contains brief profiles of the nine irrigation projects we visited. Each project profile begins with a short overview of basic facts about the project, followed by a set of bullet points describing the key operations and maintenance concerns and the key management concerns expressed to us by BIA officials, tribal officials, or water users during our site visits. The Blackfeet Irrigation Project was authorized for construction in 1907, but construction was never completed. It consists of 38,300 acres being assessed operations and maintenance fees (and 113,100 acres authorized for irrigation). The project is located in Browning, Montana, on the Blackfeet Indian Reservation of Montana, home of the Blackfeet Tribe. About 60 percent of the project's land is owned by either the tribe or individual tribal members, and about 40 percent is owned by non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $29,130,222. See figure 8 below for pictures of the Blackfeet Irrigation Project. Key concerns include the following:

- Fees are insufficient to cover the costs of project operations and maintenance.
- Weeds and overgrown vegetation are problematic and impair water flow.
- Deferring maintenance has led to bigger and more costly maintenance problems.
- Deferring maintenance decreases water efficiency and access to water.
- The project as built cannot meet the increased demand for water.
- Communication between BIA and the water users could be improved, for example, by enhancing transparency, increasing involvement, and meeting separately with the tribe.
- A lack of training and expertise undermines BIA's management of the project.
- Inadequate oversight within BIA exacerbates the problems associated with this lack of training and expertise.
- Project staff should report to managers with expertise in irrigation and/or engineering.
- BIA protocols are too vague about, for example, when project staff should consult with regional or central irrigation office engineers.
- BIA needs to be able to measure water in order to better manage water deliveries and identify critical problems.
- Irrigation is a low priority for BIA.

The Colorado River Irrigation Project was the first BIA irrigation project built, authorized for construction in 1867, but construction was never completed. It is now considered the best of BIA's 16 revenue-generating irrigation projects due, in part, to its innovative leadership and customer service attitude. The project has adopted a user fee system that measures water use, assesses water users based on their actual usage, and charges additional fees for using more water than an individual allotment. The project is located in Parker, Arizona, on the Colorado River Indian Reservation, home of the Colorado River Indian Tribes. The project, which has a 10-month-long irrigation season, consists of 79,350 assessed acres (and 107,588 acres authorized for irrigation) and is composed entirely of Indian land—land owned by the tribe or its members. BIA currently estimates the project's total deferred maintenance costs to be $134,758,664. See figure 9 for pictures of the Colorado River Irrigation Project. Key concerns include the following:

- Development leases may no longer be allowed, potentially resulting in irrigable land going unirrigated and costing the tribe and project potential revenues.
- Deteriorating irrigation structures need to be replaced.
- The canal needs new lining due to years of deterioration and, in some cases, poor construction.
- Moss and pondweed must be cleared regularly or the flow of water will be impaired.
- New irrigation structures are needed to regulate water flow where ditches converge.
- Understaffing and high turnover of project system operators adversely affect water deliveries; there are too few system operators to deliver water in a timely manner.
- BIA procurement and contracting is time-consuming and costly.
- The annual project budget may understate actual funding because it does not include possible additional fees.
- Operations and maintenance fees can be used only for operations and maintenance on the existing project, not to expand the project.

The Crow Irrigation Project was authorized for construction in 1890, but construction was never completed. It is one of the oldest of BIA's 16 revenue-generating irrigation projects, with 38,900 acres being assessed operations and maintenance fees (and 46,460 acres authorized for irrigation). The project is located in Crow Agency, Montana, on the Crow Reservation, home of the Crow Tribe of Montana. About 56 percent of the project land is owned by either the tribe or individual tribal members, and about 44 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $54,550,496. See figure 10 for pictures of the Crow Irrigation Project. Key concerns include the following:

- Fees are insufficient to cover the project's operations as well as its maintenance costs.
- Weeds, overgrown vegetation, tree roots, and garbage impair water flow in the canals and ditches.
- Crumbling or dilapidated irrigation structures impair water delivery.
- The repair of the Rotten Grass Flume needs further work.
- Canal erosion causes sinkholes and impairs water flow.
- Deferred maintenance of certain structures leads to safety concerns, such as when BIA staff must go into the canal to raise or lower broken check gates.
- The project's recently reassigned project manager was underqualified, resulting in some decisions that hurt the project and undermined water delivery, such as the Rotten Grass Flume incident.
- BIA has inadequate oversight of the project manager and his decisions.
- BIA relies on "crisis-style" management rather than a long-term plan to manage the project.
- A former project manager allegedly used fees inappropriately and was not held accountable for financial decisions.
- Communication has broken down between BIA and its water users.
- The project might be better managed if BIA turned over its management to the water users or the tribe.
- Irrigation is a low priority for BIA.

The Fort Belknap Irrigation Project was authorized for construction in 1895, but construction was never completed. It is one of the smallest of BIA's 16 revenue-generating irrigation projects, with 9,900 acres being assessed operations and maintenance fees (and 13,320 acres authorized for irrigation). The project is located in Harlem, Montana, on the Fort Belknap Reservation, home of the Fort Belknap Indian Community of the Fort Belknap Reservation of Montana. About 92 percent of the land is owned by either the tribe or individual tribal members, and about 8 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $17,535,494. See figure 11 for pictures of the Fort Belknap Irrigation Project. Key concerns include the following:

- Fees and appropriations are insufficient to cover the project's maintenance needs.
- Weeds and overgrowth of vegetation impair water flow.
- Canal erosion caused by cattle crossings impairs water flow.
- Deteriorated and leaking irrigation structures impair water delivery.
- Additional equipment is needed to conduct maintenance on the project.
- Deferred maintenance exacerbates the problems of poor farming land and low crop values.
- Communication is poor and relations are tense between BIA and water users.
- Staff turnover and difficulty finding qualified staff are problematic.
- Some project staff lack adequate expertise and training to manage the project.
- A lack of transparency and of a water management plan limits BIA accountability.
- Some water users want BIA to begin water delivery earlier in the season.

The Pine River Irrigation Project is the only one of BIA's 16 revenue-generating irrigation projects located in the Southwest region, with 11,855 acres being assessed operations and maintenance fees. Construction on the project was never completed. The project is located in Ignacio, Colorado, on the Southern Ute Reservation, home to the Southern Ute Indian Tribe of the Southern Ute Reservation, Colorado. About 85 percent of the land is owned by either the tribe or individual tribal members, and about 15 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $20,133,950. See figure 12 for pictures of the Pine River Irrigation Project. Key concerns include the following:

- Collections from operations and maintenance fees do not provide adequate funds to properly operate and maintain the project.
- The project's operations and maintenance fees have not been raised since 1992.
- BIA has proposed doubling the fees from $8.50 per acre to $17.00 per acre for the 2006 irrigation season.
- The project's cash reserves were depleted in 2004.
- The project has a number of old water delivery contracts from the 1930s, referred to as "carriage contracts," that are at low fixed rates; under some of these contracts the water users pay only $1.00 per acre to the project.
- The practice of subsidizing the project through other BIA programs, such as Natural Resources, Roads Construction, Roads Maintenance, and Realty, was scheduled to end at the end of fiscal year 2005. Alternative sources of funds must be found for the project manager and clerk positions.
- The project relies on "crisis-style" management only, with no preventive maintenance.
- Project staff does not formally meet with or provide information to individual water users.
- A Bureau of Reclamation study in 1999 found that some of the water users could not afford to pay fees of $8.50 to the project and operate a profitable farming operation. BIA has not responded to the study.
- The former project manager stated that the BIA irrigation projects should be turned over to the Bureau of Reclamation.

The San Carlos Indian Works Irrigation Project was authorized for construction in 1924, but construction was never completed. It is one of the newest of BIA's 16 revenue-generating irrigation projects, with 50,000 acres being assessed operations and maintenance fees (and 50,546 acres authorized for irrigation). The project, also referred to as Pima, is located in Sacaton, Arizona, on the Gila River Indian Reservation, home of the Gila River Indian Community. It is served both by its own infrastructure and by that of the San Carlos Joint Works Irrigation Project. About 99 percent of the project land is owned by either the tribe or individual tribal members, and about 1 percent is owned by individual non-Indians. BIA currently estimates Pima's total deferred maintenance costs to be $62,865,503. See figure 13 for pictures of the San Carlos Indian Works Irrigation Project. Key concerns include the following:

- Inefficiency in water delivery results in fewer water users being able to receive water, leading to idle acreage in some cases.
- Clearing tumbleweeds and other vegetation that can clog culverts is a recurring problem and represents a large part of the project's spending on operations and maintenance.
- Erosion is a continuing problem, in part because the canal is used for both water deliveries and drainage.
- BIA staff has a "wish list" of items that would bring the project into top condition, extending beyond the basic deferred maintenance.
- Project infrastructure may not have the capacity to deliver water to all potential water users.
- A planned 2007 turnover of the project to water users is still under way.
- Reserve funds are insufficient, meaning that project staff may not have enough money to conduct needed maintenance toward the end of the year.
- Vacancies are a constant problem at the project, leaving too few staff to conduct project maintenance.
- BIA is too slow to respond to water users' requests for repairs.

The San Carlos Joint Works Irrigation Project was authorized for construction in 1924, but construction was never completed. It provides water to non-Indian irrigators as well as to the San Carlos Indian Works Irrigation Project.
It consists of 100,000 acres being assessed operations and maintenance fees (and 100,546 acres authorized for irrigation), with 50 percent of the land owned by non-Indian irrigators and 50 percent owned by Indian irrigators (in the form of the San Carlos Indian Works Irrigation Project). The project is located in Coolidge, Arizona. BIA currently estimates Coolidge's total deferred maintenance costs to be $5,775,427. See figure 14 for pictures of the San Carlos Joint Works Irrigation Project. Key concerns include the following:

- A lack of certainty about BIA's ability to deliver requested water to all water users has led some to purchase additional water from outside the project.
- Silt removal from irrigation canals and ditches is a recurring problem, leading BIA to purposefully over-excavate the main canal each year in an attempt to catch excess silt that could otherwise clog culverts and impair water delivery.
- Repair of the China Wash Flume is an expensive undertaking, but the flume's failure could jeopardize water deliveries for much of the project.
- Removal of weeds to prevent clogged culverts is a recurring problem for the project.
- The planned 2007 turnover to water users is under way but not finalized.
- A lawsuit against BIA's increase in operations and maintenance fees has resulted in some water delivery delays while the lawsuit is pending.
- Contracting delays within BIA have resulted in postponed project maintenance.
- Turnover of BIA staff and a lack of water user inclusion in project decisionmaking impede effective communication.
- BIA lacks accountability to water users in terms of how it spends operations and maintenance fees.

The Wapato Irrigation Project is one of the oldest and largest of BIA's 16 revenue-generating irrigation projects, with 96,443 acres being assessed operations and maintenance fees (and 145,000 acres authorized for irrigation). It was authorized for construction in 1904, but construction was never completed. The project is located in Yakima, Washington, on the Yakama Reservation, home of the Confederated Tribes and Bands of the Yakama Nation. About 60 percent of the project land is owned by either the tribe or individual tribal members, and about 40 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $183,128,886. See figure 15 for pictures of the Wapato Irrigation Project. Key concerns include the following:

- Deterioration of the project prevents some water users from receiving water.
- The lack of regular project maintenance has led many water users to make repairs on their own in order to irrigate crops.
- Water users claim that project staff performs inadequate or faulty repairs, resulting in wasted operations and maintenance payments or in water users having to fix the sloppy repairs themselves.
- Fees are insufficient because (a) rates have been set too low and (b) the tribe's appeal of BIA's operations and maintenance bills since 2001 has decreased income by at least $2 million annually, because the agency will not collect on these bills or issue subsequent bills until the matters raised in the appeal are resolved.
- Fees are insufficient to cover both maintenance and administrative costs, such as salaries and benefits, leading to suggestions that BIA cover such costs.
- Understaffing, due to inadequate funds and difficulty in finding qualified staff, has resulted in too few staff to operate and maintain the project.
- BIA relies on "crisis-style" management to manage the project, resulting in a lack of planning and preventive maintenance.
- Water users lack a voice in project decisionmaking, raising concerns about the limited accountability of project staff to water users.
- Alleged errors with operations and maintenance billing—such as BIA billing dead landowners and overbilling living landowners—led the tribe and its members to appeal BIA's billing of operations and maintenance fees. Resolution of these appeals is still pending within the agency, and BIA will not collect on these bills or issue subsequent bills until the matters raised in the appeal are resolved.

The Wind River Irrigation Project was authorized for construction in 1905, but construction was never completed. It is one of BIA's 16 revenue-generating irrigation projects, with 38,300 acres being assessed operations and maintenance fees (and 51,000 acres authorized for irrigation). The project is located in Fort Washakie, Wyoming, on the Wind River Reservation, home of the Arapaho Tribe of the Wind River Reservation and the Shoshone Tribe of the Wind River Reservation. About 67 percent of the project land is owned by either the tribe or individual tribal members, and about 33 percent is owned by individual non-Indians. BIA currently estimates the project's total deferred maintenance costs to be $84,956,546. See figure 16 for pictures of the Wind River Irrigation Project. Key concerns include the following:

- Weeds and tree roots impair water flow and lead to seepage.
- Cattle crossings erode canal banks and impair water flow.
- Deteriorating irrigation infrastructure impairs water delivery.
- Additional water storage and improved efficiency are needed to meet the demand for water.
- Deferring maintenance undermines the long-term sustainability of the project.
- BIA financial management may limit the ability of project staff to conduct needed maintenance in the short maintenance season.
- BIA relies on "crisis-style" management and "band-aid" solutions rather than a long-term plan to manage the project.
- Communication between BIA and water users is poor.
- Water users are not involved enough in project decisionmaking.
- Supervision of project staff is insufficient, and BIA is not accountable to water users.
- Turnover of BIA staff is problematic.
- Some water users want to manage all or part of the project.

In addition to those individuals named above, Jeffery D. Malcolm, Assistant Director, Tama R. Weinberg, Rebecca A. Sandulli, and David A. Noguera made key contributions to this report. Also contributing to the report were Richard P. Johnson, Nancy L. Crothers, Stanley J. Kostyla, Kim M. Raheb, and Jena Y. Sinkfield. | The Department of the Interior's Bureau of Indian Affairs (BIA) manages 16 irrigation projects on Indian reservations in the western United States. These projects, which were generally constructed in the late 1800s and early 1900s, include water storage facilities and delivery structures for agricultural purposes. Serious concerns have arisen about their maintenance and management. GAO was asked to examine (1) BIA's estimated deferred maintenance cost for its 16 irrigation projects, (2) what shortcomings, if any, exist in BIA's current management of its irrigation projects, and (3) any issues that need to be addressed to determine the long-term direction of BIA's irrigation program. BIA estimated the cost for deferred maintenance at its 16 irrigation projects at about $850 million for 2005, although the agency is in the midst of refining this estimate. BIA acknowledges that this estimate is a work in progress, in part, because some projects incorrectly counted new construction items as deferred maintenance.
To further refine its estimate, BIA plans to hire engineering and irrigation experts to conduct thorough condition assessments of all 16 irrigation projects to correctly identify deferred maintenance needs and costs. BIA's management of some of its irrigation projects has serious shortcomings that undermine effective decisionmaking about project operations and maintenance. First, under BIA's organizational structure, officials with the authority to oversee irrigation project managers generally lack the technical expertise needed to do so effectively, while the staff who have the expertise lack the necessary authority. Second, despite federal regulations that require BIA to consult with project stakeholders in setting project priorities, BIA has not consistently provided project stakeholders with the necessary information or opportunities to participate in project decisionmaking. The long-term direction of BIA's irrigation program depends on the resolution of several larger issues. Most important, BIA does not know to what extent its irrigation projects are capable of financially sustaining themselves, which hinders its ability to address long-standing concerns regarding inadequate funding. Information on financial sustainability and accurate deferred maintenance information are the two critical pieces of information needed for a debate on the long-term direction of BIA's irrigation program. Once this information is available, the Congress and interested parties will be able to address how the deferred maintenance will be funded and whether entities other than BIA could more appropriately manage some or all of the projects. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Depository institutions—banks, thrifts, and credit unions—have attained a unique and central role in U.S. financial markets through their deposit-taking, lending, and other activities. Individuals have traditionally placed a substantial amount of their savings in federally insured depository institutions. In addition, the ability to accept deposits transferable by checks and other means has allowed depository institutions to become principal agents or middlemen in many financial transactions and in the nation's payment system. Depository institutions typically offer a variety of savings and checking accounts, such as ordinary savings, certificates of deposit, interest-bearing checking, and noninterest-bearing checking accounts. Also, the same institutions may offer credit cards, home equity lines of credit, real estate mortgage loans, mutual funds, and other financial products. In the United States, regulation of depository institutions depends on the type of charter the institution chooses. The various types of charters can be obtained at the state or national level and cover: (1) commercial banks, which originally focused on the banking needs of businesses but over time broadened their services; (2) thrifts, which include savings banks, savings associations, and savings and loans and which were originally created to serve the needs—particularly the mortgage needs—of those not served by commercial banks; and (3) credit unions, which are member-owned cooperatives run by member-elected boards with a historic emphasis on serving people of modest means. All depository institutions have a primary federal regulator if their deposits are federally insured. State regulators participate in the regulation of institutions with state charters. Specifically, the five federal banking regulators charter and oversee the following types of depository institutions:

- OCC charters and supervises national banks. As of December 30, 2006, there were 1,715 commercial banks with national bank charters. These banks held the dominant share of bank assets, about $6.8 trillion.
- The Federal Reserve serves as the regulator for state-chartered banks that opt to be members of the Federal Reserve System and as the primary federal regulator of bank holding companies, including financial holding companies. As of December 30, 2006, the Federal Reserve supervised 902 state member banks with total assets of $1.4 trillion.
- FDIC supervises all other state-chartered commercial banks with federally insured deposits, as well as federally insured state savings banks. As of December 30, 2006, there were 4,785 state-chartered banks and 435 state-chartered savings banks with $1.8 trillion and $306 billion in total assets, respectively. In addition, FDIC has backup examination authority for federally insured banks and savings institutions of which it is not the primary regulator.
- OTS charters and supervises federally chartered savings associations and serves as the primary federal regulator for state-chartered savings associations and their holding companies. As of December 30, 2006, OTS supervised 761 federally chartered and 84 state-chartered thrifts with combined assets of $1.4 trillion.
- NCUA charters, supervises, and insures federally chartered credit unions and is the primary federal regulator for federally insured state-chartered credit unions. As of December 30, 2006, NCUA supervised 5,189 federally chartered credit unions and insured 3,173 state-chartered credit unions with combined assets of $710 billion.
These federal regulators conduct on-site examinations and off-site monitoring to assess institutions' financial condition and compliance with federal banking and consumer laws. Additionally, as part of their oversight, the regulators issue regulations, take enforcement actions, and close failed institutions. Regulation DD, which implements TISA, became effective with mandatory compliance in June 1993. The purpose of the act and its implementing regulations is to enable consumers to make informed decisions about their accounts at depository institutions through the use of uniform disclosure documents. These disclosure documents are intended to help consumers "comparison shop" by providing information about fees, annual percentage yields, interest rates, and other terms for deposit accounts. The regulation is supplemented by "staff commentary," which contains official Federal Reserve staff interpretations of Regulation DD. Since the initial implementation date for Regulation DD, several amendments have been made to the regulation and the corresponding staff commentary. For example, the Federal Reserve made changes to Regulation DD, effective July 1, 2006, to address concerns about the uniformity and adequacy of information provided to consumers when they overdraw their deposit accounts. Credit unions are governed by a substantially similar regulation issued by NCUA. Regulation E, which implements the Electronic Fund Transfer Act, became effective in May 1980. The primary objective of the act and Regulation E is the protection of individual consumers engaging in electronic fund transfers (EFT). Regulation E provides a basic framework that establishes the rights, liabilities, and responsibilities of participants in electronic fund transfer systems such as ATM transfers, telephone bill-payment services, point-of-sale terminal transfers in stores, and preauthorized transfers from or to consumers' bank accounts (such as direct deposit and Social Security payments). The term "electronic fund transfer" generally refers to a transaction initiated through an electronic terminal, telephone, computer, or magnetic tape that instructs a financial institution either to credit or to debit a consumer's asset account. Regulation E requires financial institutions to provide consumers with initial disclosures of the terms and conditions of EFT services. The regulation allows financial institutions to combine the disclosure information required by the regulation with that required by other laws such as TISA as long as the information is clear and understandable and is available in a written form that consumers can keep. Paying or honoring customers' occasional or inadvertent overdrafts of their demand deposit accounts has long been an established practice at depository institutions. As shown in figure 1, depository institutions have four options when a customer attempts to withdraw or access funds from an account that does not have enough money in it to cover the transaction, and fees can be assessed for each of these options. The institution can (1) cover the amount of the overdraft by tapping a linked account (savings, money market, or credit card) established by the customer; (2) charge the overdraft to a linked line of credit; (3) approve the transaction (if electronic) or honor the customer's check by providing an ad hoc or "courtesy" overdraft; or (4) deny the transaction or decline to honor the customer's check.
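To make these four options concrete, the following sketch walks a single debit through them in order. It is a minimal illustration, not any institution's actual decision logic: the function name, parameter names, default fee amounts, courtesy limit, and the fixed ordering of the options are all assumptions made for the example.

```python
# Illustrative sketch of the four ways an institution may handle a debit
# that exceeds the available checking balance. All names, amounts, and
# the fixed order of the checks are hypothetical; actual policies vary.

def handle_debit(amount, checking, linked_balance=None, credit_line=None,
                 courtesy_limit=0.0, transfer_fee=0.0,
                 overdraft_fee=25.00, insufficient_funds_fee=25.00):
    """Return (paid, fee, method) for a debit of `amount`."""
    shortfall = amount - checking
    if shortfall <= 0:
        return True, 0.0, "paid from checking"
    # Option 1: cover the shortfall from a linked savings, money market,
    # or credit card account established by the customer.
    if linked_balance is not None and linked_balance >= shortfall:
        return True, transfer_fee, "transfer from linked account"
    # Option 2: charge the shortfall to a linked line of credit.
    if credit_line is not None and credit_line >= shortfall:
        return True, transfer_fee, "advance on line of credit"
    # Option 3: honor the item as an ad hoc "courtesy" overdraft.
    if shortfall <= courtesy_limit:
        return True, overdraft_fee, "courtesy overdraft"
    # Option 4: deny the transaction or return the check unpaid.
    return False, insufficient_funds_fee, "denied for insufficient funds"

print(handle_debit(100.00, checking=60.00, courtesy_limit=300.00))
# (True, 25.0, 'courtesy overdraft')
```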
The first two options require that customers have created and linked to the primary checking account one or more other accounts or a line of credit in order to avoid overdrafts. The depository institution typically waives fees or may charge a small fee for transferring money into the primary account (a transfer fee). Depository institutions typically charge the same amount for a courtesy overdraft (an overdraft fee) as they do for denying a transaction for insufficient funds (an insufficient funds fee). In addition to fees associated with insufficient funds transactions, institutions may charge a number of other fees for checking and savings account services and transactions. As shown in table 1, these fees include periodic service charges associated with these accounts and special service fees assessed on a per-transaction basis. Our analysis of data from private vendors showed that a number of bank fees—notably charges for insufficient funds and overdraft transactions—have generally increased since 2000, while others have decreased. In general, banks and thrifts charged higher fees than credit unions for checking and savings account services, and larger institutions charged more than smaller institutions. During this same period, the portion of depository institutions' revenues derived from noninterest sources—including, but not limited to, fees on savings and checking accounts—increased somewhat. Changes in both consumer behavior and the practices of depository institutions are likely influencing trends in fees, but limited data exist to demonstrate the effect of specific factors. FDIC is currently conducting a special study of overdraft programs that should provide important insights into how these programs operate, as well as information on the characteristics of customers who pay overdraft bank fees. Data we obtained from vendors—based on annual surveys of hundreds of banks, thrifts, and credit unions on selected banking fees—indicated that some checking and savings account fee amounts generally increased between 2000 and 2007, while a few fell, notably monthly maintenance fees. For example, as shown in figure 2, average insufficient funds and overdraft fees have increased by about 11 percent, stop payment order fees by 17 percent, and return deposited item fees by 49 percent since 2000. Across all institutions, insufficient funds and overdraft fees were, on average, the highest dollar amounts of the fees reported. For example, the average insufficient funds fee among the institutions surveyed by Moebs $ervices in 2006 was $24.02, while among the institutions surveyed by Informa Research Services it was $26.07. Data from Informa Research Services also indicated that since 2004 a small number of institutions (mainly large banks) have been applying tiered fees to certain transactions, such as overdrafts. For example, an institution may charge one amount for the first three overdrafts in a year (tier 1), a higher rate for overdrafts four to six of that year (tier 2), and an even higher rate for overdrafts seven and beyond in a single year (tier 3). Of the institutions that applied tiered fees in 2006, the average overdraft fees were $26.74, $32.53, and $34.74 for tiers 1, 2, and 3, respectively (a schedule of this form is sketched below). The data from these vendors also indicate that fee amounts for some transactions or services varied or generally declined during this period.
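A tiered schedule of this kind amounts to a lookup on the running count of a customer's overdrafts for the year. The sketch below is a minimal illustration that uses the 2006 survey averages as the tier amounts and the breakpoints from the example above; the function name and the assumption that the count resets each calendar year are ours, not the vendors'.

```python
# Sketch of a tiered overdraft fee schedule. Tier amounts are the 2006
# survey averages cited above; breakpoints follow the example in the
# text (overdrafts 1-3, 4-6, and 7 or more in a year).

TIER_FEES = (26.74, 32.53, 34.74)

def tiered_overdraft_fee(n):
    """Fee charged for a customer's nth overdraft of the year (n >= 1)."""
    if n <= 3:
        return TIER_FEES[0]
    if n <= 6:
        return TIER_FEES[1]
    return TIER_FEES[2]

# Total fees for a customer with eight overdrafts in one year:
total = sum(tiered_overdraft_fee(n) for n in range(1, 9))
print(f"${total:.2f}")  # 3 * 26.74 + 3 * 32.53 + 2 * 34.74 = $247.29
```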
Examples of fees that varied or generally declined include the following:

- The average ATM surcharge fee (assessed by a depository institution when its ATM is used by a nonaccount holder) among institutions surveyed by Moebs $ervices was $0.95 in 2000, rising to $1.41 in 2003 and declining to $1.34 in 2006. This variability was also evident in the fees charged by institutions surveyed by Informa Research Services.
- The average foreign ATM fee (assessed by a depository institution when its account holders use another institution's ATM) generally declined, from $0.92 in 2000 to $0.61 in 2006 among institutions surveyed by Moebs $ervices and from $1.83 to $1.14 over the same period among institutions surveyed by Informa Research Services.
- The average monthly maintenance fee on standard noninterest-bearing checking accounts decreased from $6.81 in 2000 to $5.41 in 2006 among institutions surveyed by Informa Research Services (Moebs $ervices did not provide data on this fee).

Additionally, an increasing number of the surveyed institutions offered free checking accounts (with a minimum balance required to open the account) over this period. For example, in 2001 almost 30 percent of the institutions offered free checking accounts, while in 2006 the figure grew to about 60 percent. Finally, some fees declined in amount as well as in prevalence. For example, Moebs $ervices reported that the institutions it surveyed charged annual ATM fees, generally for issuing a card to customers for their use strictly at ATMs, ranging from an average of $1.37 in 2000 to $1.14 in 2003. However, Moebs $ervices stopped collecting data on this fee because, according to a Moebs official, fewer and fewer institutions reported charging it. Similarly, Moebs $ervices reported that the institutions it surveyed charged an annual debit card fee, generally for issuing a card to customers for their use at ATMs, averaging from $0.94 in 2000 to $1.00 in 2003, but it stopped collecting these data as well. (Informa Research Services reported data on these fees through 2006, when they averaged $0.44 and $0.74, respectively.) Appendix III contains further details on the data reported by Moebs $ervices and Informa Research Services, in both nominal and real dollars. A number of factors may explain why some fees increased while others decreased. For example, greater use of automation and the lower cost of technology may explain why certain ATM fees have decreased or been eliminated altogether. Additionally, competition among depository institutions for customers likely has contributed to the decrease in monthly maintenance fees and the increased prevalence of "free checking" accounts. Factors that may be influencing trends in fees overall are discussed later in this report. Using data supplied by the two vendors, we compared the fees for checking and savings accounts by type of institution and found that, on average, banks and thrifts charged more than credit unions for almost all of them (the exception was the fee for returns of deposited items). For example, banks and thrifts charged on average roughly $3.00 more than credit unions for insufficient funds and overdraft fees throughout the period, while credit unions charged on average almost $6.00 more than banks and thrifts for returns of deposited items. The amounts institutions charged for certain transactions also varied by the institution's size, as measured by assets.
Large institutions—those with more than $1 billion in assets—on average charged more for the majority of fees than midsized or small institutions—those with assets of $100 million to $1 billion and less than $100 million, respectively. Large institutions on average charged between $4.00 and $5.00 more for insufficient funds and overdraft fees than smaller institutions. Further, on average, large banks and thrifts consistently charged the highest insufficient funds and overdraft fees, while small credit unions consistently charged the lowest. Specifically, in 2007 large banks and thrifts charged an average of about $28.00 for insufficient funds and overdraft fees, while small credit unions charged an average of around $22.00. While large institutions generally had higher fees than smaller institutions, smaller institutions charged considerably more for returns of deposited items. The results of our analysis are consistent with the Federal Reserve's 2003 report on bank fees, which showed that large institutions charged more than medium- and small-sized institutions (banks and thrifts combined) for most fees. Our analysis of Informa Research Services data also showed that, controlling for both institution type and size, institutions in some regions of the country on average charged more for some fees, such as insufficient funds and overdraft fees, than institutions in others. For example, in 2006 the average overdraft fee in the southern region was $28.18, compared with a national average of $26.74 and a western region average of $24.94. Between 2000 and 2006, the portion of depository institutions' income from noninterest sources, including income generated from bank fees, varied but generally increased. As shown in figure 3, banks' and thrifts' noninterest income rose from 24 to 27 percent of total income between 2000 and 2006 (peaking at 33 percent in 2004), and credit unions' noninterest income rose from 11 to 14 percent (peaking at 20 percent in 2004). The percentage of noninterest income appeared to have an inverse relationship to changes in the federal funds rate—the interest rate at which depository institutions lend balances at the Federal Reserve to other depository institutions—which is an indicator of interest rate changes during the period. Low interest rates combined with increased competition from other lenders can make it difficult for banking institutions to generate revenues from interest rate "spreads," or differences between the interest rates that can be charged for loans and the rates paid to depositors and other sources of funds. However, noninterest income includes revenue derived from a number of fee-based banking services, not all of them associated with checking and savings accounts. For example, fees from credit cards, as well as fees from mutual fund sales commissions, are included in noninterest income. Thus, noninterest income cannot be used to specifically identify either the extent of fee revenue being generated or the portion that is attributable to any specific fee. Among other financial information, banks and thrifts are required to report data on service charges on deposit accounts (SCDA), which includes most of the fees associated with checking and deposit accounts. Specifically, SCDA includes, among other things, account maintenance fees, charges for failing to maintain a minimum balance, some ATM fees, insufficient funds fees, and charges for stop payment orders.
As figure 4 shows, banks' and thrifts' SCDA and, to a somewhat greater extent, credit unions' fee income as a percentage of total income increased overall during the period, with a slight decline in recent years. However, it should be noted that credit union fee income includes income generated from both deposit accounts and other products that credit unions offer, such as fees for credit cards and noncustomer use of proprietary ATMs; thus, the percentage of fee income they report is not directly comparable to the service charges reported by banks and thrifts. Because institutions do not have to report SCDA by line item, it is difficult to estimate the extent to which specific fees on checking and deposit accounts contributed to institutions' revenues or how these contributions have changed over the years. Further, some fees that banking customers incur may not be covered by SCDA. For example, institutions report monthly account maintenance fee income as SCDA, but not income earned from fees charged to noncustomers, such as fees for the use of their proprietary ATMs. Similarly, credit unions' reported fee income cannot be used to identify fee revenues from specific checking and savings account fees. Since the mid-1990s, consumers have increasingly used electronic forms of payment such as debit cards for many transactions, from retail purchases to bill payment. By 2006 more than two-thirds of all U.S. noncash payments were made electronically (including by credit cards, debit cards, automated clearing house transactions, and electronic benefit transfers), while the number of paper payments (e.g., checks) had decreased, owing in part to the rapid growth in the use of debit cards. Generally, these electronic payments are processed more quickly than traditional paper checks. For example, debit card transactions result in funds leaving customers' checking accounts during or shortly after the transaction, as opposed to checks, which may not be debited from a customer's account for a few days (although depository institutions have also begun to process checks faster, in part as a result of the Check Clearing for the 21st Century Act (Check 21 Act) and implementing regulations, which became effective in late 2004). Despite this overall shortening of the time or "float" between the payment transaction and the debiting of funds from a consumer's account, depository institutions can hold certain nonlocal checks deposited by a consumer for up to 11 days. According to consumer groups and bank representatives, this creates the potential for increased incidences of overdrafts if funds are debited from a consumer's account faster than deposits are made available for withdrawal. The shift in consumer payment preferences has occurred rather quickly, and we identified little research on the extent to which the increased use of electronic payments, such as debit cards, has affected the prevalence of specific deposit account fees, such as overdraft or insufficient funds fees. Additionally, some institutions have internal policies for posting deposits to and withdrawals from customer accounts that can affect the incidence of fees. For example, consumer group representatives, bank representatives, and federal regulatory officials told us that many institutions process the largest (highest dollar amount) debit transaction before the smallest, regardless of the order in which the customer initiated the transactions. This practice can affect the number of overdraft fees charged to a customer.
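A minimal simulation of this effect appears below. It posts the same four debits in two different orders and counts the items paid into overdraft, using the same figures as the worked example that follows; the function and the simplifying rule that every item posted while the balance is negative draws one fee are assumptions for illustration.

```python
# Sketch: the same four debits posted against a $600 balance in two
# orders. Each item that overdraws the account is assumed to draw one
# overdraft fee; that rule is a simplification for illustration.

def count_overdrafts(balance, debits):
    """Post debits in the given order; count items paid into overdraft."""
    overdrafts = 0
    for amount in debits:
        balance -= amount
        if balance < 0:
            overdrafts += 1
    return overdrafts

debits = [590.00, 25.00, 25.00, 25.00]

print(count_overdrafts(600.00, sorted(debits, reverse=True)))  # 3 fees
print(count_overdrafts(600.00, sorted(debits)))                # 1 fee
```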
Banking officials said that processing transactions from largest to smallest ensures that consumers' larger, and presumably more important, payments, such as mortgage payments, are made. One of the federal banking regulators, OTS, issued guidance in 2005 stating that institutions it regulates should not manipulate transaction clearing steps (including check clearing and batch debit processing) to inflate fees. We were unable to identify comprehensive information regarding the extent to which institutions were using this or other methods (chronological, smallest to largest, etc.) of processing payments.

Further, some depository institutions have automated the process used to approve overdrafts and have increasingly marketed the availability of overdraft protection programs to their customers. Historically, depository institutions have used their discretion to pay overdrafts for consumers, usually imposing a fee. Over the years, to reduce the costs of reviewing individual items, some institutions have established policies and automated the process for deciding whether to honor overdrafts, but institutions generally are not required to inform customers about the internal policies used to determine whether an item will be honored or denied. In addition, third-party vendors have developed and sold automated programs to institutions, particularly smaller institutions, to handle overdrafts. According to the Federal Reserve, what distinguishes the vendor programs from in-house automated processes is the addition of marketing plans that appear designed to (1) promote the generation of fee income by disclosing to account holders the dollar amount by which they typically will be allowed to overdraw their accounts and (2) encourage consumers to use the service to meet short-term borrowing needs. An FDIC official noted that some vendor contracts tied the vendor's compensation to an increase in the depository institution's fee revenues. We were unable to identify information on the extent to which institutions were using automated overdraft programs developed and sold by third-party vendors or the criteria that these programs used. Representatives from a few large depository institutions told us that they are using software programs developed in-house to determine which account holders will have overdrafts approved. According to consumer groups and federal banking regulators, software vendors appear to be marketing automated overdraft programs primarily to small and midsized institutions. The 2005 interagency guidance on overdraft protection programs encouraged depository institutions to disclose to consumers how transactions would be processed and how fees would be assessed. An FDIC official noted that, while no empirical data are available, institutions' advertising of overdraft protection programs appears to have diminished since publication of the interagency guidance. Because overdraft and insufficient funds fees may be more likely to occur in accounts with lower balances, there is some concern that they may fall disproportionately on consumers who traditionally have the least financial means, such as young adults and low- and moderate-income households.
We were not able to analyze the demographic characteristics of customers who incur bank fees because doing so would require transaction-level data for all account holders—data that are not publicly available. We identified only two studies—one by an academic researcher and one by a consumer group—that discussed the characteristics of consumers who pay bank fees. Neither study used a sample of customers who overdraw that was representative of the U.S. population. According to the academic researcher's study, which used transaction-level account data for one small Midwest bank, overdrafts were not significantly correlated with consumers' income levels, although younger consumers were more likely to have overdrafts than consumers of other ages. However, the results of this study cannot be generalized to the larger population because the small institution used was not statistically representative of all depository institutions. The consumer group study, which relied on a survey in which individuals with bank accounts were interviewed, found that bank customers who had had two or more overdrafts in the 6 months before the date of the interview were more often low-income, single, and nonwhite. However, this study also had limitations, including the inherent difficulty of contacting and obtaining cooperation from a representative sample of U.S. households with a telephone survey and its reliance on consumers' recall of, and willingness to accurately report, past events rather than on actual reviews of their transactions. While we cannot fully assess the quality of the results from these two studies, we note them here to illustrate the lack of definitive research in this area.

Partly in response to consumer concerns raised by overdraft protection products, FDIC is currently conducting a two-part study of overdraft protection products offered by the institutions it supervises. The results of this study may provide information on the types of consumers who pay bank fees. For both parts, FDIC is collecting data that are not currently available in the call reports or other standard regulatory reports. During the first phase of its study, FDIC collected data from 500 state-chartered nonmember banks about their overdraft products and policies. Data from the first phase will reveal how many FDIC-regulated banks offer overdraft protection programs and the details of these programs, such as how many of them are automated. FDIC expects to complete the data collection effort at the end of 2007. The second phase involves collecting a year of transaction-level data on the depositors who use the overdraft products at 100 of the 500 institutions. As part of this phase, FDIC plans to use U.S. Census Bureau tract-level income data as a proxy for account holders' income to try to determine the characteristics of consumers who incur overdraft fees. FDIC expects to complete the analysis at the end of 2008.

Federal regulators assess depository institutions' compliance with the disclosure requirements of Regulations DD and E during examinations by reviewing an institution's written policies and procedures, including a sample of disclosure documents. In general, regulators do not review the reasonableness of such fees unless there are safety and soundness concerns.
Since 2005, NCUA has included examination procedures specifically addressing institutions' adherence to the 2005 interagency guidance concerning overdraft protection products and, in September 2007, all of the regulators revised their Regulation DD examination procedures to include reviews of the disclosures associated with such products offered by institutions that advertise them. In general, examinations are risk-based—that is, targeted to address factors that pose risks to the institution—and, to help focus their examinations of individual institutions, the regulators review consumer complaints. Our analysis of complaint data from each of the federal regulators showed that, while the regulators received a large number of checking account complaints, a small percentage of these complaints concerned the fees and disclosures associated with either checking or savings accounts. The federal regulators reported identifying a number of violations of the disclosure sections of Regulations DD and E during their examinations but collectively identified only two related formal enforcement actions from 2002 through 2006. Finally, officials from the six state regulators told us that, while they may look at compliance with Regulations DD and E, their primary focus is on safety and soundness issues and compliance with state laws and regulations, and they reported receiving few consumer complaints associated with checking and savings account fees and disclosure issues.

Our review of the examination handbooks and examination reports indicated that the five federal regulators used similar procedures to assess compliance with Regulations DD and E (as discussed below, NCUA also includes steps to assess credit unions' adherence to the 2005 interagency guidance on overdraft protection products, but that is distinct from assessing compliance with regulatory requirements). In general, the Regulation DD and E compliance examination procedures for each of the five federal banking regulators called for examiners to verify that the institution had policies or procedures in place to ensure compliance with all provisions of the regulations; to review a sample of account disclosure documents and notices required by the regulations to determine whether their contents were accurate and complete; and to review a sample of the institution's advertisements to (1) determine whether the advertisements were misleading, inaccurate, or misrepresented the deposit contract and (2) ensure that the advertisements included all required disclosures.

Federal regulators' examination procedures for Regulations DD and E do not require examiners to evaluate the reasonableness of fees associated with checking and savings accounts. According to the Federal Reserve, the statutes administered by the regulators do not specifically address the reasonableness of fees assessed. Additionally, officials of the federal regulators explained that there were no objective industry-wide standards for assessing the "reasonableness" of fees. OCC officials told us that an industry-wide standard would not work because, among other things, fees vary among banks operating in different geographic areas and competitive conditions in local markets determine fees. According to the federal regulatory officials, each depository institution is responsible for setting the fee for a particular product and service, and regulators look at rates or pricing issues only if there is a safety and soundness concern.
For example, NCUA officials told us that an examiner's finding that fee income was excessive could create safety and soundness issues, depending on the way the fees were generated and how the resulting revenues were spent. The regulators stated that, while they did not evaluate the reasonableness of fees, the disclosure requirements of Regulations DD and E were intended to provide consumers with information that allows them to compare fees across institutions. Additionally, they told us that market forces should inhibit excessive fees, since a financial institution would likely lose business if it decided to charge a fee significantly higher than its competitors'. On September 13, 2007, the Federal Financial Institutions Examination Council's Task Force on Consumer Compliance—a formal interagency body composed of representatives of the Federal Reserve, FDIC, NCUA, OCC, and OTS—approved revised interagency compliance examination procedures for Regulation DD. Officials of each of the federal regulators told us that their agencies either had begun or were in the process of implementing the updated examination procedures. Among other changes, the revised examination procedures address the Regulation DD disclosure requirements for institutions that advertise the payment of overdrafts. Specifically, the revised procedures ask examiners to determine whether the institution clearly and conspicuously discloses in its advertisements (1) the fee for the payment of each overdraft, (2) the categories of transactions for which a fee may be imposed for paying an overdraft, (3) the time period by which a consumer must repay or cover any overdraft, and (4) the circumstances under which the institution will not pay an overdraft. These items are among those identified as "best practices" by the 2005 interagency guidance. According to the guidance, clear disclosures and explanations to consumers about the operation, costs, and limitations of an overdraft protection program are fundamental to using such protection responsibly. Furthermore, the guidance states that clear disclosures and appropriate management oversight can minimize potential customer confusion and complaints, as well as foster good customer relations. The interagency guidance identifies best practices currently observed in or recommended by the industry on marketing, communications with consumers, and program features and operations. For example, the best practices include marketing the program in a way that does not encourage routine overdrafts, clearly explaining the discretionary nature of the program, and providing consumers the opportunity to opt out of the program.

Prior to the revised Regulation DD examination procedures, NCUA had adopted procedures to assess the extent to which the institutions it examines follow the interagency guidance. In December 2005, NCUA adopted "bounce protection" (that is, overdraft protection) examination procedures as part of the agency's risk-focused examination program. The examination procedures were developed to coincide with the issuance of the 2005 interagency guidance on overdraft protection programs, according to an NCUA official. In a letter to credit unions, the agency stated that "credit unions should be aware the best practices are minimum expectations for the operation of bounce protection programs." NCUA's examination procedures included a review of several key best practices.
For example, the examination procedures assess whether credit unions provide customers with the opportunity to elect overdraft protection services or, if enrollment in such a program is automatic, to opt out. In addition to other areas of review, the examination procedures include a review of whether the credit union distinguishes overdraft protection from "free" account features and whether it clearly discloses the fees of its overdraft protection program. To a more limited extent, OTS had overdraft protection examination procedures in place that addressed its guidance, but these were limited to a review of compliance-related employee training and of the materials used to market or educate customers about the institution's overdraft protection programs. Officials from the Federal Reserve, OCC, and FDIC reported that, beyond the recent revisions to the Regulation DD examination procedures, their agencies did not have specific examination procedures related to the 2005 interagency guidance because the best practices are not enforceable by law. These officials told us that, while not following a best practice from the interagency guidance did not constitute a violation of related laws or regulations, they encourage institutions to follow the best practices. An FDIC official noted that a deviation from the guidance could serve as a "red flag" prompting an examiner to look more closely for potential violations.

Officials of the federal banking regulators explained that examiners use complaint data to help focus examinations that they are planning or to alter examinations already in progress. For example, according to one regulator, if consumers file complaints because they have not received a disclosure document prior to opening an account, this could signify a violation of Regulation DD, which the examiners would review as part of the examination for this regulation. The officials noted that consumer complaints could be filed with, and were often resolved at, the financial institution involved, in which case the consumer would not be likely to contact a federal banking regulator. However, if the consumer is not satisfied with the financial institution's response, the consumer would then likely file a complaint with the federal banking regulator. Consumers may also file a complaint directly with federal regulators without contacting the financial institution about a problem. In either case, regulators are required to monitor the situation until the complaint is resolved.

According to the regulators' complaint data, most of the complaints received from 2002 to 2006 involved credit cards, although a significant number of complaints were related to checking accounts and a somewhat smaller number involved savings accounts (fig. 5). In analyzing complaints specifically about checking and savings accounts from 2002 through 2006, we found that, on average, about 10 percent were related to fees and 3 percent were related to disclosures. (For information on how the Federal Reserve, FDIC, OCC, and OTS resolved complaints, see app. IV.) Collectively, fee and disclosure complaints represented less than 5 percent of all complaints received during this period. Officials of the banking regulators told us that the overwhelming bulk of the complaints they received on checking and savings accounts concerned a variety of other issues, including problems opening or closing an account, false advertising, and discrimination.
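The proportions cited above were derived by categorizing complaint records from the regulators' databases. A minimal sketch (in Python) of that kind of tabulation follows; the record layout and field names are illustrative, not the regulators' actual schemas.

from collections import Counter

complaints = [
    {"product": "checking", "category": "fees"},
    {"product": "checking", "category": "disclosures"},
    {"product": "savings", "category": "other"},
    # ... one record per complaint received from 2002 through 2006
]

# Restrict to checking and savings account complaints, then compute category shares.
deposit_related = [c for c in complaints if c["product"] in ("checking", "savings")]
counts = Counter(c["category"] for c in deposit_related)
for category in ("fees", "disclosures"):
    share = 100 * counts[category] / len(deposit_related)
    print(f"{category}: {share:.0f} percent of checking/savings complaints")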
Among the regulators, OCC included in its complaint data the specific part of the regulation that was the subject of the complaint. Of the consumer complaints about fees that OCC received from 2002 through 2006, 39 percent were for "unfair" fees (concerning the conditions under which fees were applied), 2 percent were for new fees, 6 percent were for "high" fees (the amount of the fees), and 53 percent concerned fees in general. The majority of disclosure-related complaints that OCC received during this period were for the Regulation DD provision that, in part, requires depository institutions to provide account disclosures to a consumer before an account is opened or a service is provided, whichever is earlier, or upon request. OCC's analysis of these complaints serves to identify potential problems—at a particular bank or in a particular segment of the industry—that may warrant further investigation by examination teams, supervisory guidance to address emerging problems, or enforcement action.

The federal banking regulators' examination data for the most recent 5 calendar years (2002 through 2006) showed a total of 1,674 instances in which the regulators cited depository institutions for noncompliance with the fee-related disclosure requirements of Regulation DD (1,206 cases) or Regulation E (468 cases). On average, this is about 335 instances annually among the nearly 17,000 depository institutions that these regulators oversee. As shown in table 2, most of the disclosure-related violations were reported by FDIC—83 percent of the Regulation DD disclosure-related violations (998 of 1,206) and 74 percent of the Regulation E disclosure-related violations (348 of 468). According to FDIC officials, one reason for the larger number of fee-related violations identified by FDIC is the large number of institutions for which it is the primary federal regulator (5,220 depository institutions as of December 31, 2006). Also, differences among the regulators may arise because they do not count violations in exactly the same way. According to our analysis of the regulators' data, the most frequent violation associated with the initial disclosure requirements of Regulation DD was noncompliance with the requirement that disclosure documents be written in a clear and conspicuous manner, be in a form that customers can keep, and reflect the terms of the legal obligation of the account agreement between the consumer and the depository institution (1,053 cases). Examiners also reported violations of two other disclosure provisions of Regulation DD. First, they found violations of the requirement that depository institutions provide account disclosure documents to a consumer before an account is opened or a service is provided, whichever is earlier, or upon request (124 cases). Second, they reported violations of the requirement that disclosure documents state the amount of any fee that may be imposed in connection with the account, or an explanation of how the fee will be determined, and the conditions under which it may be imposed (29 cases). The most frequent violation associated with the initial disclosure requirements of Regulation E was of the requirement that financial institutions make disclosure documents available at the time a consumer contracts for an EFT or before the first EFT is made involving the consumer's account (321 cases).
Other disclosure provisions of Regulation E for which examiners cited violations included the requirements that disclosure statements be in writing, clear and readily understandable, and in a form that customers can keep (5 cases) and that they list any fees imposed by the financial institution for EFTs or for the right to make transfers (142 cases). According to officials of the federal banking regulators, examiners are typically successful in getting financial institutions to take corrective action on violations either during the course of the examination or shortly thereafter, obviating the need for formal enforcement action. FDIC, NCUA, OCC, and Federal Reserve officials reported that from 2002 to 2006 they had not taken any formal enforcement actions solely related to violations of the disclosure requirements of Regulations DD and E, while OTS reported taking two such actions during the period.

Officials of all six of the state banking regulators that we contacted told us that the primary focus of their examinations is on safety and soundness issues and compliance with state laws and regulations. Officials of four of the six state banking regulators told us that their examiners also assess compliance with Regulation DD, and three of these four indicated that they assess compliance with Regulation E as well. Representatives of these four state banking regulators also told us that if they identify a violation and no federal regulator is present, they cite the institution and forward this information to the appropriate federal banking regulator. The other two state banking regulators said that they review compliance with federal regulations, including Regulations DD and E, only if the federal banking regulators identified noncompliance during the prior examination. Officials in four states said that their state laws and regulations contained additional fee and disclosure requirements beyond those in Regulations DD and E. For example, according to Massachusetts state banking officials, Massachusetts bank examiners review state-chartered institutions for compliance with a state requirement that caps the fees on returns of deposited items. In another example, an Illinois law restricts institutions from charging an ATM fee on debit transactions made with an electronic benefits card (a card that beneficiaries use to access federal or state benefits, such as food stamp payments), according to Illinois state banking officials. Additionally, these state officials told us that Illinois law requires all state-chartered institutions to disclose their fee schedules for consumer deposit accounts annually. According to an official at the New York state banking department, the state has a number of statutes and regulations concerning bank fees and their disclosure to consumers, and state examiners review institutions' compliance with these requirements. The laws and regulations cover, among other things, permissible fees, required disclosure documents, and maximum insufficient funds fees, according to the New York state officials. Two of the states reported that, in conducting examinations jointly with the federal regulators, they had found violations of the Regulation DD and E disclosure provisions from 2002 to 2006 (one state reported 1 violation of Regulation DD, and one state reported 16 violations of Regulation DD and 10 violations of Regulation E).
Four of the states did not report any violations (in one case, the state agency reported that it did not collect data on violations). Three states also reported that they had not taken any formal enforcement actions against institutions for violations of the Regulation DD or E disclosure provisions; two states reported that they did not collect data on enforcement actions for violations of these regulations; and one state did not report any data to us on enforcement actions. Regarding consumer complaints, officials in two states said that they did not maintain complaint data concerning fees and disclosures associated with checking and savings accounts, and the other four states reported relatively few complaints associated with fees and disclosures. For example, Massachusetts reported a total of 89 complaints related to fees and disclosures during the period, compared with 4,022 total complaints.

The results of our requests for information on fees or account terms and conditions at depository institutions we visited, as well as our visits to institutions' Web sites, suggest that consumers may find it difficult to obtain such information upon request prior to opening a checking or savings account. A number of factors could explain the difficulties we encountered in obtaining comprehensive information on fees and account terms and conditions, including branch staff potentially not being knowledgeable about federal disclosure requirements or about their institution's available disclosure documents. Further, federal banking regulators' examination processes do not assess whether potential customers can easily obtain the information that institutions are required to disclose. Potential customers who are unable to obtain such information upon request prior to opening an account will not be in a position to make meaningful comparisons among institutions, including the amounts of fees they may face or the conditions under which fees would be charged. As we have seen, TISA requires, among other things, that depository institutions provide consumers with clear and uniform disclosures of the fees that can be assessed against all deposit accounts, including checking and savings accounts, so that consumers may make meaningful comparisons among institutions. Depository institutions must provide these disclosures to consumers before they open accounts or receive a service from the institution, or upon a consumer's request. Regulation DD and the accompanying staff commentary specify the types of information that should be contained in these disclosures, including the minimum balance required to open an account; monthly maintenance fees and the balance required to avoid them; fees charged when a consumer opens or closes an account; fees related to deposits or withdrawals, such as charges for using the institution's ATMs; and fees for special services, for example, insufficient funds fees, charges for overdrafts, and stop payment order fees on checks that have been written but not cashed. Regulation DD also requires depository institutions to disclose generally the conditions under which a fee may be imposed—that is, account terms and conditions. For example, institutions must specify the categories of transactions for which an overdraft fee may be imposed but do not have to provide an exhaustive list of such transactions.
While depository institutions are required to provide consumers with clear and uniform disclosures of fees to enable meaningful comparisons among institutions, consumers may consider other factors when shopping among institutions. For example, federal banking regulators and one consumer group told us that convenience factors, such as the locations of branches or ATMs, are typically the factors, besides costs, that consumers consider most when choosing where to open a checking or savings account. Our visits to branches of depository institutions nationwide suggested that some consumers may be unable to obtain, upon request, meaningful information with which to compare an institution's fees and how they are assessed before opening a checking or savings account. We also found that the institutions' Web sites generally did not provide comprehensive information on fees or account terms and conditions. Further, the documents that we did obtain during our visits did not always describe some key features of the institutions' internal policies and procedures that could affect the incidence or amount of overdraft fees assessed by the institution. To assess the ease or difficulty of obtaining a comprehensive list of fees and the account terms and conditions associated with checking and savings accounts, GAO staff from 12 cities across the United States visited 185 branches of banks, thrifts, and credit unions. Collectively, these branches represented 154 different depository institutions. Posing as potential customers, we specifically requested a comprehensive list of fees and the terms and conditions for checking and savings accounts that would allow us to compare such information across depository institutions. The results are summarized here.

Comprehensive list of fees. We were unable to obtain a comprehensive list of fees for checking and savings accounts from 40 (22 percent) of the branches (representing 36 institutions). Instead, we obtained brochures describing only the features of different types of checking and savings accounts. Some of these brochures contained information on monthly maintenance fees and the minimum balance needed to avoid them. But these brochures did not contain information on other fees, such as overdraft or insufficient funds fees. While our success in obtaining a comprehensive list of fees varied slightly among institutions of different sizes, we noted greater variation among banks, credit unions, and thrifts. For example, we were unable to obtain a comprehensive list of fees at 18 percent of the 103 bank branches and 20 percent of the 46 credit union branches we visited (representing 14 banks and 9 credit unions, respectively), while among the 36 thrift branches visited (representing 13 thrift institutions) the figure was 36 percent.

Account terms and conditions. We were unable to obtain the terms and conditions associated with checking and savings accounts from 61 of the 185 branches (representing 54 depository institutions) that we visited (33 percent). Instead, as described earlier, we were provided with brochures on the different types of checking and savings accounts offered by the institution. We again observed little difference in our ability to obtain account terms and conditions from institutions of different sizes but found differences by type of institution. For example, we were unable to obtain this information at 32 percent of the branches of small or midsized institutions (34 of 108), compared with 35 percent of the branches of large institutions (27 of 77).
With respect to the type of depository institution, we were unable to obtain these documents at 30 percent of the bank branches (31 of 103 branches, representing 25 banks), 35 percent of the credit union branches (16 of 46 branches, representing 16 credit unions), and 39 percent of the thrift branches (14 of 36 branches, representing 13 thrift institutions). For both the comprehensive list of fees and descriptions of account terms and conditions, we observed some differences among branches of a single depository institution. For example, we visited multiple branches of 23 depository institutions (that is, more than one branch of each of the 23). For four of these institutions, we were able to obtain all of the documents we requested from all of the branches. For the other 19 institutions, we encountered inconsistencies among the different branches in our ability to obtain the full set of information we requested. The results of our direct observations are generally consistent with those reported by the U.S. Public Interest Research Group (PIRG). In 2001, PIRG had its staff pose as consumers and visit banks to request fee brochures and reported that, in many cases, its staff members were unable to obtain this information despite repeated requests. Further, our results seem to be in accord with the violations data provided by the regulators; as noted previously, the most frequent violation of the fee-related disclosure provisions of Regulation DD cited by the regulators between 2002 and 2006 was noncompliance with the requirement that disclosure documents be written in a clear and conspicuous manner and in a form that customers can keep.

While depository institutions are not required to post the comprehensive list of fees and account terms and conditions on their Web sites if these sites are merely advertising and do not allow consumers to open an account online, we visited these Web sites as part of our effort to simulate a consumer trying to obtain information to compare checking and savings accounts across institutions. In visiting the Web sites of all the institutions whose branches we visited in person, we were unable to obtain information on fees and account terms and conditions at more than half of them. For example, we were unable to obtain a comprehensive list of fees from 103 of the 202 Web sites (51 percent). In addition, we were unable to obtain the terms and conditions from 134 of the 202 (66 percent). Figure 6 compares the results of our visits to branches and Web sites of depository institutions. Some of the depository institutions' Web sites nevertheless contained information on certain fees associated with checking and savings accounts. For example, most of the Web sites had information on monthly maintenance fees and ATM fees associated with checking accounts, while smaller percentages had information on overdraft and insufficient funds fees. Specifically, 87 percent provided information on monthly maintenance fees, 62 percent had information on ATM withdrawal fees, 41 percent contained information on overdraft fees, and 37 percent provided information on insufficient funds fees. Among branches at which we were unable to obtain a comprehensive list of fees, branch staff offered explanations suggesting that they may not be knowledgeable about federal disclosure requirements.
As previously noted, depository institutions are required to provide consumers, upon request, with clear and uniform disclosures of the fees that can be assessed against checking and savings accounts so that consumers may make meaningful comparisons among institutions. However, during our visits to branches of depository institutions, representatives at 14 branches told us that we had all the information on fees we needed to comparison shop, even though we determined that the documents they provided did not include a comprehensive list of the fees that consumers opening accounts there might have to pay; representatives at seven branches told us that no comprehensive fee schedules were available; and representatives at four branches told us that we had to provide personal information or open an account in order to obtain a comprehensive list of fees. In addition, we observed differences in our ability to obtain the comprehensive list of fees and account terms and conditions among branches of 19 of the 23 depository institutions we visited that had multiple branches. This variation among branches of the same institution suggests that staff knowledge of the institution's available disclosure documents may have varied. Further, the examination procedures that federal banking regulators use to assess compliance with Regulation DD do not require examiners to verify whether new or potential customers are actually able to obtain the required disclosure documents before opening an account. (Rather, the examination procedures call for the examiner to review written policies and procedures and disclosure documents to ensure that they contain the information required under the regulation.) As a result, examination results would not provide officials of depository institutions with information showing whether potential customers were experiencing difficulty obtaining information at particular branches. Because the results of our visits cannot be generalized to other institutions, and because the federal banking regulators do not assess the extent to which consumers are actually able to obtain disclosure documents, neither we nor the regulators know how widespread this problem may be, nor—to the extent that it does exist among institutions—the reasons for it. However, regardless of the cause, if consumers are unable to obtain key information upon request prior to opening an account, they will be unable to make meaningful distinctions regarding the charges and terms of checking and savings accounts.

The amounts of some fees associated with checking and savings accounts have grown over the past few years, while others have varied or declined. During the same period, the portion of depository institutions' income derived from noninterest sources, including fees, has varied somewhat but has risen overall. Changes both in consumer behavior, such as the increased use of electronic forms of payment, and in the terms and conditions of accounts offered by depository institutions may be influencing these trends in fees, but available data do not permit us to determine their exact effects. Similarly, we could find little information on the characteristics of the consumers who are most likely to incur fees. However, the general upward trend in fees puts a premium on the effective disclosure of account terms and conditions, including the amounts of individual fees and the conditions under which they will be assessed, to consumers who are shopping for savings and deposit accounts.
While consumers may consider convenience or other factors, as well as costs, when choosing a depository institution, Regulation DD, as well as guidance issued by the federal banking regulators, is intended to ensure that consumers receive the information needed to make meaningful comparisons among institutions regarding the savings and deposit accounts they offer. While the federal regulators take consumer complaints into account when determining the scope of their examinations of specific institutions, their examinations of compliance with Regulations DD and E consist of reviewing institutions' written policies, procedures, and disclosure documents. On this basis, the regulators have cited a number of institutions for violating the disclosure requirements. Further, the regulators are in the process of implementing revised examination procedures for Regulation DD compliance that will include assessing the extent to which depository institutions follow requirements governing the advertisement of overdraft protection programs. This will be particularly important given that fees associated with overdrafts were among the highest of the types of fees for which we obtained data. However, even under the revised procedures, the regulators' examinations do not determine whether consumers actually receive required disclosure documents before opening an account. While the results of our visits to 185 branches of depository institutions cannot be generalized to all institutions, they raise some concern that consumers may find it difficult to obtain, upon request, important disclosure documents prior to opening an account. We were unable to obtain detailed information about fees and account terms and conditions at over one-fifth of the branches we visited and, in many cases, we found inconsistencies among branches of the same depository institution. Because the federal banking regulators, in their compliance examinations, do not assess the extent to which consumers actually receive required disclosure documents before opening an account, they are not in a position to know how widespread this problem may be among the institutions they supervise, or the reasons for it. Incorporating into their oversight a means of assessing the extent to which consumers can actually obtain the information needed to make meaningful comparisons among institutions, and taking any needed steps to assure the continued availability of such information, would further this goal of TISA.

To help ensure that consumers can make meaningful comparisons between depository institutions, we recommend that the Chairman, Federal Deposit Insurance Corporation; Chairman, Board of Governors of the Federal Reserve System; Chairman, National Credit Union Administration; Comptroller of the Currency, Office of the Comptroller of the Currency; and Director, Office of Thrift Supervision assess the extent to which consumers receive specific disclosure documents on fees and account terms and conditions associated with demand and deposit accounts prior to opening an account, and incorporate steps as needed into their oversight of institutions' compliance with TISA to assure that disclosures continue to be made available.

We requested and received written comments on a draft of this report from FDIC, the Federal Reserve, NCUA, OCC, and OTS; they are presented in appendixes V through IX. We also received technical comments from FDIC and the Federal Reserve, which we have incorporated in this report as appropriate.
In their written responses, all five banking regulators indicated agreement with our report and stated that they will be taking action in response to our recommendation. For example, OCC stated that it would incorporate steps, as needed, into its oversight of institutions' compliance with TISA to assure that disclosures continue to be made available. The Federal Reserve and NCUA specifically mentioned the need to revise, improve, or strengthen the current interagency Regulation DD examination procedures. All five agencies indicated that they plan to address this issue on an interagency basis. In addition, FDIC stated that it would provide further instructions to state nonmember banks about their ongoing responsibility to provide accurate disclosures to consumers upon request and would also provide further instructions to its examiners on the importance of this requirement; NCUA stated that it would send a letter to credit unions reiterating the disclosure requirements for fees and account terms; and the Federal Reserve stated that it would expand its industry outreach activities to facilitate compliance and promote awareness of the Regulation DD disclosure requirements.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Ranking Member, Subcommittee on Financial Institutions and Consumer Credit, Committee on Financial Services, House of Representatives, and other interested congressional committees and the heads of the Federal Reserve, FDIC, NCUA, OCC, and OTS. We also will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X.

Our report objectives were to determine (1) the trends in the types and amounts of fees associated with checking and deposit accounts since 2000; (2) how federal and selected state banking regulators address checking and deposit account fees in their oversight of depository institutions; and (3) the extent to which consumers are able to obtain account terms and conditions and disclosures of fees, including information about specific transactions and bank practices that determine when such fees are assessed, upon request prior to opening an account. To provide information on the average amounts of various checking and savings account fees, we purchased data from two market research firms that specialize in the financial services industry: Moebs $ervices and Informa Research Services. Moebs $ervices provided us with an electronic file that contained data from 2000 to 2007 on the following fees: annual automated teller machine (ATM) fees, overdraft transfer fees from a line of credit, overdraft transfer fees from a deposit account, return deposited item fees, stop payment order fees, and debit card annual fees. Moebs $ervices collected its data through telephone surveys of financial service personnel at each sampled institution. In the surveys, callers used a "mystery shopping" approach, posing as potential customers and requesting rates and fees.
The surveys were completed in June of each of the years we requested (the 2006 survey was conducted in December), and we obtained data from the numbers of institutions shown in table 3. The statistical design of the survey was developed for Moebs $ervices by Professor George Easton of Emory University. The design consisted of a stratified random sample by (1) institution type (banks and thrifts combined, and credit unions), (2) institution size (as shown in table 4), and (3) regions of the country defined by metropolitan statistical area. We took the data we obtained from Moebs $ervices and computed average fees for institutions overall, as well as for institutions by type, size, and region.
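A minimal sketch (in Python) of this kind of computation follows, grouping fee observations by stratum; the record layout and values are illustrative rather than the actual survey file, and a design-based estimate would also apply the survey's stratum weights, which are omitted here for simplicity.

from collections import defaultdict
from statistics import mean

records = [
    {"type": "bank/thrift", "size": "large", "region": "South", "overdraft_fee": 28.00},
    {"type": "credit union", "size": "small", "region": "West", "overdraft_fee": 22.00},
    # ... one record per surveyed institution
]

# Overall average fee across all sampled institutions.
print(round(mean(r["overdraft_fee"] for r in records), 2))

# Average fee within each (type, size) stratum; region could be added the same way.
by_stratum = defaultdict(list)
for r in records:
    by_stratum[(r["type"], r["size"])].append(r["overdraft_fee"])
for stratum, fees in sorted(by_stratum.items()):
    print(stratum, round(mean(fees), 2))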
We interviewed Moebs $ervices representatives to understand their methodology for collecting the data and for ensuring its integrity. In addition, we conducted reasonableness checks on the data we received and identified any missing, erroneous, or outlying data. We also worked with Moebs $ervices representatives to ensure that our analysis of their data was correct. Finally, for the years 2000 through 2002, we compared the average fee amounts we calculated with the averages the Board of Governors of the Federal Reserve System (Federal Reserve) had calculated using Moebs $ervices data for its "Annual Report to the Congress on Retail Fees and Services of Depository Institutions." We found our averages to be comparable to those derived by the Federal Reserve and determined that the Moebs $ervices data were reliable for the purposes of this report.

Informa Research Services also provided us with an electronic file, which included summary-level fee data from 2000 to 2006. The data included information on the same fees that Moebs $ervices had provided but also included the following fees: monthly fees for checking and savings accounts; tiered insufficient funds and overdraft fees; check enclosure and imaging fees; foreign ATM balance inquiry fees; and foreign ATM denied transaction fees. In addition to fee data, Informa Research Services provided us with data on the minimum balances required to open an account, the monthly balances needed to waive fees, and the maximum number of overdraft or insufficient funds fees that an institution would charge per day. Informa Research Services collected its data by gathering the proprietary fee statements of the financial institutions, as well as by making anonymous in-branch, telephone, and Web site inquiries about a variety of bank fees. Informa Research Services also receives information directly from its contacts at the financial institutions. The data are not statistically representative of the entire population of depository institutions in the country because the company collects fee data for particular institutions in specific geographical markets so that these institutions can compare their fees with their competitors'. That is, surveyed institutions are self-selected into the sample or are selected at the request of subscribers. To the extent that institutions selected in this manner differ from those that are not, the results of the survey would not accurately reflect the industry as a whole. Informa Research Services collects data on over 1,500 institutions, including a mix of banks, thrifts, credit unions, and Internet-only banks. The institutions from which it collects data tend to be large institutions that hold a large percentage of the deposits in a particular market. Additionally, the company has access to individuals and information from the 100 largest commercial banks. Table 5 shows the mix of institutions for which Informa Research Services collected fee data from 2000 to 2006. The summary-level data Informa Research Services provided us for each data element included the average amount, the standard deviation, the minimum and maximum values, and the number of institutions for which data were available to calculate the averages. Informa Research Services also provided these summary-level data by the same categories of institution type and size as the Moebs $ervices data. In addition, Informa Research Services provided us with data for nine specific geographic areas: California, Eastern United States, Florida, Michigan, Midwestern United States, New York, Southern United States, Texas, and Western United States. We interviewed Informa Research Services representatives to gain an understanding of their methodology for collecting the data and the processes they had in place to ensure the integrity of the data. We also conducted reasonableness checks on the data, identified any missing, erroneous, or outlying data, and worked with Informa Research Services representatives to correct any mistakes we found. As we did with the Moebs $ervices data, we compared the average fee amounts Informa Research Services had calculated for selected fees for 2000, 2001, and 2002 with those in the Federal Reserve's "Annual Report to the Congress on Retail Fees and Services of Depository Institutions." We found the averages to be comparable to those derived by the Federal Reserve and determined that the Informa Research Services data were sufficiently reliable for this report.

To evaluate bank fee trends, for both the Moebs $ervices and Informa Research Services data, we adjusted the numbers for inflation to remove the effect of changes in prices. The inflation-adjusted estimates used a base year of 2006 and Consumer Price Index calendar year values as the deflator.
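This adjustment is a standard deflator calculation, illustrated by the minimal sketch below (in Python); the CPI levels shown are placeholders for illustration rather than official Bureau of Labor Statistics values.

CPI = {2000: 172.2, 2006: 201.6}  # hypothetical calendar-year CPI levels
BASE_YEAR = 2006

def to_2006_dollars(nominal_fee: float, year: int) -> float:
    """Restate a fee observed in `year` in constant 2006 dollars."""
    return nominal_fee * CPI[BASE_YEAR] / CPI[year]

# e.g., a $25.00 fee observed in 2000, expressed in 2006 dollars:
print(round(to_2006_dollars(25.00, 2000), 2))  # about 29.27 with these placeholder values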
To determine the extent to which bank fees are contributing to depository institutions' revenue, we obtained data from the quarterly financial information (call reports) filed by depository institutions and maintained by the Federal Deposit Insurance Corporation (FDIC). From these data, we analyzed interest income, noninterest income, and service charges on deposit accounts for commercial banks and thrifts from 2000 to 2006. We analyzed the data for all institutions, as well as by institution type (banks versus thrifts) and institution size (assets greater than $1 billion, assets between $100 million and $1 billion, and assets less than $100 million). Similarly, for credit unions, we reviewed the National Credit Union Administration's (NCUA) "Financial Performance Reports," which provided quarterly data on interest income, noninterest income, and fee income for all federally insured credit unions from 2000 to 2006. Based on past work, we have found the quarterly financial data maintained by FDIC and NCUA to be sufficiently reliable for the purposes of our reports.

To determine the effect, if any, of changing consumer payment preferences and bank processing practices on the types and frequency of account fees incurred by consumers, we reviewed the 2004 and 2007 Federal Reserve payment studies on noncash payment trends in the United States. We also reviewed data on payment trends in debit and credit card transactions from the EFT Data Book. In addition, we spoke with multiple industry experts, including bank representatives and consumer group representatives, such as the Consumer Federation of America, the Center for Responsible Lending, and the U.S. Public Interest Research Group, to understand what practices banks employ to process transactions on deposit accounts, how these practices have changed over the past few years, and the potential impact these practices have had on consumers incurring fees, such as overdraft fees. Furthermore, we reviewed studies that analyzed electronic payment preferences and identified one study that used transaction-level data to determine how payment preferences influence overdraft fees.

To determine what data are available on the characteristics of consumers who pay bank fees, we reviewed two studies on the topic: one by an academic researcher and another by a consumer group. The academic study used transaction-level account data and regression models to estimate the probability of overdrawing an account. The data included customer information and all transactions, with associated balances, from May through August 2003 from one small Midwestern bank. The second study used data collected through telephone surveys of 3,310 adults aged 18 or older, conducted between October 2005 and January 2006. Both studies suffer from limitations that preclude making inferences about the broader population of banking customers who pay fees, but they represent the only relevant research at this point and are suggestive of the characteristics of these customers. We also reviewed documentation on, and interviewed officials at, FDIC about the agency's ongoing study of overdraft protection programs, including the phase of the study in which it will review transaction-level data. Finally, we interviewed two academic researchers and representatives of eight consumer groups; five depository institutions; two software vendors; and four industry trade associations, including the American Bankers Association, Independent Community Bankers of America, America's Community Bankers, and the Credit Union National Association, to determine what research had been done on the topic.

To assess the extent to which federal and selected state banking regulators review fees associated with checking and deposit accounts as part of their oversight of depository institutions, we obtained and reviewed examination manuals and guidance used by the five federal banking regulators—the Federal Reserve, FDIC, NCUA, the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS)—and conducted interviews with agency officials. We also obtained and reviewed a sample of 25 compliance examination reports on examinations completed during 2006 to identify how the federal regulators carried out examinations for compliance with Regulations DD and E. We selected five examination reports from each regulator based on institutions' asset size and geographic dispersion, in an attempt to capture a variety of examinations. The asset sizes of the institutions ranged from $2 million to $1.2 trillion. In addition, we obtained information on the regulatory efforts of six states. We selected the states based on recommendations from the Conference of State Bank Supervisors, the New York State Banking Department, and the Massachusetts Division of Banks, and to achieve geographic dispersion. The selected states were California, Connecticut, Illinois, Maine, Massachusetts, and New York.
We reviewed the compliance examination manuals and guidance used by the six state regulators and asked specific questions of the appropriate banking officials in each state. To determine the number of complaints that the regulators received on checking and savings accounts, in addition to complaints about fees and disclosures, we requested complaint data, including data on resolutions, for calendar years 2002 through 2006. For the complaint data, we obtained information on the banking products or services involved, the complaint category, and, in some cases, the citation of the regulation. While our estimates of the proportions of complaints related to fees depend on how the banking regulators coded the subjects of the complaints they received and on how we combined those related to fees, we judge any possible variation to be slight. For the complaint resolution data, we obtained information about the resolution (outcomes) of complaints and the banking products or services involved. The data came from five different databases: (1) OCC's REMEDY database, (2) the Federal Reserve's Complaint Analysis Evaluation System and Reports (CAESAR), (3) FDIC's Specialized Tracking and Reporting System (STARS), (4) OTS's Consumer Complaint System (CCS), and (5) NCUA's regionally based complaint system. We obtained data from OCC, the Federal Reserve, FDIC, OTS, and NCUA covering calendar years 2002 through 2006. For purposes of this report, we used data from the regulators' consumer complaint databases to describe the number of complaints that each regulator received related to fees and disclosures for checking and savings accounts, as well as complaints received in four major product categories—checking accounts, savings accounts, mortgage loans, and credit cards. With respect to the data on complaint resolutions, we used the regulators' data to describe the number of cases each regulator handled, the products consumers complained about, and how the regulators resolved the complaints. To assess the reliability of the data from the five databases, we reviewed relevant documentation and interviewed agency officials. We also had the agencies produce the queries or data extracts they used to generate the data we requested, and we reviewed the related queries, data extracts, and output for logical consistency. We determined these data to be sufficiently reliable for use in our report.

Finally, we obtained data from each of the federal regulators on violations they cited against institutions for noncompliance with the provisions of Regulations DD and E. Specifically, we asked for data on the total number of violations that each regulator cited for all examined provisions of Regulations DD and E during 2002 to 2006, as well as for data on violations of selected disclosure provisions. The Regulation DD sections on which we requested and obtained data were §§ 230.3, 230.4, 230.8, and 230.11. The Regulation E sections on which we requested and obtained data were §§ 205.4 and 205.7. We compiled the data and summarized the total number of violations found by all of the federal regulators during 2002 to 2006. We also obtained data from 2002 through 2006 on the total number of enforcement actions that each regulator took against institutions for violations of all provisions of Regulations DD and E and of the selected disclosure provisions. To assess the reliability of the data from the five databases, we again reviewed relevant documentation and interviewed agency officials.
We also had the agencies produce the queries or data extracts they used to generate the data we requested, and we reviewed the related queries, data extracts, and the output for logical consistency. We determined these data to be sufficiently reliable for use in our report. Finally, we also requested information from each state regulator on consumer complaint, violation, and enforcement data pertaining to bank fees and disclosures, state-specific bank examination processes, and any additional state laws pertaining to bank fees and disclosures. We did not receive all our requested data because some states’ systems did not capture complaint, violation, or enforcement data related to bank fees and disclosures. For those states where information was available, the numbers of complaints and violations were minimal and not consistently reported among states. We therefore attributed the limited information on complaints, violations, and enforcement actions to state officials and did not assess the reliability of these data. To assess the extent to which consumers, upon request prior to opening a checking or savings account, are provided disclosures of fees and the conditions under which these fees are assessed, GAO employees visited 103 bank branches, 36 thrift branches, and 46 credit union branches of 154 depository institutions throughout the nation. We selected these institutions to ensure a mix of institution type (bank, thrift, and credit union) and size; however, the results cannot be generalized to all institutions. We reviewed the federal Truth-in-Savings Act (TISA) and Regulation DD, which implements TISA, to determine what disclosure documents depository institutions were required to provide to new and potential customers. Using a standardized script, GAO employees posed as consumers and specifically requested a comprehensive fee schedule and terms and conditions associated with checking and savings accounts. The branches were located in the following cities: Atlanta, Georgia; Boston, Massachusetts; Chicago, Illinois; Dallas, Texas; Dayton, Ohio; Denver, Colorado; Huntsville, Alabama; Los Angeles, California; Norfolk, Virginia; San Francisco, California; Seattle, Washington; and Washington, D.C. The GAO employees visiting these branches also reviewed the institutions’ Web sites to determine whether these sites had comprehensive fee schedules and terms and conditions associated with checking and savings accounts. After both visiting branches and reviewing Web sites, GAO employees used standardized forms and recorded whether they were able to obtain the specific documents (examples were provided) and whether they were able to locate specific information on each institution’s Web site. To obtain information on issues related to providing consumers with real-time account information during debit card transactions at point-of-sale terminals and automated teller machines (see app. II), we reviewed available literature from the Federal Reserve, including a 2004 report on the issues involved in disclosing point-of-sale debit card fees to consumers during a transaction. We also reviewed other sources that described the payment processing system related to debit card transactions at merchants and ATMs. In addition, we conducted structured interviews with officials from five banks, two card associations, three third-party processors, four bank industry associations, and one merchant trade organization, and summarized our findings.
We conducted this performance audit in Atlanta, Georgia; Boston, Massachusetts; Chicago, Illinois; Dallas, Texas; Dayton, Ohio; Denver, Colorado; Huntsville, Alabama; Los Angeles, California; Norfolk, Virginia; San Francisco, California; Seattle, Washington; and Washington, D.C., from January 2007 to January 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. According to debit card industry representatives we contacted, providing consumers with their “real-time” account balance information during a debit card transaction is technically feasible but presents a number of issues that would need resolution. These issues include the costs associated with upgrading merchant terminals and software to allow for consumers’ account balances to be displayed at the terminals; the potential difficulty of determining a consumer’s real-time account balance, given the different types of transactions that occur throughout the day; concerns over privacy and security raised by account balances potentially being visible to others besides account holders; and the increased time it would take to complete a transaction at merchant locations. A consumer using a debit card to make a purchase at a merchant’s checkout counter (referred to as a point-of-sale debit transaction) has two options for completing the transaction: (1) entering a personal identification number (PIN) or (2) signing for the transaction (similar to a credit card transaction). The consumer is typically prompted at the point-of-sale terminal to choose either “debit” (in which case the transaction is referred to as “PIN-based”) or “credit” (in which case the transaction is referred to as “signature-based”). Regardless of which option the consumer chooses, the transaction is a debit card transaction. PIN- and signature-based debit card transactions differ not only with respect to the input required from the consumer but also with respect to the debit networks over which the transactions are carried and the number and timing of steps involved in carrying out the transactions. Similarly, transactions initiated at ATMs can differ in how they are processed. Customers can make withdrawals and deposits not only at ATMs owned by their card-issuing institutions but also at ATMs owned by other depository institutions or entities. An ATM card is typically a dual ATM/debit card that can be used for both ATM and debit card transactions, both PIN-based and signature-based, at participating retailers. PIN-based debit card transactions are referred to as “single message” because the authorization—the approval to complete the transaction—and settlement—the process of transmitting and reconciling payment orders—of the transaction take place using a single electronic message. As shown in figure 7, PIN-based debit card transactions involve a number of steps between the merchant’s terminal and the consumer’s deposit account. Generally, at the locations of large national merchants, after the consumer has swiped the card, a message about the transaction is transmitted directly to the electronic funds transfer (EFT) network.
(For other merchants, the transaction reaches the EFT network via the merchant’s processor, also known as the merchant acquirer.) The message identifies the consumer’s institution and account, the merchant, and the dollar amount of the purchase. The EFT network routes the transaction to the card issuer (or to the card issuer’s processor, which then passes it to the card issuer). The card issuer—usually the consumer’s depository institution—receives the message and uses the identifying information to verify that the account is valid, that the card has not been reported lost or stolen, and that the account either has sufficient funds available or is covered by an overdraft protection program (that is, the issuer covers the transaction even if there are insufficient funds in the account, which is also known as bounce protection). If these conditions are met, the issuer authorizes the debit transaction. Specifically, the issuer then debits the consumer’s account and sends an authorization message to the EFT network, which sends it to the merchant’s acquirer, which forwards the authorization to the merchant’s terminal. The entire sequence typically occurs in a matter of seconds. Signature-based debit card transactions involve two electronic messages: one to authorize the transaction and another to settle the transaction between the merchant and the card issuer, at which time the consumer’s account is debited. To conduct a signature-based debit card transaction, the customer typically has a VISA- or MasterCard-branded debit card linked to a deposit account. As shown in figure 8, after the card is swiped, a message about the transaction travels directly (or indirectly, through the merchant’s acquirer) to the VISA or MasterCard network, from which the transaction proceeds directly (or indirectly, through the card-issuing institution’s processor) to the card-issuing institution. As in a PIN-based debit card transaction, if the issuer verifies the relevant information, it authorizes the transaction and routes it back through the VISA or MasterCard network to the merchant’s acquirer with the authorization. The merchant acquirer then forwards the authorization to the merchant’s terminal, and the consumer signs the receipt. The settlement of the transaction between the merchant and card issuer (and the actual debiting of the consumer’s account) occurs after a second message is sent from the merchant to the issuer, usually at the end of the day. The steps involved in ATM transactions depend upon whether a consumer is using an ATM owned by the issuer of his or her card (typically referred to as a “proprietary” ATM) or an ATM owned by a depository institution or entity other than the card-issuing institution (typically referred to in the industry as a “foreign ATM”). A foreign ATM transaction is processed in essentially the same manner as a PIN-based debit card transaction, with one exception: the ATM operator (or its processor) routes the transaction to the EFT network, which then routes it to the card issuer. The card-issuing institution authorizes the transaction via the EFT or debit card networks. In contrast, when a consumer uses a proprietary ATM, the transaction stays within the issuer’s network and does not require the use of an external EFT network (fig. 9).
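The issuer-side checks and the single- versus dual-message distinction described above can be summarized in a short sketch. The following Python fragment is illustrative only; the data structure and function names are hypothetical, and a real issuer applies many more checks than the three named in the text.

from dataclasses import dataclass

@dataclass
class Account:
    valid: bool                    # account exists and is in good standing
    card_lost_or_stolen: bool      # card reported lost or stolen
    available_balance: float
    overdraft_protection: bool     # issuer covers shortfalls ("bounce protection")

def authorize(account: Account, amount: float) -> bool:
    # The three checks named in the text; no debit occurs at this step.
    if not account.valid or account.card_lost_or_stolen:
        return False
    return account.available_balance >= amount or account.overdraft_protection

def pin_purchase(account: Account, amount: float) -> str:
    # PIN-based ("single message"): authorization and debit in one step.
    if authorize(account, amount):
        account.available_balance -= amount
        return "APPROVED"
    return "DECLINED"

def settle(account: Account, amount: float) -> None:
    # Signature-based ("dual message"): after authorization, the debit
    # occurs only when this second, settlement message arrives, usually
    # at the end of the day.
    account.available_balance -= amount

acct = Account(valid=True, card_lost_or_stolen=False,
               available_balance=50.0, overdraft_protection=True)
print(pin_purchase(acct, 80.0))   # APPROVED, via overdraft protection
print(acct.available_balance)     # -30.0: the account is now overdrawn

As the last two lines suggest, overdraft protection allows a purchase to be approved even when it exceeds the available balance, which is how a consumer can overdraw an account at the point of sale without receiving any warning.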
Card issuers that are depository institutions—such as banks—may be able to notify their customers at a proprietary ATM that a withdrawal will result in the account being overdrawn and then allow the customer to decide whether to proceed with the transaction. Officials from one of the banks that we spoke with stated that they employed this capability at their proprietary ATMs. As of March 2007, there were over 5 million point-of-sale terminals in the United States. According to industry representatives, most point-of-sale terminals are not currently equipped to display a consumer’s checking account balance and, in these cases, merchants would either need to replace the terminal entirely or upgrade the software in the terminal. Industry representatives were hesitant to estimate the costs associated with this because the number of terminals that would need to be replaced versus those that would only need a software upgrade is not currently known. The industry representatives explained that the cost of upgrading the point-of-sale terminals to display account balance information would be primarily borne by merchants. In addition to upgrading point-of-sale terminals, industry representatives identified the following other costs that would be incurred: Upgrading software used by the EFT networks and depository institutions in order to transmit balance information from the card-issuing institution to the merchant. As described above, currently a debit card transaction is authorized by verifying a consumer’s checking account balance and sending back an approval or denial message—which does not include account balance information. Increasing the communications infrastructure of the EFT networks to allow for additional message traffic, namely consumers’ acceptances or declinations of a transaction once they have viewed their account balances. These messages would constitute a second message from the point-of-sale terminal to the card-issuing institution for each transaction. Associated costs would include training the employees who work at the terminals to handle these debit card transactions, as well as the additional time needed to complete transactions, which we discuss below. With respect to providing account balance information at foreign ATMs, one industry representative explained that this would require all entities involved in ATM transactions (banks, ATM operators, ATM networks, and the Federal Reserve) to agree on a common message format to display balances, as well as a new transaction set for ATMs that would provide consumers with the option not to proceed with the transaction once they saw their balances. Two industry representatives we spoke with said that it could take a number of years for all of the entities involved in ATM transactions to agree on a standard format. Debit card industry representatives explained that the account balance that is used to authorize a debit card transaction—and which would be displayed to the consumer—may not necessarily reflect the true balance in the consumer’s checking account at the time of the transaction. One of the reasons for this is that, while a depository institution may attempt to get as close to a real-time balance as possible, it may be unable to capture all of the transactions associated with the account as they occur throughout the day.
For example, one depository institution official told us that it updates its customers’ account balances throughout each day; it refers to these updated balances as a customer’s “available balance.” This available balance is updated throughout the day to reflect debit card transactions at point-of-sale terminals and ATMs, as well as other transactions such as those that occur online. This balance, however, might not take into account checks that will be clearing that day, deposits made at a foreign ATM, or some transactions that would come in via the Automated Clearing House (ACH). An example of the latter is a transaction in which a consumer electronically transfers funds from a mutual fund to a checking account. The net result is that the consumer may be presented with a balance that does not reflect all the transactions that will be processed that day (a simple arithmetic illustration appears below). Another reason why a depository institution may be unable to provide consumers with a real-time balance is that the institution may not update balances throughout the day. Most institutions “batch process” transactions at night, then post the revised customer account balances. The following day, the institutions update the customer’s account balance for debit card authorizations and certain other transactions that occur throughout the day. However, according to a card association, some small banks only post the account balance from the batch process to the customer’s account and do not update account balances as transactions occur throughout the day. Finally, if a depository institution uses a third-party processor to authorize debit card transactions, the balance that the third-party processor uses may also not reflect all the transactions that occur throughout the day. For example, transactions involving a bank teller, such as deposits or withdrawals, do not require a third-party processor to authorize transactions; thus, the processor would not be able to update its balance to reflect these transactions. One of the major concerns raised by the debit card industry representatives we spoke with regarding providing consumers with real-time balances at point-of-sale terminals was privacy. Unlike ATM transactions, which take place between a consumer and the machine and in which consumers tend to be cognizant of the need for privacy, point-of-sale transactions occur at terminals that are generally more visible to others, according to these representatives. For example, the balance on a point-of-sale terminal could be visible to the cashier and customers in line at a merchant location. In addition, at restaurants, the waiter or other staff could view this information out of sight of the consumer. The industry representatives stated that most consumers would likely be uncomfortable having their account balance information visible to others. Another related concern raised by these representatives was one of security, in that cashiers or possibly other customers might be able to view a consumer’s account balance. Thus, the industry representatives stated that providing balances at a point-of-sale terminal could increase the risk of fraud. One industry representative told us that providing a balance at a point-of-sale terminal would be a departure from current privacy and security approaches with point-of-sale transactions.
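The arithmetic illustration promised above follows. The figures are hypothetical and the posting logic is simplified; the point is only that a balance updated for card authorizations can still omit checks and ACH items that do not post until the nightly batch.

# Hypothetical figures showing why an intraday "available balance" can
# overstate what an account really holds.
ledger_balance = 150.00                  # posted by last night's batch process
intraday_card_debits = [42.50, 18.00]    # POS and ATM authorizations seen today
unseen_items = [-125.00]                 # a check clearing today, not yet seen

# What the issuer (or its processor) can display during the day:
available_balance = ledger_balance - sum(intraday_card_debits)
print(f"Balance shown intraday:    ${available_balance:.2f}")   # $89.50

# What the account actually holds once the nightly batch posts everything:
true_balance = available_balance + sum(unseen_items)
print(f"Balance after batch posts: ${true_balance:.2f}")        # $-35.50

A consumer relying on the displayed $89.50 could therefore still overdraw the account and incur a fee once the check posts.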
Industry representatives explained that allowing consumers to accept or decline a transaction once they have viewed their balance would likely increase the time it takes to get customers through a check-out line. According to a retail merchants’ trade association that we contacted, merchants depend on moving customers quickly through check-out lines. The association stated that adding a step in the check-out process would add time, resulting in lower sales volume per unit of time for each cashier and potentially greater costs associated with adding cashiers to maintain the same volume of transactions. Industry officials also stated that there were some circumstances during a point-of-sale transaction for which providing consumers with real-time balances would not be possible or would be problematic. For example, during “stand-in” situations, such as when a card issuer’s systems are offline for maintenance, EFT networks review and authorize (or deny) transactions in accordance with instructions from the issuer. The networks would not have real-time access to account balance information when the issuer’s system is down. Another example would be merchants, such as fast food outlets, that perform quick swipes of debit cards for low-dollar transactions. At the time of the swipe, the merchant has not actually routed the transaction to the card issuer and thus has not yet accessed the consumer’s account balance. In these cases, the merchant has accepted the risk of not being paid if there are insufficient funds in the account in order to move customers through lines more quickly. Finally, one industry representative questioned how the industry would be able to provide consumers with real-time balances if consumers make debit card purchases online or over the telephone. There are other options short of providing real-time account balances at point-of-sale terminals and ATMs that might assist in warning consumers of a potential overdraft, but each of these options has challenges and limitations. For example, one option involves sending a warning with the authorization message instead of a real-time balance. The warning would indicate that the transaction could result in an overdraft. As indicated above, one of the banks we met with currently provides a similar warning on its proprietary ATMs. The consumer would then have the option to accept or deny the transaction. This option would require two messages to complete a debit card transaction rather than one message. Further, under this option, depository institutions would still be unable to base their authorization decisions on a real-time balance because of the various types of transactions that may occur in a day; a purchase could therefore trigger no warning message yet still leave the consumer facing an overdraft fee once the institution reconciles all accounts. This option would also likely slow down transactions and raise costs for merchants. However, unlike providing real-time account balance information at a point-of-sale terminal, this option would not present privacy or security concerns because the balance in the consumer’s account would not be transmitted. Another option short of providing consumers with real-time account balance information is printing a consumer’s available balance on a receipt after a transaction has been completed. This is currently possible when consumers use their card issuers’ proprietary ATMs and some foreign ATMs, according to industry representatives.
Under this option, the consumer would not receive a warning that the transaction could result in an overdraft and would not have a choice to accept or decline the transaction. Further, under this option, the consumer would not be provided his or her account balance until after the transaction was completed. However, once consumers obtained their balance, they could change their spending behavior to avoid a fee on subsequent transactions. This option would entail certain costs for upgrading terminals or software in order to print the consumer’s real-time balance on the receipt, as well as costs of upgrading software to transmit the real-time balance from the card-issuing institution to the merchant terminal. The option would not address an institution’s ability to provide an actual real-time balance and would introduce privacy and security concerns because if the receipt were inadvertently dropped, others could view the balance. However, this option would not slow down the time it takes to complete a transaction because the consumer would not be given the option of accepting or declining a transaction. Finally, industry representatives noted that consumers currently have a number of ways to check their account balances (e.g., by phone and Internet), which might help them avoid overdraft fees. According to Federal Reserve officials, this would require “near-time” processing and a system that synchronizes the balance information reported through the phone and Internet banking systems with the balance information that is transmitted by the institution to the ATM/EFT network. Three of the four large banks we spoke with stated that their customers currently have the ability to sign up for a feature in which the bank will send a message to the consumers’ e-mail accounts or cell phones—“E-alerts”—when their balances reach a designated “threshold” amount. Under this option, consumers receiving an E-alert could change their spending patterns to avoid incurring an overdraft and the associated fees. Table 6 compares the E-alert option with the other potential options for warning consumers that they may incur an overdraft fee and summarizes the issues associated with each option. Using the methodology we noted earlier, we analyzed select bank fee data obtained from two firms, Moebs $ervices and Informa Research Services. Some bank fees have increased since 2000, while a few, such as monthly fees, have decreased. As noted earlier in the report, we analyzed data in aggregate for all depository institutions and also by institution type and size. According to data we obtained, banks and thrifts charged more than credit unions for almost all select fees analyzed, and larger institutions charged higher fees than midsized and smaller institutions. We found slight variations in fees charged by region, with certain regions charging less than the national average for some select bank fees analyzed. For example, California and the Western United States consistently charged less than the national average for almost all select fees analyzed, according to the Informa Research Services data. For both the Moebs $ervices and Informa Research Services data, banks and thrifts were combined into one institution type category, with credit unions as the other institution type.
For both sets of data, the following asset size categories were used: small institutions had assets less than $100 million, midsized institutions had assets between $100 million and $1 billion, and large institutions had assets greater than $1 billion. For the Moebs $ervices data, we computed average amounts ourselves, but statistics were provided to us for the Informa Research Services data. We identified all instances in which the information presented was based on data provided by fewer than 30 institutions and did not include those instances in this report because averages based on a small number of institutions may be unreliable. The information presented for the Moebs $ervices data is statistically representative of the entire banking and credit union industry, but the Informa Research Services data are not. For additional information on the select fees analyzed and the number of institutions surveyed, see appendix I. Table 7 provides a detailed comparison of the Moebs $ervices data for all institutions for select bank fees for the 8-year period, 2000–2007. Table 8 provides a detailed comparison of the Informa Research Services data for all institutions for select bank fees for the 7-year period, 2000–2006. The data are presented for a variety of types of checking and savings accounts. In analyzing the resolution of complaints for fees and disclosures associated with checking and savings accounts, we found similar outcomes among complaints received by the Federal Reserve, FDIC, OCC, and OTS. As shown in figure 10, these federal regulators reported resolving complaints in the following order of decreasing frequency: 1. Finding that the bank was correct. This included instances in which the regulator determined that the financial institution did not err in how it administered its products and/or services to the consumer. 2. Providing the consumer with additional information without any determination of error. This included instances in which the regulator told the consumer that the dispute was better handled by a court or in which the regulator determined that rather than wrongdoing there was miscommunication between the bank and its customer. 3. Other, including instances in which the consumer did not provide information needed by the regulator or withdrew the complaint. 4. Determining that the bank was in error. This included instances in which the regulator determined that the bank erred in administering its products and/or services to the consumer (errors could include violations of regulations). 5. Complaint in litigation, in which the regulator tabled the complaint because it was involved in legal proceedings. This includes instances in which the regulator cannot intervene because the issues raised in the complaint are the subject of either past, current, or pending litigation. In addition to the individual named above, Harry Medina, Assistant Director; Lisa Bell; Emily Chalmers; Beth Faraguna; Cynthia Grant; Stuart Kaufman; John Martin; Marc Molino; José R. Peña; Carl Ramirez; Linda Rego; and Michelle Zapata made key contributions to this report. | In 2006, consumers paid over $36 billion in fees associated with checking and savings accounts, raising questions about consumers' awareness of their accounts' terms and conditions.
GAO was asked to review (1) trends in the types and amounts of checking and deposit account fees since 2000, (2) how federal banking regulators address such fees in their oversight of depository institutions, and (3) the extent to which consumers are able to obtain account terms and conditions and disclosures of fees upon request prior to opening an account. GAO analyzed fee data from private data vendors, publicly available financial data, and information from federal regulators; reviewed federal laws and regulations; and used direct observation techniques at depository institutions nationwide. Data from private vendors indicate that average fees for insufficient funds, overdrafts, returns of deposited items, and stop payment orders have risen by 10 percent or more since 2000, while others, such as monthly account maintenance fees, have declined. During this period, the portion of depository institutions' income derived from noninterest sources—including fees on savings and checking accounts—varied but increased overall from 24 percent to 27 percent. Changes in both consumer behavior, such as making more payments electronically, and practices of depository institutions are likely influencing trends in fees, but their exact effects are unknown. Federal banking regulators address fees associated with checking and savings accounts primarily by examining depository institutions' compliance with requirements, under the Truth in Savings Act (TISA) and its implementing regulations, to disclose fee information so that consumers can compare institutions. They also review customer complaints but do not assess whether fees are reasonable. The regulators received relatively fewer consumer complaints about fees and related disclosures—less than 5 percent of all complaints from 2002 to 2006—than about other bank products. During the same period, they cited 1,674 violations of fee-related disclosure regulations—about 335 annually among the 17,000 institutions they oversee. GAO's visits to 185 branches of 154 depository institutions suggest that, despite the disclosure requirements, consumers may find it difficult to obtain information about checking and savings account fees. GAO staff posing as customers were unable to obtain detailed fee information and account terms and conditions at over one-fifth of visited branches and also could not find this information on many institutions' Web sites. Federal regulators examine institutions' written policies, procedures, and documents but do not determine whether consumers actually receive disclosure documents. While consumers may consider factors besides costs when shopping for accounts, an inability to obtain information about terms, conditions, and fees hinders their ability to compare institutions. |
The United States Constitution established a union of states that provided for national and state government and gave each its own authority and sphere of power. However, it also allows these spheres to overlap and thus creates areas of concurrent power in which either level of government or both may regulate. Examples include the power to regulate commerce and the power to tax. Within these areas there may be situations in which laws conflict. To resolve these conflicts, the Constitution’s Supremacy Clause provides that federal law is “the Supreme Law of the Land,” thereby preempting state law. Preemption occurs when the Congress enacts a statute or a federal agency adopts a regulation in an area in which state legislatures have acted or have the authority to act. The Congress’s constitutional power to regulate interstate commerce has proven to be the source of many preemptive statutes. The balance between federal and state government in areas of concurrent powers has been continuously debated and has shifted as political, social, and economic conditions have changed over the years. Within the 20th century, for example: The Great Depression of the 1930s led to an expanded federal role in domestic affairs to deal with social and economic problems that states could not respond to effectively on their own. The Great Society programs of the 1960s brought further expansion of the federal role in an effort to achieve socially desirable outcomes and led to the use of state and local governments as intermediaries to implement national policies in areas that had previously been the purview of state or local governments or the private sector. The 1980s brought a shift of funds, authority, and responsibility to the states through block grants, such as the Social Services Block Grant, which allowed greater state and local autonomy and flexibility in fashioning local strategies to address federal objectives. The trend toward state flexibility continued in the 1990s, accompanied by concern about “unfunded mandates” (federal regulations that impose new duties on states—duties that require state expenditures). At the same time, the emergence of the Internet and the increasingly national and international nature of commerce created pressures for federal regulation. Questions of federal and state responsibility in areas of common regulatory concern continue to spark debate in the 21st century, as evidenced by the examination of state-regulated voting procedures following the 2000 presidential election and of federal and state homeland security responsibilities following the terrorist attacks of 2001. Such questions are also likely to arise during reauthorization debates on existing programs. To identify mechanisms for focusing state efforts toward national regulatory objectives, we reviewed the literature on federalism, intergovernmental relations, preemption, and regulatory programs in a broad range of policy areas. We then examined programs and approaches that combined federal with state regulation or implementation. Our review included both programs that involved preemption and programs that used other approaches to enlist state effort in support of federal or national regulatory objectives. Five major mechanisms emerged from our review. We classified programs in terms of these mechanisms and selected two or more programs representing each mechanism for more detailed examination and to serve as examples.
We selected examples from a broad range of regulatory agencies with the aim of including major programs as well as a variety of approaches. (The programs we selected are summarized in table 1.) For the mechanism concerning grants, we looked at grant programs but did not examine other forms of federal support. Our study focused on the mechanisms and did not review the content or strategic approaches of the regulations and standards involved or the effectiveness of the programs as implemented. To obtain descriptive material concerning each of these programs, we reviewed authorizing statutes, regulations, agency documents, and documents concerning quasi-official standard-setting bodies. We also examined reports from program studies conducted by the Congressional Research Service, GAO, inspectors general, and other sources. We did not conduct new analyses of these programs. Thus, our findings are based on available information. It should be noted that each of the mechanisms we describe represents an ideal type—that is, the elements listed for each mechanism are characteristic of that mechanism and define a “pure case” to which specific programs can be compared. In the real world, few, if any, programs will match the “pure case” completely, and a complex program may incorporate more than one mechanism. We conducted our work between June 2001 and February 2002 in accordance with generally accepted government auditing standards. Because our work drew only on already available materials, we did not seek agency comments on our findings. We identified five regulatory or standard-setting mechanisms and four patterns of implementation or enforcement that characterize areas in which the federal government and the states share regulatory objectives and responsibilities. The five mechanisms are: fixed federal standards that preempt all state regulatory action in the subject area covered; federal minimum standards that preempt less stringent state laws but permit states to establish standards more stringent than the federal; inclusion of federal regulatory provisions in grants or other forms of assistance; cooperative programs in which voluntary national standards are formulated by federal and state officials working together; and widespread state adoption of voluntary standards formulated by quasi-official entities. The first two mechanisms involve preemption. The other three represent alternative approaches. The mechanisms differ in terms of which level of government sets standards and whether application of the standards within a state is voluntary or mandatory. The mechanisms also offer different options with respect to implementation or enforcement. Fixed federal standards and minimum federal standards permit three patterns of implementation: (1) direct implementation by the federal agency, (2) implementation by the states, approved by and under some degree of oversight by the federal agency, and (3) a combination of federal agency and federally approved state implementation. Grants follow the second of these patterns. The remaining two mechanisms follow a fourth pattern, direct implementation by the state under its own authority. These three mechanisms vary in the degree of federal oversight they can accommodate. Standard-setting mechanisms and implementation options in the programs we reviewed form combinations as illustrated in table 2. We will discuss each mechanism, implementation options associated with that mechanism, and operational issues that have arisen in the programs we reviewed.
The federal government sometimes assumes sole regulatory authority over a specified subject area, either by prohibiting states from regulating or by issuing federal regulations that states must follow. Both statutes and treaties can preempt in this way. When federal statutes indicate that Congress intended the federal government to assume sole regulatory authority over a specific subject area, states cannot establish either stricter standards or standards that are less strict than the federal. A program under federal regulatory authority may involve (1) no state role, (2) a parallel state regulatory and implementation role, or (3) state implementation of the federal regulatory provisions or standards. In some instances, the federal government both regulates and assumes responsibility for enforcement or implementation—states do not perform either function. In addition to establishing uniform standards nationwide, this approach establishes a single locus of accountability and program direction. The federal agency that administers the program provides the resources and bears the costs. Regulation of employer-based pension plans, pursuant to the Employee Retirement Income Security Act of 1974 (ERISA), provides an example. Federal standards were viewed as needed in light of the importance of these plans to interstate commerce and the need to protect employees and their beneficiaries from loss of benefits due to unsound or unstable plans. ERISA established, among other things, fiduciary, reporting, and disclosure requirements that apply to private employee pension plans in the United States. ERISA supersedes all state laws that relate to ERISA pension plans. The Pension and Welfare Benefits Administration (PWBA) of the Department of Labor is responsible for administering and enforcing these ERISA provisions, and states do not have an enforcement role. In other instances, regulatory authority is divided between the federal government and the states. States retain the power to establish and implement regulations for their portion of the sector but are precluded from applying them within the federal portion. Regulation of health coverage under ERISA provides an example. The federal government regulates all employee health plans. If an employer chooses to provide coverage through an insurance policy, that policy is subject to state regulations. Approximately 60 percent of individuals participating in employer-sponsored plans are covered by state-regulated insurance policies. This division of authority can lead to differences in coverage requirements, uncertainty, and litigation. As a result, individuals in similar plans may have different rights and remedies. In recent years, federal legislation setting certain health coverage standards has established federal minimum standards that generally apply to all health plans. In some program areas, states implement or enforce federal standards that preempt state laws or regulations. The Hazardous Materials Transportation Act, as it applies to motor vehicles, illustrates this approach. In order to provide adequate nationwide protection against the risks to life and property inherent in the transportation of hazardous materials, this act authorizes the Secretary of Transportation to regulate the transportation of such hazardous materials not only in interstate and foreign commerce but also in intrastate commerce. The act and federal regulations prescribed under the act generally preempt state requirements that are not substantively the same as the federal.
However, most of the roadside commercial vehicle inspections applying hazardous material (HAZMAT) regulations are done through state programs. Under the Motor Carrier Safety Assistance Program (MCSAP), states that meet grant requirements can take responsibility for enforcing these regulations for both intrastate and interstate vehicles, and nearly all states have done so. These requirements include adopting state HAZMAT transportation regulations identical to the federal regulations for commercial vehicles and having the legal authority and resources to enforce them. In return, the states receive federal grants to cover a portion of their program costs. Enlisting the efforts of state agencies greatly expands the resources available for implementation or enforcement. In each of the programs we examined, activities to support federal regulations built upon activities that states already performed. However, this strategy also raises some major operational issues or questions, for example: Who will carry out enforcement activities in states that are unwilling or unable to do so? What share of state program operation costs, if any, should the federal government cover? Which level of government is accountable for ensuring that state performance is adequate, and for taking action if it is not? Is uniformity of enforcement important, and if so, how can it be achieved? State implementation was an option under each of the five standard-setting mechanisms we examined. We discuss this option with respect to each mechanism, and include a summary discussion of its advantages and limitations in the final section of the report. Under minimum federal standards, the federal government, through statutory or regulatory means, establishes a minimum national standard that preempts less stringent and conflicting state laws and regulations. Minimum standards are often designed to provide a baseline of consumer protection in areas such as environmental protection, health care, food supply, vehicle safety, and working conditions. This mechanism supports the achievement of a national objective while at the same time permitting states that wish to set higher standards to do so. States typically participate in enforcing the federal regulations, as well as any regulations of their own, and share in the cost. Grant programs or other forms of support can also be used to direct state efforts toward federal regulatory purposes. Under this mechanism, the grant or other instrument requires recipients to perform federally specified regulatory or enforcement activities as a condition of eligibility to receive support. These requirements apply only in states that voluntarily accept the support. However, if the grant in question is a significant source of funds for a state, then nonparticipation may not be a practical alternative. Historically, conditions of the type described above have been incorporated in federal grants to states that focused on a particular purpose and population—termed categorical grants. Such grants also included administrative and reporting requirements to help ensure both financial and programmatic accountability. Categorical grants can be contrasted to block grants, which are aimed at achieving a broad national purpose, afford states considerable flexibility, and have limited administrative and reporting requirements. In practice, the line between “categorical” and “block” grants has become blurred, and many programs include features of both.
We examined several recent examples that illustrate how regulatory components aimed at directing states’ efforts toward specific national objectives have been incorporated into grants that otherwise give states the broad flexibility of a block grant. The Synar Amendment to the Public Health Service Act (Synar Amendment) illustrates the use of grant conditions to induce states to have and enforce laws consistent with a federal regulatory purpose—restricting access to tobacco by underage youth. In another example, the Temporary Assistance for Needy Families (TANF) block grant illustrates the use of performance-oriented federal regulatory provisions in a program that otherwise gives states new flexibility in welfare program operation. Finally, the Elementary and Secondary Education Act (ESEA) amendments of 1994, Public Law 103-382, exemplify an effort to achieve comparably challenging standards nationwide by requiring each state that accepts a Title I ESEA grant to set and enforce its own standards. The Synar Amendment, passed in 1992, added regulatory conditions to the Substance Abuse Prevention and Treatment (SAPT) block grant with the national objective of reducing underage youths’ access to tobacco products. In order to receive a SAPT block grant, a state must have and enforce a law prohibiting the sale or distribution of tobacco products to any individual under the age of 18. The state is required to report annually on enforcement activities and on the extent to which the availability of tobacco products to underage youth has been reduced. A state’s grant funds can be reduced if the state fails to meet a target compliance rate negotiated with the Department of Health and Human Services (HHS). The use of the grant mechanism to regulate underage access to tobacco reflects the status of tobacco regulation at the time the Synar Amendment was passed. HHS had authority, through the SAPT block grant, to fund activities aimed at preventing abuse of alcohol and other drugs. Adding the Synar Amendment requirements to the grant enabled the Congress to make use of existing state authority to ensure that states’ substance abuse prevention activities were directed toward achieving a particular national public health objective. In 1996, welfare reform legislation, known as the Personal Responsibility and Work Opportunity Reconciliation Act, Public Law 104-193, replaced previous assistance programs with a single block grant called Temporary Assistance for Needy Families (TANF). TANF was expressly intended to increase states’ flexibility in welfare program operation. TANF gives states broad flexibility to determine eligibility, methods of assistance, and benefit levels as long as funds are directed to achieving the purposes of the legislation. Unlike the classic block grant, however, TANF couples this flexibility with federal regulatory provisions that states must apply, such as a 60-month limit on a parent’s receipt of assistance. TANF also includes accountability requirements that link state performance to the purposes expressed in the legislation, among them detailed, results-oriented state reporting requirements; financial penalties for failure to submit timely reports, meet certain financial requirements, or achieve minimum work participation rates; and bonuses for performance. These requirements, like those in the Synar Amendment, are similar to those that states must meet under preemptive regulatory programs.
However, there is an important difference between federal fixed and minimum standards and those mechanisms based solely on assistance. Federal fixed and minimum standards apply to and must be implemented in every state, with the federal agency implementing the program directly if the state does not do so. Regulatory conditions imposed by means of acceptance of grants or other forms of assistance apply only to states that accept the assistance. If a state elects not to participate in the grant program, the federal standards contained in the grant do not apply in that state and the federal agency that administers the grant program does not step in to implement them. Thus, the condition-of-assistance mechanism may lead to gaps in coverage. Federal interest in avoiding such gaps gives states some leverage to negotiate for acceptable conditions or for limiting the existence and application of federal sanctions. Amendments to Title I of the Elementary and Secondary Education Act (ESEA) of 1994 illustrate the use of grant conditions to induce states to establish their own standards in the interest of achieving a national objective. That objective is to ensure that students served through the grant (which is targeted to the disadvantaged) are offered the same challenging content as students in the state generally and are held to the same performance standards. Under the 1994 law, states that received grant funds were required to develop and implement challenging content standards that apply to all students; develop assessments aligned to those standards; and, based on these assessments, develop procedures for identifying and assisting schools that fail to make adequate progress toward helping students meet these standards. The notable feature here is that while the requirement to develop and implement standards was federal and induced states to adopt a federally designated approach to school reform, the standards themselves were to be set by each state. There was no expectation of national uniformity and no federal minimum—only the criterion that every state’s standards should, in its own judgment, be “challenging.” Similarly, the legislation included federal accountability requirements, reflecting concern that federal funds spent on education had not sufficiently narrowed the gap between disadvantaged students and others in the past. However, it provided for each state to set its own definition of what constitutes “adequate yearly progress” (AYP), which is key to identifying low-performing schools and districts that are targeted for improvement. Experience to date with these grant provisions illustrates the dilemmas—from the federal perspective—of relying on state-developed standards. While nearly all states had established content standards by January 2001, outside groups that reviewed these standards observed that they varied considerably in clarity and specificity and that some could not be considered rigorous. In addition, states differed in how they defined and measured “low-performing schools.” This led to substantial differences in the numbers and percentages of schools identified as needing improvement, such that schools with comparable levels of student performance could be targeted for improvement in one state but not in another. These variations directly reflect the kinds of flexibility that were built into the Title I legislation. However, the variations—and states’ slowness in devising adequate assessments—generated concerns.
The ESEA reauthorization in the winter of 2002 incorporated new or expanded requirements for the Title I program, many of them aimed at strengthening accountability for results. In addition to requiring states that accept grant funds to conduct annual assessments in mathematics and reading or language arts in grades 3 through 8 by the 2005-2006 school year, the law specifies how states must define AYP, details the steps that states and local education agencies must take with respect to schools that fail to make adequate progress, and lists the options they must offer to students in such schools. The law also requires states to develop a plan to ensure that, by the end of the 2005-2006 school year, all teachers teaching core academic subjects within the state are highly qualified. Although it significantly expanded the federal role in education, the 2001 legislation also acknowledges the state role. It prohibits federal mandates, direction, or control over a state, local education agency, or school’s instructional content, academic achievement standards and assessments, curriculum, or program of instruction, and it gives states and school districts greater flexibility in how they use federal funds. In addition, the law establishes a negotiated rulemaking process, directing the Secretary of Education to obtain advice and recommendations from state and local administrators, board members, education professionals, parents, and others involved in implementation before issuing proposed federal regulations for the program. We found examples within the Food and Drug Administration (FDA) of programs in which national standards for food safety are developed by agency and state officials acting together. The general mechanism is a cooperative body that develops proposed standards. Those that are approved are incorporated as guidance to states in carrying out inspection and enforcement procedures. Such nonbinding guidance does not preempt state law or have the binding force of federal law or regulation. States conduct enforcement activities under their own authority, and FDA provides training, program evaluation or audits, and technical assistance to state agencies. Within this general design, there are variations. For example: In the Retail Food Protection Program, guided by FDA’s Food Code, the standards development body is the Conference for Food Protection, a nonprofit organization that brings together federal, state, and local regulators, academics, and representatives of industry and consumer groups. The conference submits recommendations on Food Code issues to the FDA; the FDA then reviews the recommendations and either accepts them or returns them for further discussion. States are encouraged (but not required) to adopt the Food Code as the basis for their own regulation of retail food establishments such as grocery stores, restaurants, cafeterias, and vending machines. Adoption by a significant number of jurisdictions generally has taken 3 to 5 years. The National Shellfish Sanitation Program (NSSP) reflects policies developed by the Interstate Shellfish Sanitation Conference (ISSC), whose members represent states, the industry, and several federal agencies (FDA, EPA, and the National Marine Fisheries Service). All representatives participate in developing standards, but only the states vote in the general assembly, and FDA must sometimes compromise to get an issue approved or accept defeat of its proposals.
FDA must concur with ISSC’s proposed policy changes before they are incorporated into the program’s catalogue of safety procedures, referred to as the model ordinance. States agree to enforce the requirements of the model ordinance through their participation in the NSSP and ISSC. The FDA conducts program audits to ensure compliance with NSSP policy and applicable federal regulations, but its oversight activities are subject to resource, data, and other limitations. The cooperative programs are unlike others within FDA in that they reflect FDA’s statutory authority under the Public Health Service Act, which directs FDA to assist states in the prevention of communicable diseases and advise them on the improvement of public health. Under the cooperative program mechanism, as under the grant mechanism, states have the primary responsibility and authority for implementing federally approved standards—and a key role in framing them as well. Adoption of the standards is voluntary unless states have bound themselves to adopt, as in the shellfish program. There are two major drawbacks to this mechanism from a federal perspective. First, voluntary adoption does not necessarily provide nationwide application of a common standard, as some states may choose not to adopt. Second, the federal agency’s limited role gives it little leverage over states that do not adequately protect their citizens. There is the added challenge of how to apply crosscutting food safety regulations such as the Hazard Analysis and Critical Control Points (HACCP) process control system, which the Department of Agriculture now requires for meat and poultry processing. FDA has mandated HACCP for all seafood production, including molluscan shellfish. Although seafood retailers are exempt from the HACCP regulations, the 1997 edition of the Food Code encourages them to apply HACCP-based food safety principles. We found federal-state cooperation in framing highway design standards as well. Through the National Cooperative Highway Research Program (NCHRP), the Department of Transportation’s (DOT) Federal Highway Administration (FHWA) cooperates with the American Association of State Highway and Transportation Officials (AASHTO)—an organization of state officials in which DOT is a nonvoting member—to support highway research. Drawing on the research results and on task force efforts, AASHTO produces manuals, guidance, and specifications regarding highway design, safety, maintenance, and materials. FHWA supports the cooperatively produced materials. In contrast to the FDA, FHWA does not itself issue the guidance documents—they are published by AASHTO and incorporated into federal regulations by reference. Thus, the highway design example falls on the border between a cooperative program and our next category, reliance on standards produced by nonfederal entities. Our discussion thus far has focused on regulatory standards that are developed by the federal government itself or in cooperation with states. However, a variety of other entities also develop standards and model ordinances covering subject areas within federal and state regulatory authority. Some of these entities focus on producing model state laws or regulations. When adopted by a sufficient number of states, these standards may provide a uniform approach and virtually national coverage without federal regulation.
In addition, numerous private organizations such as Underwriters Laboratories and the National Fire Protection Association set national or international standards for a given material, product, service, or practice. These standards are available for voluntary adoption by industry, states, or federal agencies. When incorporated into a U.S.-ratified treaty or adopted by a federal agency such as OSHA, these externally developed standards have the status of federal law. State officials long ago recognized that certain areas within their jurisdiction would benefit from a uniform approach. The National Conference of Commissioners on Uniform State Laws (Uniform Law Commissioners, or ULC), a nonprofit unincorporated association consisting of commissioners appointed by the states and supported by state appropriations, has worked for uniform laws since 1892. The ULC drafts uniform or model state acts in subject areas in which uniformity will produce significant benefits (such as facilitating commerce across state lines through the Uniform Commercial Code) or will avoid the disadvantages that arise from diversity of state laws (such as the Act on Reciprocal Enforcement of Support). While the ULC generally avoids taking up areas in which no legislative experience is available or that are controversial among the states, it does address emergent needs. For example, ULC proposed model laws on electronic signatures and health care privacy before there was federal legislation on these subjects. Once a uniform or model law is drafted, commissioners take it back to their states for consideration. Some (including the model electronic signatures law) have been adopted by most states. Others (such as the model health information law) have been adopted by relatively few. Implementation is left to each state. There is no federal role unless the Congress determines that federal legislation on the subject is needed. The same is true when states adopt standards developed through private standards development organizations. Uniform state laws or regulations are also developed by entities that address a particular regulatory area, such as insurance. The National Association of Insurance Commissioners (NAIC), an organization of insurance regulators from the states, is such an entity. It was founded to address the need to coordinate regulation of insurers that operate in a number of states. The NAIC develops model laws, regulations, and guidelines and reviews the activities of state insurance departments as part of its accreditation program. Model laws have addressed issues such as capital and surplus requirements and risk limitation. The NAIC’s model regulation that sets minimum standards for Medicare supplemental insurance policies (known as “medigap” policies) has been incorporated by reference into federal Medicare legislation and regulations. The Gramm-Leach-Bliley Act of 1999 (P.L. 106-102) involves NAIC in a different way. That act, which deals with the financial services industry, encourages states to adopt uniform laws and regulations governing licensure of individuals and entities authorized to sell insurance within the state and providing for cross-state reciprocity in licensure. The act directs NAIC to determine whether at least a majority of the states have achieved uniformity within 3 years of the legislation’s enactment. This target was met—by January 2002 the model act adopted by NAIC had been adopted by 39 states. 
If the target had not been met, the Act specified that a new nonprofit corporation, subject to NAIC supervision, be established to provide for state adoption of uniform insurance licensing laws.

In the United States, private sector standards are the product of a decentralized, largely self-regulated network of more than 620 private, independent standards-development organizations and testing laboratories. A private nonprofit organization, the American National Standards Institute, establishes rules for developing standards on the basis of the consensus of the parties represented in the technical committees. The federal government directs agencies to use standards developed through this system except where inconsistent with law or otherwise impractical, and it encourages them to participate where appropriate in standards-setting organizations. The Occupational Safety and Health Act contains similar direction, and OSHA and other federal regulatory agencies have incorporated privately developed standards into their own agency regulations.

Hazards addressed through federal-state regulation in the United States may also be of international concern and become the subject of international agreements. For example, criteria for classifying dangerous chemicals in transportation have been internationally harmonized through the United Nations' Recommendations on the Transport of Dangerous Goods. DOT uses these criteria in developing U.S. HAZMAT regulations, which in turn are translated into state regulations for HAZMAT transportation as discussed previously in this report. Similarly, the FDA works with the Codex Alimentarius Commission, an international food standard-setting organization, thus helping ensure consistency of the Food Code (which states can adopt) with international standards. Thus, regulation through the mechanisms discussed above serves to align state as well as federal standards to those set internationally.

Our review indicates that regulations or standards consistent with federal objectives can be formulated through a variety of mechanisms and implemented through various combinations of state and federal efforts. Each standard-setting mechanism offers different advantages and limitations, as do the various patterns of implementation. We discuss these advantages and limitations in terms of federal-state balance and in terms of operational challenges. Drawing on this discussion, we suggest how findings from our review could guide decisions regarding future programs.

The standard-setting mechanisms we reviewed can be compared in terms of factors that the U.S. Advisory Commission on Intergovernmental Relations and other students of federalism have considered to be key in examining issues of federal-state balance. As the body of literature from these authors suggests, the factors apply on a case-by-case basis, taking into consideration the particular national objective concerned and circumstances relevant to its achievement. Key factors include:

Uniformity: Does this mechanism provide uniform standards and nationwide coverage if essential to the national objective?

Flexibility: Does it allow for flexibility where appropriate to that objective?

Capacity: Does the mechanism assign responsibility appropriate to each level of government's capacity to do the job at hand, taking into account breadth of jurisdiction, enforcement powers, resources, and location?
Accountability: Can accountability to the federal government be incorporated into this mechanism if essential to achieving the national objective?

Table 4 compares the five mechanisms in terms of these factors. This presentation reveals more clearly how the mechanisms differ in terms of the factors that a policymaker may consider critical to a particular objective. The table also highlights program design choices that can be made within each mechanism. For example, while flexibility is inherently limited under federal fixed standards, grant conditions can be written to give as much or as little flexibility as is appropriate to the federal objective concerned. Although direct implementation by a federal agency can be advantageous in certain situations, this approach presents its own set of challenges and limitations. In this study, we focused on the operational challenges that arise in shared federal-state implementation.

First, shared implementation raises delicate issues of federal-state agency relations, oversight, and accountability. Legislators and agencies may have difficulty finding a level of oversight that is sufficient to protect against the harm that could come from inadequate state action while providing states the authority and flexibility needed to do the job effectively. Oversight tools such as the performance incentives and sanctions illustrated in our discussion of federal minimum standards programs can be designed with this purpose in mind. Another approach is the use of performance partnerships.

Second, while overall resource adequacy may be an issue under any pattern of implementation, reliance on states to implement federal standards also raises questions about allocating costs between the federal government and the states. If federal funds are provided, the issue of fiscal substitution (use of federal funds to replace state funds) may also arise. Options for addressing these issues include the following:

The state's share can be preserved through the use of fiscal provisions such as maintenance of effort or matching requirements.

The federal share may be provided through grants to participating states or by permitting states to retain payments generated through program operation and enforcement.

Grant payments may be "up to" a specified percentage of state program cost, but actual payments depend on funds available and have sometimes been substantially less.

If both levels of government participate in administering federal regulations, both levels contribute toward the cost. However, if the state does not participate, the federal agency administers the program at no cost to the state, which leads to a third challenge.

The third challenge is that implementation arrangements that give the federal agency a back-up role can leave it vulnerable to sudden increases in responsibility and costs. This can happen when states drop their participation, as has happened in the OSHA and Meat and Poultry programs. It can also happen when states are judged by the agency to have failed to meet their responsibilities. The federal government may also bear the cost of enforcement temporarily when new provisions need enforcement before states are ready to assume this responsibility. We saw examples of each of these circumstances in the programs we reviewed.

Fourth, shared implementation tends to produce variation in program implementation because states' approaches may differ from each other and from the federal agency's.
For example, states may prefer to emphasize assistance while the federal agency relies more on enforcement actions to induce compliance. The variation may be appropriate and reflect a need for flexibility in light of differing conditions and to target limited resources to the problems that pose the greatest risk. If variation is not deemed appropriate—for example, if the national objective requires that enforcement actions as well as standards be uniform—federal requirements and oversight can be strengthened to provide uniformity.

Finally, change can be cumbersome under federal-state implementation. Every time a federal statute or regulation changes, each state must make a corresponding change to its own statute or regulation before it can implement the new provision. This can lead to substantial delays, and states have observed that frequent change can become a burden. For example, the Association of Food and Drug Officials has noted the difficulty of amending regulations to keep up with changes in the Food Code every two years.

Our review led us to conclude that setting up a regulatory program involves three stages of decision making and to develop questions to guide those decisions based on the observations summarized above. The three stages are identifying the national regulatory objective and reviewing pertinent background information, selecting a standard-setting mechanism appropriate to that objective, and designing appropriate federal and state roles in implementation. The last two stages are intertwined. For example, cooperative standard setting or reliance on states' adoption of externally set standards usually means little or no federal role in implementation. However, other mechanisms leave considerable choice with respect to implementation arrangements. We illustrate this overall decision process, as guided by questions reflecting key factors, below.

The national objective provides the starting point for selecting the mechanism for enlisting state efforts toward that objective. Assuming that the objective itself is consistent with the Constitution, factors to be considered include:

the nature of the hazard or practice to be regulated, for example, how widely it is distributed geographically and whether it is cross-state in nature, the risks it poses, and whether protection against these risks is needed immediately or within a period of years;

existing federal statutory authority and capacity that could form the basis for setting and implementing standards;

the extent to which state or other standards and enforcement are already in place and the resources and capacities available to support them; and

the resources and capacities that are likely to be needed to formulate and implement or enforce new standards in this area.

This background information can be expressed in the form of questions that will help in assessing the extent to which federal action is or is not needed and what form it might take (see figure 1). For example, the information may indicate that states are already handling the problem. The review of existing statutory authority will help policymakers determine whether new authority would be needed to establish federal standards. Finally, background information will provide a foundation for examining the objective in terms of federal-state balance factors and for proceeding to consider the choice of standard-setting mechanism.
The discussion and figures that follow assume that policymakers have concluded that federal action is warranted and are contemplating designing a new program or rethinking an existing program. For the next stage of decision making, selection of a mechanism for pursuing the national objective, we depict the decision process as a series of questions or gates in order to make explicit what are often implicit considerations in decision making (see figure 2).

We start with the question of whether—pursuant to the national objective in question—federal fixed or minimum standards would be acceptable in terms of federal-state balance. Our presentation does not imply that federal standards are the best choice but only that if they raise difficult issues consideration must move immediately to other options. If federal standards would likely be unacceptable, the next question (to the right on figure 2) is whether uniform regulations and nationwide coverage are essential to attaining the national objective. If not, policymakers may explore what could be achieved through state adoption of externally developed standards or by cooperating with states to set voluntary standards. This exploration should bear in mind that these mechanisms rely wholly on states for implementation and may not provide for central monitoring and uniform reporting. It is important to review the potential need for these practices and how they could be provided in the absence of direct federal oversight.

If uniformity and nationwide coverage are essential, incorporating a federal standard into grant conditions could enlist the efforts of nearly all states. The next step is to consider whether federal minimum standards—which provide a baseline of protection but also allow variation from state to state above the minimum—or fixed federal standards would best achieve the national objective in question. For purposes of illustration (one could start with either option), our diagram first asks whether minimum standards would be appropriate. If so, and if that objective does not demand full national coverage, each of the alternative mechanisms would again be an option. However, if national coverage were essential, federal minimum standards would be the mechanism of choice. If federal standards are allowable and minimum standards are not appropriate—or if a common, unvarying nationwide standard is essential to attainment of the objective—fixed federal standards and the possible need to allow waivers should be considered. In considering the coverage needed for the standards to be effective, it is useful to think in terms of sector coverage as well as geographic scope. As the ERISA health plan example illustrates, uniformity will not be attained if standards cover only the federally regulated portion of a divided sector.

It may happen that when all mechanisms have been considered, none seems truly appropriate. Such an outcome suggests that something has been missed along the way and that it would be useful to gather additional information and to revisit earlier steps in the decision process. The final step is to ensure that the mechanism chosen and the purpose are consistent with Congress's authority to regulate under the Constitution. Fixed federal standards and minimum federal standards offer a choice between (1) direct federal implementation, (2) assumption of implementation responsibility by all states, and (3) assumption by some states, with direct federal administration in others.
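To make the gate logic concrete, the sketch below encodes the mechanism-selection questions from figure 2 as a small decision function. The question order and the mechanism labels follow the discussion above, but the function, its parameter names, and the boolean inputs are illustrative assumptions, not part of the report's framework.

```python
# Illustrative sketch of the mechanism-selection "gates" in figure 2.
# The questions and outcomes mirror the report's discussion; the function,
# its parameter names, and the inputs are hypothetical.

def select_mechanism(federal_standards_acceptable: bool,
                     uniformity_essential: bool,
                     minimum_standards_appropriate: bool,
                     national_coverage_essential: bool) -> str:
    if not federal_standards_acceptable:
        if uniformity_essential:
            # Grant conditions can enlist the efforts of nearly all states.
            return "federal standard incorporated into grant conditions"
        return ("state adoption of externally developed standards, or "
                "cooperative voluntary standard setting with the states")
    if minimum_standards_appropriate:
        if national_coverage_essential:
            return "federal minimum standards"
        return "any of the alternative mechanisms remains an option"
    # A common, unvarying nationwide standard is essential.
    return "fixed federal standards (consider the need to allow waivers)"

print(select_mechanism(federal_standards_acceptable=True,
                       uniformity_essential=True,
                       minimum_standards_appropriate=True,
                       national_coverage_essential=True))
# -> federal minimum standards
```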
Whenever implementation by states is selected (under federal standards or through grants), there are choices to be made regarding the design of accountability, funding, and flexibility provisions. We now discuss factors to be considered in selecting and designing implementation options, as illustrated in figure 3. In this figure, we begin the decision process by asking whether the national objective and related considerations suggest a need for direct federal administration. Direct federal administration might be appropriate when:

centralized accountability and central direction are critical given the nature of the hazards that the standards address;

states do not currently have, and the federal government has or can develop, the capacity needed to operate the program;

uniformity of implementation will enhance or variance will undermine the effectiveness of the regulatory approach; and

state and local involvement is not critical to achieving the objective.

While we did not study in depth the option of direct federal administration in the regulatory area for this report, experience certainly suggests that this option has its own set of challenges and limitations. For example, the federal government may not have the personnel in place to carry out a program of national concern while state and local governments may have sufficient staff with the right kind of expertise to provide the needed services. Another challenge with this approach is overcoming any reluctance of state and local governments to accept the dictates of the federal government in a given policy area. It is also important to note that direct implementation and enforcement by a federal agency is not a self-executing decision and that there could be design and implementation challenges that might prove sufficiently problematic as to require rethinking the decision to use the direct federal administration option.

If direct federal administration is not essential, the next question for a federal regulatory program is whether assumption of implementation responsibility by all states is desirable and is feasible in terms of their capacity. This question arises both under federal standards programs and under grants or other forms of support. The background information mentioned in figure 1 will likely be of assistance here, but additional inquiry may be needed to ascertain states' capacity to implement the standards and their likely willingness to do so. If assignment to all states appears feasible, the next step is to consider more detailed questions of design in areas such as federal-state accountability arrangements, funding, and flexibility that arise under this option, and to take follow-up action as needed. Note that while our figure shows only actions for clear "yes" or "no" answers, in reality both the answer and the appropriate follow-up may fall in between or be a mix of the actions shown.

If many states, but not all, are prepared to accept responsibility for implementation of federal standards, the first step might be to consider actions to increase capacity in states not currently qualified, so as to be able to enlist participation by all states in implementation at some future point. The other option would be to consider inducing currently willing and qualified states to assume responsibility, as in the OSHA program or the meat and poultry inspection program. To ensure national coverage, the program design will need to provide for direct administration by the federal agency in the remaining states.
We suggest that the design provide for an orderly transition in case of state withdrawal from participation. Again, the next step will be to consider the various design questions. The funding question is of particular importance for any approach that relies largely on financial inducements for state participation. The remaining two regulatory mechanisms—cooperative programs and state adoption of externally set standards—rely solely on states to implement standards under their own authority. States are not accountable to the federal government and the federal agency does not oversee their activities, although it may perform monitoring functions such as collecting and reporting performance data. Because the federal role is so limited, the design questions we have listed for shared-implementation approaches are not directly applicable. However, the accountability and flexibility questions can be adapted to this context. For example, some purely state regulatory programs include provision for monitoring and oversight by a central body, such as the NAIC. The accountability questions could be applied to its functions. This study of a broad range of existing programs illustrates the rich variety of ways in which the federal government and the states can work toward achieving shared regulatory objectives. Each variation reflects circumstances and sensitive issues specific to the program concerned, and each program is unique in some way. But comparative analysis reveals both underlying features of program design and trade-offs between the various options available. Explicitly considering these features and trade- offs could help guide decisions about how to structure future federal-state regulatory programs. The decision framework we have developed displays the range of options available, identifies the major choice points in the decision process, and alerts policymakers to trade-offs and key follow-up actions associated with each choice. The framework is a neutral tool and does not favor any particular program design option or division of federal and state responsibilities. Rather, it is intended to help policymakers select a program design in keeping with the regulatory objective they seek to attain. As agreed with your office, we are sending copies of this report to appropriate congressional committees and other interested parties. We will also make copies available to others upon request. If you or your staff have any questions about this report, please contact me on (202) 512-9573 or Thomas James on (202) 512-2996. Individuals making key contributions to this report included Gail MacColl, Andrea Levine, Thomas Phan, and Mary Reintsma. | Both federal and state governments exercise regulatory authority in many of the same policy areas. In enacting new legislation in these shared areas, Congress must provide federal protections, guarantees, or benefits while preserving an appropriate balance between federal and state regulatory authority and responsibility. State efforts can be directed toward federal or nationally shared regulatory objectives through various arrangements, each of which reflects a way to define and issue regulations or standards and assign responsibility for their implementation or enforcement. 
Regulatory and standard-setting mechanisms for achieving nationwide coverage include (1) fixed federal standards that preempt all state regulatory action, (2) minimum federal standards that preempt less stringent state laws but permit states to establish more stringent standards, (3) the inclusion of federal regulatory provisions in grants or other forms of assistance, (4) cooperative programs in which voluntary national standards are formulated by federal and state officials working together, and (5) widespread state adoption of voluntary standards formulated by quasi-official entities. The first two of these mechanisms involve preemption; the other three represent alternative approaches. Each represents a different combination of federal and state regulatory authority. The mechanisms also offer different options for implementation or enforcement. Furthermore, each standard-setting mechanism offers advantages and disadvantages that reflect the key considerations of federal-state balance in the context of a given national regulatory objective. Shared implementation involves several operational challenges, such as finding the appropriate level of federal oversight, allocating costs between the federal government and the states, potentially increasing the vulnerability of federal agencies to sudden increases in responsibilities and costs, handling variations in implementation from state to state, and adjusting to the new federal-state balance.
You are an expert at summarizing long articles. Proceed to summarize the following text:
Mortgage servicers are the entities that manage payment collections and other activities associated with home loans. Mortgage servicers can be large mortgage finance companies, commercial banks, or nondepository institutions. Servicing duties can involve sending borrowers monthly account statements, answering customer-service inquiries, collecting monthly mortgage payments, and maintaining escrow accounts for property taxes and insurance. In the event that a borrower becomes delinquent on loan payments, servicers also initiate and conduct foreclosures. Errors, misrepresentations, and deficiencies in foreclosure processing can result in a number of harms to borrowers ranging from inappropriate fees to untimely or wrongful foreclosure. Several federal regulators share responsibility for regulating the banking industry in relation to the origination and servicing of mortgage loans. OCC has authority to oversee nationally chartered banks and federal savings associations (including mortgage banking activities). The Federal Reserve oversees insured state-chartered banks that are members of the Federal Reserve System, bank and thrift holding companies, and entities that may be owned by federally regulated depository institution holding companies but are not federally insured depository institutions. The Federal Deposit Insurance Corporation (FDIC) oversees insured state-chartered banks that are not members of the Federal Reserve System and state-chartered savings associations. Finally, CFPB has the authority to regulate mortgage servicers with respect to federal consumer financial law. CFPB has also entered into a memorandum of understanding with prudential regulators—specifically the Federal Reserve, FDIC, OCC, and the National Credit Union Administration—that governs their responsibilities to share information and coordinate supervisory activities so as to effectively and efficiently carry out their responsibilities, decrease the risk of conflicting supervisory directives, and increase the potential for alignment of related supervisory activities. OCC designates each national bank as a large, mid-size, or community bank. The designation is based on the institution's asset size and whether other special factors affect its risk profile, such as the extent of asset management operations, international activities, or high-risk products and services. Large banks are the largest and most complex national banks and are designated by the Senior Deputy Comptroller for Large Bank Supervision. Mid-size banks may be designated as large banks at the discretion of the Deputy Comptroller for Midsize and Credit Card Banks. For the servicers it oversees, the Federal Reserve assigns supervisory responsibility to the responsible Federal Reserve Bank, which in turn assigns a central point of contact to each servicer. The contact leads an examination team with responsibility for continually monitoring activities, conducting discovery examinations designed to improve understanding of a particular business activity or control process, and testing whether a control process is appropriately designed and achieving its objectives. In September 2010, allegations surfaced that several servicers' documents in support of judicial foreclosure may have been inappropriately signed or notarized.
In response to this and other servicing issues, federal banking regulators—OCC, the Federal Reserve, the Office of Thrift Supervision, and FDIC—conducted a coordinated on-site review of 14 mortgage servicers to evaluate the adequacy of servicers' controls over foreclosure processes and to assess servicers' policies and procedures for compliance with applicable federal and state laws. Through this coordinated review, regulators found critical weaknesses in servicers' foreclosure governance processes; foreclosure documentation preparation processes; and oversight and monitoring of third-party vendors, including foreclosure attorneys. On the basis of their findings from the coordinated review, OCC, the Office of Thrift Supervision, and the Federal Reserve issued in April 2011 formal consent orders against 14 servicers under their supervision (see fig. 1). Subsequently, the Federal Reserve issued similar consent orders against two additional servicers. These consent orders were intended to ensure safe and sound mortgage-servicing and foreclosure-processing activities and help address weaknesses with mortgage servicing identified during the reviews. To comply with the consent orders, each of the 16 servicers is required to, among other things, enhance its vendor management, training programs and processes, and compliance with all applicable federal and state laws, rules, regulations, court orders, and servicing guidelines. In addition, as a result of the consent orders, the Federal Reserve issued civil money penalties against some of the servicers and provided that the penalty amounts could be remitted by federal payments made and borrower assistance provided under the National Mortgage Settlement or by providing funding to housing counseling organizations. OCC also considered civil money penalties against the servicers it regulates, and for four servicers that were also party to the National Mortgage Settlement, OCC reached an agreement that civil money penalties would be assessed if the servicer did not satisfy the requirements of the formal consent orders or their respective obligations under the National Mortgage Settlement. The consent orders also required each servicer to retain an independent consultant to review certain foreclosure actions on primary residences from January 1, 2009, to December 31, 2010, to identify borrowers who suffered financial injury as a result of errors, misrepresentations, or other deficiencies in foreclosure actions, and to recommend remediation for borrowers, as appropriate. In general, the consent orders identified seven areas for consultants to review:

1. whether the servicer had proper documentation of ownership of the loan;
2. whether the foreclosure was in accordance with applicable state and federal laws;
3. whether a foreclosure sale occurred while a loan modification was pending;
4. whether nonjudicial foreclosures followed the terms of the loan and state law requirements;
5. whether fees charged to the borrower were permissible, reasonable, and customary;
6. whether loss-mitigation activities were handled in accordance with program requirements and policies; and
7. whether any errors, misrepresentations, or other deficiencies resulted in financial injury to the borrower.

To review these areas, consultants generally segmented their file review activities to test for each area of potential error separately.
As a result, a borrower's loan file might have undergone multiple reviews for different potential errors before the results of each of the review segments were compiled and the file review was considered complete. Loans were identified for review through a process by which eligible borrowers could request a review of their particular circumstances (referred to as the request-for-review process) and through a review of categories of files considered at high risk for errors (referred to as the look-back review). Regulators required servicers to establish an outreach process for eligible borrowers who believed they might have been harmed due to errors in the foreclosure process to request a review of their particular circumstances. Consultants were expected to review all of the loans received through the request-for-review process. For the look-back review, regulators required consultants to review 100 percent of all files in three categories—borrowers in bankruptcy in which a completed foreclosure took place, loans potentially subject to the protections provided by the Servicemembers Civil Relief Act (SCRA), and agency-referred foreclosure cases—that were identified as at high risk for servicing or foreclosure-related errors during the regulators' 2010 coordinated reviews. Consultants for Federal Reserve-regulated servicers were also required to review 100 percent of files in two other categories determined to be high risk—borrowers with pending modification requests and borrowers current on a trial or permanent modification. In addition, as each servicer had a unique borrower population and servicing systems, consultants, with examination teams' input, were expected to identify various high-risk loan categories appropriate to their servicer—such as loans in certain states or loans associated with certain foreclosure law firms—that could be associated with a higher likelihood of servicing or foreclosure-related errors and review a sample of those loans. Beginning in January 2013, OCC and the Federal Reserve announced that they had reached agreements with 15 of the 16 servicing companies to terminate the foreclosure reviews and replace the reviews with a payment agreement (as previously shown in fig. 1). Under these agreements, servicers agreed to provide compensation totaling approximately $10 billion, including $4 billion in cash payments to eligible borrowers and $6 billion in foreclosure prevention actions. These amounts were generally divided among the 15 participating servicers according to the number of borrowers who were eligible for the foreclosure review at the time the amended orders were negotiated such that the total per-servicer amount ranged from $16 million to $2.9 billion (see table 1). For the majority of servicers, the amended consent orders ended an approximately 20-month file review process. Although consultants were at various stages of completing the reviews when the work was discontinued, the amended consent orders underlined that regulators retained the right to obtain and access all material, records, or information generated by the servicer or the consultant in connection with the file review process. The amended consent orders did not affect the other aspects of the original consent orders—such as required improvements to borrower communication, operation of management information systems, and management of third-party vendors for foreclosure-related functions—and work to oversee servicer compliance with these other aspects continues.
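As a rough illustration of how the review population just described was assembled—every requested review, 100 percent of the high-risk look-back categories, and a sample of servicer-specific high-risk loans—consider the sketch below. The field names and the 10 percent sampling rate are invented for illustration; actual segmentation and sampling plans varied by servicer and consultant.

```python
import random

# Hypothetical sketch of assembling the file-review population: every
# request-for-review file, 100 percent of the three high-risk look-back
# categories, and a sample of servicer-specific high-risk loans.
def build_review_population(loans, sample_rate=0.10, seed=0):
    rng = random.Random(seed)
    population = []
    for loan in loans:
        if loan["requested_review"]:
            population.append(loan)            # reviewed in full
        elif (loan["bankruptcy_completed_foreclosure"]
              or loan["scra_protected"]
              or loan["agency_referred"]):
            population.append(loan)            # 100 percent look-back categories
        elif loan["servicer_high_risk"] and rng.random() < sample_rate:
            population.append(loan)            # sampled high-risk files
    return population

# Two toy loans: one requested review, one SCRA-protected look-back file.
loans = [
    {"requested_review": True, "bankruptcy_completed_foreclosure": False,
     "scra_protected": False, "agency_referred": False, "servicer_high_risk": False},
    {"requested_review": False, "bankruptcy_completed_foreclosure": False,
     "scra_protected": True, "agency_referred": False, "servicer_high_risk": False},
]
print(len(build_review_population(loans)))  # 2
```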
According to regulatory staff and documents, the estimated time it would take for borrowers to receive remediation and the mounting costs of completing the file reviews motivated the decision to amend the consent orders. As of December 2012, OCC staff estimated that remediation payments to borrowers would not start for many months and that completing the file review process could take, at a minimum, an additional 1 to 2 years, based on the number of files still to be reviewed and the extent of the work to be completed. The mounting costs of the file reviews also motivated the decision to terminate the file reviews for most servicers. As of August 2012, the collective costs for the consultants had reached $1.7 billion, according to OCC's decision memorandum. Based on the results of the reviews conducted by consultants through December 2012, regulators estimated that borrower remediation amounts would likely be small while the consultant costs to complete the reviews would be significant. As a result, OCC and Federal Reserve staff determined that completing the reviews to determine precisely which borrowers had compensable errors due to harm would have resulted in long delays in providing remediation payments to harmed borrowers. With the adoption of the amended consent orders, regulators and servicers moved away from identifying the types and extent of harm an individual borrower may have experienced and focused instead on issuing payments to all eligible borrowers based on identifiable characteristics. To determine the cash payment amount to be provided to each borrower, the majority of participating servicers categorized borrowers according to specific criteria. Fourteen of the servicers that participated in the amended consent order process, covering approximately 95 percent of the population of 4.4 million borrowers that were eligible for the foreclosure review process under the original consent orders, adopted this approach (see table 2). To categorize borrowers, regulators provided each servicer with a cash payment framework that included 11 categories of potential harms—including violation of SCRA protections and foreclosure on borrowers in bankruptcy—and generally ordered the categories by severity of potential harm. For each of the 11 categories in the cash payment framework, regulators identified specific borrower and loan characteristics that servicers then used to place all eligible borrowers into categories such that a borrower would be placed in the highest category for which he or she had the required characteristics. Regulators used the results of this categorization process as the basis for determining the payment amounts for each category. The payment amounts for all eligible borrowers for those 14 servicers ranged from several hundred dollars, where the servicer did not engage the borrower in a loan modification, to $125,000, plus equity and interest, where the servicer foreclosed on a borrower who was eligible for SCRA protection. One other servicer signed an amended consent order to terminate the file review process and provide cash payments to borrowers. In contrast to the other servicers that signed amended consent orders, this servicer had completed its initial file review activities and OCC used the preliminary file review results as the basis for determining payments to all eligible borrowers.
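The categorization rule described above—place each borrower in the highest (most severe) category for which he or she has the required characteristics—can be sketched as a short ordered scan. The framework's actual 11 categories and qualifying criteria are only partially described above, so the category names and predicates below are placeholders rather than the regulators' actual criteria.

```python
# Sketch of the "highest qualifying category" rule. The real framework had
# 11 categories ordered by severity; only two are named above (SCRA
# violations and foreclosure on borrowers in bankruptcy), so these
# predicates are placeholders.

CATEGORIES = [  # ordered from most to least severe
    ("SCRA-protected borrower foreclosed", lambda b: b["scra_protected"]),
    ("foreclosed while in bankruptcy",     lambda b: b["in_bankruptcy"]),
    ("no loan modification engagement",    lambda b: not b["mod_offered"]),
]

def categorize(borrower):
    for name, qualifies in CATEGORIES:
        if qualifies(borrower):
            return name  # the highest category the borrower qualifies for
    return "lowest category"

print(categorize({"scra_protected": False, "in_bankruptcy": True,
                  "mod_offered": True}))
# -> foreclosed while in bankruptcy
```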
The amended consent orders also required all 15 servicers to undertake a specified dollar amount of foreclosure prevention actions and submit those actions for credit based on criteria established by regulators. For 13 of the servicers, these actions are to occur between January 2013 and January 2015. The amended orders provided two methods for servicers to receive credit for foreclosure prevention actions. First, servicers could conduct loss-mitigation activities for individual borrowers, by providing loan modifications or short sales, among other actions. Regulators also specified that the actions taken under this method could not be used to satisfy other similar requirements, such as the foreclosure prevention requirement of the National Mortgage Settlement (discussed later). Second, servicers could satisfy their obligation by making cash payments to approved housing counseling agencies, among other actions. One servicer, OneWest Bank, did not elect to amend its consent order and terminate the file review process. The consultant for this servicer continues file review activities for a portion of the eligible population of 192,000 borrowers, as planned. According to OCC, in 2014, the servicer will provide remediation to borrowers based on findings of actual harm. In addition to the consent orders issued by OCC, the Office of Thrift Supervision, and the Federal Reserve, mortgage servicers have been subject to other actions designed to improve the provision of mortgage servicing by setting servicing standards. In February 2012, the Departments of Justice, Treasury, and Housing and Urban Development, along with 49 state Attorneys General, reached a settlement with the country's five largest mortgage servicers. Under the settlement, the servicers will provide approximately $25 billion in relief to distressed borrowers and the servicers agreed to a set of mortgage servicing standards. This settlement, known as the National Mortgage Settlement, established nationwide servicing reforms for the participating servicers, including establishing a single point of contact for borrowers, standards for communication with borrowers, and expectations for fee amounts and the execution of foreclosure documentation. The settlement also established an independent monitor to oversee the servicers' execution of the agreement, including their adherence to the mortgage servicing standards. CFPB also established new mortgage servicing rules that took effect in January 2014. Among other things, these rules established requirements for servicers' crediting of mortgage payments, resolution of borrower complaints, and actions servicers are required to take when borrowers are late in their mortgage payments. In addition to the National Mortgage Settlement, other recent settlements have required servicers to provide foreclosure relief to borrowers as a component of the agreement. In November 2013, the Department of Justice along with state Attorneys General for four states announced a settlement with JPMorgan Chase to provide $4 billion in foreclosure relief, among other actions, to remediate harms allegedly resulting from unlawful conduct. The settlement identified specific actions for which JPMorgan Chase would receive credit towards its obligation, including certain types of loan modification actions, lending to low- to moderate-income borrowers and borrowers in disaster areas, and activities to support antiblight programs.
Similarly, in December 2013, CFPB and 49 state Attorneys General and the District of Columbia announced a settlement with Ocwen Financial Corporation to provide $2 billion in relief to homeowners at risk of foreclosure by reducing the principal on their loans. Both settlements also assign an independent monitor to oversee the execution of the settlements, and the settlement with Ocwen requires the servicer to comply with the standards for servicing loans established in the National Mortgage Settlement. Regulators considered factors such as projected costs and potential remediation amounts associated with the file reviews to negotiate the $3.9 billion total cash payment under the amended consent orders. However, because the reviews were incomplete, these data were limited. According to Federal Reserve staff, OCC led the data analysis to inform negotiations, and the Federal Reserve relied on aspects of this work. Despite the uncertainty regarding the remaining costs and actual financial harm experienced by borrowers, regulators did not test the major assumptions used to inform negotiations. According to our prior work, testing major assumptions provides decision makers a range of best- and worst-case scenarios to consider and provides information to assess whether an estimate is reasonable. We compared the final negotiated cash payment amount to estimates we obtained by varying the key assumptions used in regulators’ analysis. Our analysis found that the final negotiated amount was generally within the range of different results based on alternative assumptions. Regulators established goals related to timeliness, the cash payment amounts, and the consistency of the treatment of borrowers and the distribution of payments. Regulators met their timeliness and amount goals and took steps to promote a consistent process, including providing guidance to examination teams and servicers. The cash payment agreement obligations under the amended consent orders were achieved through negotiations between regulators and participating servicers. According to OCC, staff engaged with six servicers in November 2012 to discuss a cash payment agreement. As previously discussed, the estimated time it would take for borrowers to receive remediation and mounting costs of completing the reviews motivated the cash payment agreement under the amended consent orders. Following initial discussions with these six servicers, regulators engaged in similar discussions with an additional eight servicers subject to the foreclosure review requirement, according to regulatory staff. The total negotiated cash payment amount for all 15 servicers that ultimately participated in amended consent orders was approximately $3.9 billion. Generally, each servicer’s share of the cash payment amount was determined based on its proportional share of the 4.4 million borrowers who were eligible for the foreclosure review. Regulators considered factors such as projected costs to complete file reviews and potential remediation amounts associated with the file reviews to inform negotiations with servicers. According to Federal Reserve staff, OCC led negotiations with servicers and the initial analysis of estimates that informed these negotiations. According to Federal Reserve staff, they participated in negotiations and relied on certain elements of OCC’s analysis to inform the Federal Reserve’s decisions regarding a payment agreement for the institutions they oversee. 
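Because each servicer's share was generally proportional to its share of the 4.4 million eligible borrowers, the underlying arithmetic is a simple pro-rata allocation, sketched below. The per-servicer borrower counts are invented for illustration; only the $3.9 billion total and the 4.4 million eligible population are given above.

```python
# Pro-rata arithmetic behind the per-servicer shares of the $3.9 billion
# cash payment. Borrower counts below are invented; in total, roughly
# 4.4 million borrowers were eligible.

TOTAL_CASH = 3.9e9
eligible = {"Servicer A": 2_200_000,   # hypothetical counts
            "Servicer B": 1_500_000,
            "Servicer C": 700_000}

total_borrowers = sum(eligible.values())
for name, count in eligible.items():
    share = TOTAL_CASH * count / total_borrowers
    print(f"{name}: ${share:,.0f}")
```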
To inform negotiations with servicers, OCC developed two estimates of servicers' costs: an estimate of the projected cost to complete the reviews and an estimate for the potential remediation payout to borrowers. Specifically, OCC staff said they used the cost estimate as a means of estimating what servicers might be willing to pay and the potential remediation payout as an early attempt to estimate potential harm and understand how funds would be distributed among borrowers. The final amount of $3.9 billion was negotiated between regulators and servicers and was higher than the estimates regulators used to inform negotiations. Projected cost to complete the reviews. According to regulatory staff and documents, OCC and the Federal Reserve relied on cost projections from consultants, which estimated that the remaining expected fees for consultants to complete the reviews would be at least $2 billion. In November 2012, consultants reported cost projections based on time frames ranging from as short as 4 months for one servicer to as long as 13 months for other servicers—that is, 4 to 13 months beyond November 2012—to complete reviews. Regulatory staff told us they also considered the amounts servicers had reserved to pay for potential remediation. Specifically, OCC included an estimate of the amount servicers had reserved to pay for potential remediation ($859 million), bringing the total estimated cost to complete the reviews had they not been terminated to approximately $2.9 billion ($2 billion to complete the reviews plus the $859 million in remediation reserves). According to regulatory staff and documents, the Federal Reserve relied on projected costs and remediation reserves provided by OCC to inform their decisions during negotiations. Potential remediation payout to borrowers. Using the aggregate financial harm error rate—that is, the financial harm error rate for all completed files among all servicers—of 6.5 percent in December 2012, OCC estimated the potential remediation payout to borrowers from the reviews would be $1.2 billion, according to OCC documents. In this analysis, regulators used amounts listed in the foreclosure review remediation framework and added an additional $1,000 per borrower for borrowers who submitted a request-for-review and were in the process of foreclosure. For borrowers who submitted a request-for-review and had a completed foreclosure, OCC added an additional $2,000 per borrower. In addition, OCC staff told us they estimated the distribution of borrowers among the payment categories by extrapolating the results of one servicer's initial categorization to all servicers. Specifically, they used one servicer's preliminary distribution of borrowers to estimate the proportion of borrowers in each category. According to OCC staff and documents, they then applied these proportions to the borrower populations for other servicers and applied the 6.5 percent financial harm error rate to each category. According to OCC staff, they used the distribution of one servicer's population because it provided retail servicing nationwide. OCC staff stated that they analyzed the distribution of borrowers for two additional servicers and reached similar results. Federal Reserve staff told us they did not rely on OCC's financial harm error rate analysis to inform their decisions during negotiations; rather, as stated previously, they relied on cost projections and remediation reserves to inform their decisions during negotiations.
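The two figures OCC used can be reconstructed roughly as follows. The first is direct addition of the numbers reported above; the second shows only the structure of the payout estimate, since the per-category remediation amounts and the extrapolated borrower distribution are not reported here (placeholder values are used), and the $1,000 and $2,000 request-for-review add-ons are omitted for brevity.

```python
# Rough reconstruction of the two figures OCC used to inform negotiations.

# 1. Estimated cost to complete the reviews: consultants' projected
#    remaining fees plus servicers' remediation reserves (both given above).
remaining_fees = 2.0e9
remediation_reserves = 859e6
print(f"cost to complete: ${remaining_fees + remediation_reserves:,.0f}")
# -> ~$2.9 billion

# 2. Potential remediation payout: apply the 6.5 percent aggregate
#    financial harm error rate to each category's extrapolated borrower
#    count and remediation amount. Counts and amounts here are placeholders.
error_rate = 0.065
categories = {            # category: (extrapolated borrowers, remediation each)
    "severe":   (50_000, 125_000),
    "moderate": (400_000, 5_000),
    "minor":    (3_950_000, 500),
}
payout = sum(n * error_rate * amount for n, amount in categories.values())
print(f"potential payout: ${payout:,.0f}")
```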
The data that were available to regulators to inform negotiations for the cash payment amount were limited. Because the reviews were incomplete in November 2012 when negotiations began, data were limited due to uncertainty about the (1) costs associated with completing the reviews and (2) error rate for the entire population of 4.4 million borrowers eligible for review. First, given the incomplete state of the reviews in November 2012 when negotiations began, regulators had limited information about costs associated with completing the reviews. For example, cost projections available to regulators prior to the negotiations did not account for additional requests-for-review submitted in December 2012. The period for eligible borrowers to submit requests-for-review did not expire until December 31, 2012—after negotiations between regulators and servicers began. Between November 29, 2012, and December 27, 2012, the number of requests-for-review increased by more than 135,000 requests (44 percent). In addition, for most consultants, the cost projections did not account for the planned second phase of reviews, known as deeper dives, in which consultants would have conducted additional reviews based on errors identified in the first phase of reviews. Among the servicers that participated in the payment agreements, all consultants we spoke with anticipated that they would conduct deeper dives. In its decision memorandum for the amended consent orders, OCC estimated an additional 1 to 2 years to complete the reviews. OCC staff stated that, based on the scope and complexity of the remaining reviews, they believed the reviews would have taken longer than consultants projected in November 2012. Second, the incomplete nature of the reviews in December 2012 limited the extent to which regulators could estimate the financial harm error rate and potential remediation. The remediation reserves established by some servicers were based on reviews that had been conducted by consultants thus far. Similarly, the extent to which OCC could use the preliminary error rate of 6.5 percent for the completed reviews to reliably estimate the prevalence of harm in the population and potential remediation was limited. According to data provided to regulators, third-party consultants of servicers that had agreed to the payment agreement in January 2013 had completed final reviews for approximately 14 percent of the files slated for review, and none of the consultants had completed their sampled file reviews, making it difficult for OCC to reliably estimate the prevalence of harm or potential remediation payout for the entire 4.4 million borrowers eligible for the reviews. In addition, reports provided to regulators by consultants of the servicers who agreed to the payment agreement in January 2013 showed variation in progress and financial harm error rates across servicers (see table 3). For example, servicer "K" reported over 90 percent of the sampled file reviews complete for foreclosures in progress and foreclosures complete, with error rates of about 26.7 percent and 15.6 percent, respectively. In contrast, servicer "A" reported it had not completed any final reviews. Further, the segments and types of reviews that were completed varied among consultants. For example, one consultant told us they prioritized sampled files for review over requested file reviews, while another consultant told us they focused on completing requested reviews.
Another consultant stated they prioritized requested reviews and pending foreclosures. The final negotiated cash payment amount of $3.9 billion exceeded the two separate estimates of $2.9 billion and $1.2 billion that OCC generated to inform negotiations. However, OCC performed only limited analyses. For example, OCC did not vary key assumptions about costs and error rates used in its estimates, even though doing so would have been appropriate given the limitations of the available data. The Federal Reserve did not conduct any additional analyses to inform negotiations, but relied, in part, on data and analysis provided by OCC pertaining to projected costs and remediation reserves to inform its decisions regarding the payment agreement. As part of our review, we conducted a sensitivity analysis to test changes to major assumptions associated with the data regulators used to inform negotiations. Specifically, we tested assumptions related to projected costs, error rate, and borrower categorization. Further, to assess the reasonableness of the final negotiated amount, we used the results of our sensitivity analysis to compare the final negotiated cash payment amount to the amounts calculated when we varied key assumptions. We found that the final negotiated amount of $3.9 billion was generally more than amounts suggested under various scenarios we analyzed. (See app. I for more detail on this analysis.) Projected costs. In its analysis using consultants' reported projected costs, OCC estimated that the cost to complete the reviews would have been $2.9 billion. However, as we noted earlier, cost projections were limited and did not take into account the additional requests-for-review submitted by borrowers in December 2012 or the time associated with anticipated deeper dives. We calculated monthly costs using consultants' reports that were available from September 2012 through December 2012 and estimated the projected total cost to complete reviews under several alternative scenarios. Our analysis showed that the total costs could have been either higher or lower than the estimates OCC used in its analysis, depending on how long the reviews would have taken if they had continued. For example, we estimated that if the reviews had taken an additional 13 months to complete (the longest projected time reported by consultants in November 2012), the cost would have been nearly $2.5 billion—about $460 million (23 percent) more than the regulators' estimate of $2 billion. Conversely, if the reviews had taken less time to complete than the consultants projected, regulators' analyses may have overestimated costs. We then added OCC's remediation reserve estimate of $859 million to our cost estimates. Including the remediation reserves, our estimate for projected costs based on 13 additional months of review was $3.3 billion (see fig. 2). Both our estimated amount at 13 months and OCC's estimate of $2.9 billion are less than the actual final negotiated amount of $3.9 billion. Because OCC stated the reviews could take up to an additional 2 years, we included an additional 24 months in our analyses, which resulted in an estimate of $4.6 billion. OCC staff stated that, based on the experience of the servicer that continued with the reviews and had a relatively small number of borrowers eligible for review, an additional 2 years or more to complete the reviews was a likely scenario for other servicers had they not participated in the amended consent orders. Financial harm error rate.
As an alternative measure, OCC estimated remediation payouts based on a preliminary financial harm error rate of 6.5 percent for file reviews completed as of December 2012 across all servicers. On the basis of that analysis, OCC estimated that remediation payouts from the file reviews could be $1.2 billion. However, as discussed above, the progress and findings of errors and financial harm among servicers varied significantly. We analyzed the projected remediation payments using the lowest, median, and highest preliminary error rates for the 13 servicers that participated in the payment agreement in January 2013. Our analysis generated a range of estimated remediation payouts between 71 percent below and almost 206 percent above the amount generated by OCC’s analysis using the average error rate of 6.5 percent (see fig. 3). However, the final, negotiated cash payment of $3.9 billion was higher than the payment of $3.7 billion that we calculated at the highest reported servicer error rate. Borrower categorization. As stated previously, OCC estimated the distribution of borrowers among the payment categories in its error rate analysis by extrapolating the results of one servicer’s initial borrower categorization to all servicers. OCC and the Federal Reserve told us that each servicer’s borrower population was unique. As such, different servicers could have different borrower distributions among the payment categories. We analyzed the distribution of borrowers for the other five servicers involved in initial amended consent order negotiations based on preliminary data servicers provided to regulators. Our analysis showed that the final, negotiated cash payment of $3.9 billion was higher than the estimates that would have resulted from using any of the other five servicers’ borrower distributions (see fig. 4). Prior to agreeing on a final cash payment amount, both the Federal Reserve and OCC conducted additional analyses to corroborate that the negotiated cash payment amount was acceptable. For example, the Federal Reserve estimated payment amounts to borrowers by category under the tentative agreement to confirm that the negotiated amount would not result in trivial payments to borrowers. This analysis showed that a $3.8 billion total cash payment would provide payments to borrowers in each category ranging from several hundred dollars up to $125,000. Therefore, after considering these cost estimates as well as the timelines for project completion, the Federal Reserve determined that the negotiated amount was acceptable because it exceeded the combined expected fees and remediation reserve estimates of completing the reviews and would allow for nontrivial payment amounts to borrowers in each category. OCC staff stated they conducted similar, informal analyses of the tentative settlement agreement. Specifically, OCC staff stated they considered the error rate for proposed cash payment amounts during negotiation. For example, staff estimated that the actual error rate from completed reviews would have had to exceed nearly 26 percent before remediation payments under the reviews would exceed the negotiated cash payment amount. Therefore, according to this analysis, OCC determined that the negotiated amount was acceptable. Staff also stated they believed the negotiated amount would be more than sufficient to cover the total amount servicers would have paid to harmed borrowers under the foreclosure review. 
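A sensitivity analysis of the kind described above can be sketched in a few lines: vary the assumed review duration and the financial harm error rate and observe how the estimates move. The monthly cost figure below is an assumption chosen so that the 13-month scenario lands near the $3.3 billion figure reported above; it is not taken from consultants' actual invoices.

```python
# Sketch of the sensitivity analysis: vary the assumed review duration and
# the financial harm error rate. The monthly cost is hypothetical, chosen
# so the 13-month scenario lands near the $3.3 billion reported above.

monthly_cost = 0.19e9          # assumed aggregate consultant cost per month
reserves = 859e6               # remediation reserves (given above)
occ_payout_at_6_5 = 1.2e9      # OCC's payout estimate at the 6.5 percent rate

for months in (4, 13):         # shortest and longest consultant projections
    print(f"{months:2d} months: ${months * monthly_cost + reserves:,.0f}")

for rate in (0.02, 0.065, 0.20):   # illustrative low / aggregate / high rates
    print(f"error rate {rate:.1%}: ${occ_payout_at_6_5 * rate / 0.065:,.0f}")

# Breakeven error rate at which review-based remediation would match the
# negotiated $3.9 billion, scaling linearly from OCC's estimate. This
# simple scaling gives about 21 percent; OCC's own analysis, on its
# internal basis, put the figure near 26 percent.
print(f"breakeven rate: {0.065 * 3.9e9 / occ_payout_at_6_5:.1%}")
```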
Regulators stated that both the limited information available during the negotiations and the process for determining the amounts paid by servicers under the amended consent orders were atypical. According to Federal Reserve staff, in a typical process, they would conduct investigations to determine actual harm and perform analyses to determine compensation amounts. For example, for a recent enforcement order against a subprime mortgage lender, which involved a much smaller population of potentially harmed borrowers than the foreclosure review, the Federal Reserve required the servicer to analyze individual files to determine the specific amount of harm. OCC staff stated that because the negotiated payment agreement involved the discontinuation of the reviews required by the original consent orders, they did not have data that would otherwise typically have been available. Both OCC and Federal Reserve staff told us there are no prior enforcement actions that are comparable to the payment agreement under the amended consent orders. OCC staff stated that the amended consent orders are atypical in terms of the number of borrowers eligible for reviews (over 4 million), the number of projected file reviews (over 739,000), and the extensive nature of each review. In addition, regulators stated that, given the limited progress of the file reviews, they did not believe extensive analysis was possible. While we recognize that regulators had limited data available, more analytical methods were nonetheless available to them. Generally, regulators set three goals for the process of categorizing and distributing cash payments to borrowers: 1. provide compensation to a large number of borrowers before 2014, 2. provide cash payments to borrowers of between several hundred dollars and $125,000, and 3. reduce the possibility of inconsistent treatment of borrowers among servicers, when compared with the file review results. Regulators took steps to meet their goal for the timely distribution of cash payments to a large number of borrowers. As of December 2013, checks had been distributed to approximately 4 million borrowers covered by the 13 servicers that were part of the January 2013 amended consent order announcements. As shown in figure 5, California and Florida were the states with the largest number of checks issued as well as the largest total amount paid to borrowers. Specifically, borrowers in California and Florida received about 32 percent of the total issued checks (1.3 million checks collectively worth approximately $1.2 billion). In addition, borrowers in seven states (Arizona, Georgia, Illinois, Michigan, Nevada, Ohio, and Texas) received checks worth a total of between $100 million and $200 million per state. Although the checks were sent to the mailing address of the borrower rather than the address of the affected property, according to our analysis of Mortgage Bankers Association data, these states correspond to some of the states with the highest foreclosure inventories in 2009 and 2010. In comparison, borrowers in five states and the District of Columbia received checks worth a total of less than $5 million per state (Alaska, District of Columbia, North Dakota, South Dakota, Vermont, and Wyoming).
To facilitate meeting the goal of a timely borrower categorization process, regulators defined specific loan and borrower characteristics—such as extent of delinquency, forbearance or repayment plan start date, foreclosure sale date, or bankruptcy filing date—for each cash payment category in advance. They expected servicers to use these characteristics to categorize borrowers based on the data in servicers' computer systems—review of files by hand to make a judgment about a borrower's category was generally not permitted. Regulators also expected servicers to conduct an internal review of their categorization results—for example, several servicers engaged their internal audit departments, which are separate from the servicers' mortgage servicing operations, to conduct a preliminary validation of the results to identify problems or weaknesses with categorization activities. According to several examination staff we spoke with, they met regularly with the staff responsible for internal reviews to discuss their approach and review their results. This step contributed to a more timely verification process by the examination teams, as they were already familiar with the servicer's internal review procedures and results. Finally, regulators asked servicers to select one third-party payment administrator to facilitate issuance of checks. According to OCC staff, regulators worked closely with this payment administrator concurrently with the categorization process to define the work processes for check distribution to help facilitate a timely distribution of checks to borrowers once the categorization process was complete. The cash payment categorization process was largely completed by April 2013 for the 13 servicers, and the payment administrator began issuing checks to each of the approximately 4.2 million eligible borrowers serviced by the 13 servicers that were part of the January 2013 amended consent order announcements. As figure 6 shows, the payment administrator issued approximately 89 percent of checks to borrowers in April 2013, with the majority of the remaining checks issued by July 2013. As of early January 2014, approximately 193 payments remained to be issued. The payment administrator had not issued these checks because of borrower-specific challenges, including problems with the borrower's taxpayer identification number or the need to issue multiple checks for the same loan. The payment administrator also issued approximately 96,000 checks for amounts that were less than the borrower should have received. Supplementary checks worth about $45 million were issued to the affected borrowers in May 2013. As of the beginning of January 2014, approximately 81 percent of the issued checks had been cashed. According to OCC staff, to help promote check cashing, regulators instructed the payment administrator to conduct additional research on a borrower's address and re-issue checks to borrowers whose initial checks had expired without being cashed, to try to increase the check-cashing rate. Under the cash payment process, borrowers generally received cash payments of between $300 and $125,000, in line with regulators' goal of providing those amounts to borrowers. In general, the amounts paid to borrowers in the same category varied depending on whether the borrower had submitted a request-for-review—those borrowers received a higher payment amount than other borrowers—and whether the foreclosure was in process, had been rescinded, or was complete as of December 31, 2011.
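To make the characteristic-driven categorization described above concrete, the following hypothetical sketch shows how a servicer's system might slot a borrower into a payment category from data fields alone, with no hand review. The category names, their ordering, and the rules are simplified illustrations for this discussion, not the regulators' actual criteria.

```python
# Hypothetical, simplified categorization from system data fields only.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class BorrowerRecord:
    scra_eligible: bool
    in_default_at_foreclosure: bool
    bankruptcy_filing_date: Optional[date]
    modification_approved: bool
    foreclosure_sale_date: Optional[date]

def categorize(rec: BorrowerRecord) -> str:
    # Categories are checked from highest payment to lowest (illustrative order).
    if rec.scra_eligible:
        return "SCRA-eligible"
    if not rec.in_default_at_foreclosure:
        return "foreclosed while not in default"
    if (rec.bankruptcy_filing_date and rec.foreclosure_sale_date
            and rec.foreclosure_sale_date > rec.bankruptcy_filing_date):
        return "foreclosure during bankruptcy"
    if rec.modification_approved:
        return "approved modification request"
    return "all other eligible borrowers"

print(categorize(BorrowerRecord(False, True, None, True, date(2010, 6, 1))))
# -> "approved modification request"
```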
In addition, borrowers serviced by the two servicers that signed the original consent orders after April 2011, and who therefore had not participated in the request-for-review process, were generally paid at the same level or at a higher level—24 percent to 30 percent more—than a borrower who did not submit a request-for-review. As seen in figure 7, the largest number of borrowers (1.2 million borrowers, or 29 percent of the eligible population) were placed in the category for approved modification requests, which provided payments of between $300 and $500, depending on whether the borrower was considered to have submitted a request-for-review. About 1,200 borrowers were paid at the maximum rate of $125,000, including approximately 1,100 SCRA-eligible borrowers. Approximately 11 percent (439,000 borrowers) were paid an additional amount designated for borrowers who had submitted requests-for-review. Although regulators met their cash payment amount goal, they recognized that some borrowers might have received more or less through the foreclosure review process. According to regulators, as part of their process to determine the cash payment amounts to be paid to borrowers in each category, they considered the amount that borrowers would have been paid for errors in that category under the file review process, among other considerations. Under the cash payment framework, borrowers in the highest paid categories—SCRA-eligible borrowers and borrowers foreclosed upon who were not in default—received the same amounts as they would have under the file review process. For the other categories, the final cash payment amounts were generally less than the amounts that would have been paid for an error in that category under the file review process for borrowers who did not submit a request-for-review. According to regulators, they decided to pay higher amounts to borrowers who submitted requests-for-review—generally double the amounts paid to borrowers who did not submit requests-for-review—because they felt that those borrowers had an expectation of receiving a file review and should be compensated for that expectation. According to regulators, in adopting the cash payment process, they recognized that some borrowers would fare better or worse than they might have under the file review process. For example, some borrowers who might not have received remediation under the file review process, either because a file review did not identify harm or the file was not reviewed, would receive a cash payment. However, regulators said the converse was also true; that is, borrowers who through the file review could have been found to have been harmed and therefore eligible for remediation could potentially receive a lower amount through the cash payment process. OCC and Federal Reserve staff also stated that under the amended consent orders, borrowers were not required to waive or release any rights or claims against the servicer to receive a cash payment. According to regulators, in recognition of challenges in achieving consistent results among servicers during the file review process, they took steps to promote a consistent approach to the cash payment categorization process—one of their goals—such that similarly situated borrowers would have similar results. For example, regulators held weekly meetings with OCC and Federal Reserve examination team staff as well as with servicers to discuss the categorization process.
In addition, they provided guidance to examination teams and servicers for the categorization process, including examination teams' oversight activities. According to examination teams, the guidance provided was timely, and given the limited time to complete the categorization process, they generally worked closely with the servicer to ensure any resulting changes were incorporated. OCC headquarters staff also conducted on-site visits to each servicer and examination team to review the categorization process and activities. According to OCC staff, these on-site visits allowed for a comparison of servicers' categorization processes and the oversight processes used by the examination teams, to help ensure these activities were done according to the guidance and, as a result, would be largely consistent. Similarly, the Federal Reserve examination teams and Federal Reserve Board staff met in person to discuss the categorization process and oversight activities as part of their efforts to promote consistent results. Finally, according to a few servicers we spoke with, to promote consistent results, some servicers met early in the process, with regulators' input, to discuss the regulators' categorization guidance, and mentored other servicers as they conducted their initial categorization activities to help ensure there was a shared interpretation of the guidance among servicers. However, there were some differences in the categorization results for borrowers among servicers as a result of flexibilities in the categorization process, as well as limitations with some servicers' data systems. For example, servicers were given the option of retaining the third-party consultant hired to work on the foreclosure reviews to complete file reviews for borrowers who were categorized into the first two categories—SCRA-eligible borrowers and borrowers not in default at the time of foreclosure—rather than relying on the loan and borrower characteristics regulators specified for those categories. Based on the file review results, servicers were required to provide remediation to borrowers who the file reviews determined had been harmed and to re-categorize the remaining borrowers into the next highest payment category for which they qualified according to other loan and borrower characteristics. Based on our review of regulators' documents, 12 of the 13 servicers used this option and directed consultants to complete file reviews for borrowers who were placed in some of these categories. According to OCC staff and one servicer we spoke with, some consultants had already completed or were near completion of the file reviews for SCRA-eligible borrowers. Similarly, missing or unreliable data in servicers' systems resulted in some servicers being unable to categorize borrowers according to the cash payment framework criteria and instead placing borrowers in the highest category for which they had data. According to our review of examination teams' conclusion memorandums and interviews with examination teams, at least 5 of the 13 servicers were unable to place some borrowers into the most appropriate category of the framework because servicers' systems did not have the data necessary to categorize borrowers according to the loan and borrower characteristics provided by regulators. For the majority of these servicers, the percentage of affected borrowers was relatively small.
For example, in one case data limitations affected roughly 4 percent of borrowers at that servicer, whereas in another case, they affected approximately 8 percent of that servicer's borrowers. However, for one servicer, data limitations were extensive enough that regulators required the servicer to stop the categorization process for approximately 74 percent of eligible borrowers and categorize those borrowers into higher categories than their characteristics might have indicated had the data been available in the servicer's system. According to regulators, they mitigated the impact of these limitations on individual borrowers by instructing servicers to place borrowers in the highest possible category from which they could not be excluded due to missing or unreliable data. Figure 8 illustrates an example of how the same borrower might have had different results depending on the servicer. Placing borrowers in higher categories when data were unavailable potentially had a distributional impact on other borrowers. Where there is a set sum of money, as in this case, placing more borrowers than anticipated in higher categories could result in either (1) lower payment amounts per borrower in those categories or (2) lower-than-anticipated amounts for borrowers in lower categories. According to Federal Reserve staff, the relatively small number of borrowers affected by these changes meant that the distributional impact was minimal. Regulators did not establish specific objectives for the $6 billion obligation they negotiated with servicers to provide foreclosure prevention actions. However, they communicated the expectation that the actions be meaningful, and they set forth broad principles for servicers' entire portfolio of foreclosure prevention actions. To negotiate the amount and determine the design of the foreclosure prevention component of the amended orders, regulators did not follow their typical practices to inform supervisory actions, which include analysis of information. For example, analysis of the volume of servicers' recent foreclosure prevention actions might have helped regulators assess the sufficiency and feasibility of the required obligation, among other things. According to most servicers we spoke with, they would be able to meet the required volume of activities using their existing foreclosure prevention activities. Regulators did collect data to inform oversight of servicers' financial obligations, and OCC and the Federal Reserve are requiring examination teams to oversee servicers' policies and monitoring controls related to the principles. However, according to Federal Reserve staff, most of the Federal Reserve examination teams have not conducted their oversight activities related to the foreclosure prevention principles, and regulators' guidance for oversight of the principles does not identify actions examination teams should take to evaluate or test implementation of these principles. According to regulators' supervisory guidance as well as federal internal control standards, establishing specific monitoring activities, including testing, is important to effective supervision. In the absence of such monitoring activities, regulators may not know if a key element of the amended consent orders is being realized. The $6 billion foreclosure prevention action obligation amount was negotiated by regulators and servicers and was not framed by specific objectives or informed by data or analysis.
According to OCC’s and the Federal Reserve’s supervisory manuals, enforcement actions, including consent orders, are used to address specific problems, concerns, violations of laws or agreements, and unsafe or unsound practices, among other things, that are identified through supervisory examinations. Further, federal internal control standards highlight the importance of establishing clear objectives for activities undertaken by agencies as a means of ensuring that agency outcomes are achieved. The foreclosure prevention component of the amended consent orders, however, was not intended to address specific problems, violations, or unsafe or unsound practices. According to the Federal Reserve, the $6 billion required foreclosure prevention actions represent additional remediation, above and beyond the $3.9 billion cash payment required of servicers in lieu of finishing the reviews. OCC staff stated that the foreclosure prevention component of the amended consent orders mirrored the requirement that servicers provide loss mitigation options to harmed borrowers under the file review process. Although regulators negotiated the foreclosure prevention action obligations in the amendment that terminated the foreclosure review for most servicers, the foreclosure prevention obligations were not related to preliminary findings from the reviews. In addition, the actions were not specifically intended to assist only borrowers who were eligible for the reviews; servicers can count foreclosure prevention actions performed to assist any borrower in their portfolio toward their obligation under the amended consent order provided the action meets the criteria in the orders. The amended consent orders, however, directed servicers to attempt to prioritize these borrowers for assistance to the extent practicable. Regulators stated that they included the foreclosure prevention component in the amended consent orders because the National Mortgage Settlement had a similar component.component in the amended consent orders was intended to convey to servicers the importance of foreclosure prevention activities. Thomas J. Curry, Comptroller of the Currency, remarks before Women in Housing and Finance (Washington, D.C.: Feb. 13, 2013). keep borrowers in their homes and ensuring that foreclosure prevention actions are nondiscriminatory such that actions to not disfavor a specific geography, low- or middle-income borrowers, or a protected class. According to regulators, these principles were to be applied to servicers’ broad portfolio of foreclosure prevention activities (not just those undertaken as part of the $6 billion obligation under the amended consent orders). Although regulators stated they considered other similar settlements, they did not collect or analyze relevant data to inform the amount or structure of the foreclosure prevention component of the amended consent orders. According to regulators’ supervisory manuals, regulators typically analyze information to inform enforcement actions. Despite the absence of identified problems and specific objectives to guide the analysis, a variety of data were available to regulators that could potentially have informed negotiations. In addition, while it is typical for regulators and their supervised institutions to negotiate consent orders, regulators stated that the negotiations for the amended consent orders did not follow the typical enforcement action process. 
According to OCC staff, the decision to significantly amend the consent orders by replacing the foreclosure review with a cash payment agreement and a foreclosure prevention component was unprecedented. We recognize the atypical nature of the negotiations and regulators' desire to distribute timely payments to eligible borrowers. However, we believe some data collection and analysis would have been feasible and useful to inform the amount and structure of the foreclosure prevention component. Regulators, in particular OCC, had access to loan-level data about some servicers' foreclosure prevention actions—the data they collect from servicers for the quarterly OCC Mortgage Metrics reports and the data servicers report to Treasury's Making Home Affordable program, which includes the Home Affordable Modification Program (HAMP)—that they could have used to inform negotiations. Other useful data were available from servicers. The following are examples of types of analyses that could be useful to inform such negotiations. Analysis of the value of various types of foreclosure actions undertaken by servicers. Analysis of the value of various foreclosure actions undertaken by servicers may have provided information for regulators to consider in assessing the sufficiency of the negotiated amount to provide meaningful relief to borrowers. For example, data on servicers' recent volume of foreclosure prevention actions, measured by the unpaid principal balance of loans at the time these actions were taken, as well as an average or range of unpaid principal balances for various types of actions undertaken by servicers, may have provided a basis for gauging the number of borrowers who might be helped with various amounts of foreclosure prevention obligations under the amended consent orders. Our analysis of HAMP data shows that the average unpaid principal balance for loans that received a modification through HAMP was approximately $235,000. As such, in a hypothetical scenario in which a servicer was obligated to provide $100 million in foreclosure prevention actions and reached the obligation by providing only loan modifications, it could be estimated that 425 borrowers would be assisted by the obligation, as measured by the unpaid principal balance of the loans. Analysis of the volume of servicers' typical foreclosure prevention actions. Analysis of the volume of servicers' typical foreclosure prevention actions might have provided insight into the potential impact, if any, of the foreclosure prevention actions and informed the feasibility of the negotiated amounts—that is, the extent to which servicers could reach the required amounts within the 2-year period using their existing programs. Four of the seven servicers we interviewed that participated in amended consent orders indicated that they anticipated they would be able to meet the required volume of activity using their existing foreclosure prevention activities. Of these four servicers, two indicated they could achieve the required volume of foreclosure prevention actions within the first year, and one servicer indicated it would be easy to meet the requirement given that it regularly provides much larger amounts of foreclosure prevention assistance than its negotiated obligation. One servicer that we did not interview reported large volumes of activities using its existing programs and policies during the first 6 months of the eligible period.
Specifically, between January and June 2013, the servicer reported short sale activities that were approximately 87 percent of the required obligation. During this same period, the servicer reported it had also undertaken loan modification activities that were valued at about 7 times its total required foreclosure prevention obligation. In contrast, officials from one servicer we interviewed stated they opted to make payments to housing counseling agencies to fulfill the amended consent order requirement because they determined they would not be able to meet the obligation with their existing portfolio, since the loans in the portfolio were not highly delinquent. Analysis of alternative crediting approaches. Analysis of the results of alternative crediting approaches may have provided insight into the sufficiency of the negotiated amount—that is, the extent to which the required obligations would reach an appropriate number of borrowers as determined by regulators. The amended consent orders provide credit based on the unpaid principal balance of the loan. On the basis of this methodology, a loan with an unpaid principal balance of approximately $235,000, for example, would result in a credit of approximately $235,000 toward the servicer's obligation, regardless of the action taken. However, alternative crediting structures exist. For example, the National Mortgage Settlement, which includes a similar foreclosure prevention component, uses an alternative approach that generally provides credit based on the amount of the principal forgiven or assistance provided. Using this methodology, for a loan modification with the same unpaid principal balance of approximately $235,000, where the principal forgiven was 29 percent of that balance (the average amount of principal forgiveness for first-lien HAMP loan modifications), a servicer would receive a credit towards its obligation of $68,855. Thus, in a hypothetical scenario in which a servicer was required to provide $100 million in foreclosure prevention actions and met the obligation by using only principal forgiveness, our analysis estimated that 425 borrowers would receive assistance under the amended consent orders compared to about 1,452 borrowers under the National Mortgage Settlement. Further, analysis of the mix of servicers' typical activities might have provided baseline information for regulators to consider in assessing whether creating incentives for certain actions by crediting them differently might be warranted to help achieve the stated expectation of keeping borrowers in their homes. Under the amended consent orders, the methodology for determining credit for foreclosure prevention actions is the same for all actions, regardless of the type of action or characteristics of the loan. However, some actions are designed to keep borrowers in their homes (loan modifications, for example), while others are designed to help avoid foreclosure but result in borrowers losing their homes (e.g., short sales or deeds-in-lieu). In contrast to the amended consent orders, the National Mortgage Settlement provides varying amounts of credit depending on the type of action and certain loan characteristics. Under the National Mortgage Settlement approach, a loan modification, for example, would be credited at a higher ratio than a short sale. Regulators stated they considered the National Mortgage Settlement structure in defining the types of creditable activities under the amended consent orders and the methodology for determining how the activities would be credited towards each servicer's obligation.
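The arithmetic behind this comparison is straightforward; the sketch below reproduces it for the hypothetical $100 million obligation, using the figures cited above: an average $235,000 unpaid principal balance and a $68,855 credit under a forgiveness-based approach.

```python
# Sketch of the two crediting approaches for a hypothetical $100 million obligation.
OBLIGATION = 100_000_000     # hypothetical servicer obligation, in dollars
AVG_UPB = 235_000            # average HAMP unpaid principal balance (reported)
FORGIVENESS_CREDIT = 68_855  # credit if 29 percent of that balance is forgiven (reported)

# Amended consent orders: credit equals the loan's full unpaid principal balance.
borrowers_upb = OBLIGATION // AVG_UPB                     # -> 425 borrowers

# National Mortgage Settlement style: credit equals the principal actually forgiven.
borrowers_forgiveness = OBLIGATION // FORGIVENESS_CREDIT  # -> 1,452 borrowers

print(borrowers_upb, borrowers_forgiveness)  # 425 1452
```

Because the forgiveness-based credit per loan is smaller, a fixed obligation reaches roughly three times as many borrowers under that structure, which is the point of the comparison above.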
Foreclosure prevention actions for which servicers can receive credit under the amended consent orders are generally the same as the actions for which servicers can receive credit under the National Mortgage Settlement. However, OCC staff said they adopted a different crediting approach for the amended consent orders because it is more transparent than the approach used for the National Mortgage Settlement. Analysis of eligible borrowers still in their homes and in need of assistance. Analysis of the number of borrowers eligible for the foreclosure review who were still in their homes and in need of assistance might have informed the relevance of the method for allocating the negotiated amount. Regulators generally divided the $6 billion obligation among servicers based on their share of the 4.4 million borrowers eligible for the foreclosure review, with servicers responsible for amounts that ranged from about $10 million to $1.8 billion. In addition, in the amended consent orders, regulators directed servicers to prioritize these borrowers, even though the foreclosure prevention actions were not restricted to borrowers eligible for review. However, the number of borrowers who were eligible for the foreclosure review and might benefit from the foreclosure prevention action obligations is potentially limited. Specifically, according to information on regulators' websites covering 13 of the 15 servicers that participated in amended consent orders, 41 percent of the borrowers who were eligible for the foreclosure review had completed foreclosures as of December 31, 2011. Further, according to two servicers we interviewed, the number of borrowers who were eligible for the reviews and still able to receive foreclosure prevention actions was relatively small. For example, one servicer noted that approximately 50 percent of these borrowers were no longer being serviced by it. The servicer added that of the remaining population, about 50 percent had already received at least one foreclosure prevention action. As such, many of the borrowers who were eligible for the foreclosure review because of a foreclosure action in 2009 and 2010 might not have been able to benefit from the foreclosure prevention actions required under the amended consent orders. To oversee the foreclosure prevention component of the amended consent orders, regulators are considering both servicers' actions to meet the monetary obligations and the foreclosure prevention principles included in the amended orders. Regulators collected data from servicers and provided guidance to examination teams to facilitate oversight activities. OCC and the Federal Reserve established reporting requirements to collect information from servicers on the foreclosure prevention actions they were submitting for crediting to meet the monetary obligations specified in the amended consent orders. To meet those obligations, servicers could either provide foreclosure prevention actions to borrowers or make cash payments—either to organizations that provide borrower counseling or education or into the cash payment funds used to pay borrowers based on categorization results. Eight of the servicers opted to meet their obligation by providing foreclosure prevention actions, and the remaining seven made cash payments.
To facilitate verification of servicers' crediting requests for foreclosure prevention actions, regulators required servicers to submit periodic reports, which all of the servicers have done. Servicers were required to submit loan-level information—such as the loan number, foreclosure status, and unpaid principal balance before and after the action—on each loan the servicers submit for crediting towards their obligation. In addition, servicers were required to state whether the borrower was part of the eligible population for the foreclosure review, to respond to the expectation in the amended consent orders that, to the extent practicable, servicers prioritize eligible borrowers from the foreclosure review. According to regulators, they are in the process of hiring a third party to evaluate the servicers' reported data, to validate that the reported actions meet the requirements of the amended consent orders, and to facilitate regulators' crediting approval decisions. Servicers have begun reporting on their foreclosure prevention actions, and according to OCC staff, early submissions from servicers meeting their obligation through provision of foreclosure prevention actions to borrowers suggest they will meet their foreclosure prevention requirements quickly. The actions submitted for crediting varied, with some servicers primarily submitting short sale activities for crediting and others reporting loans that received loan modification actions. The reporting requirements also include information related to the principles established in the amended consent orders, although this information is not representative of servicers' complete portfolio of foreclosure prevention actions. For example, servicers are required to report information on the types of assistance provided, which indicates the extent to which the actions servicers are reporting for crediting are helping borrowers keep their homes—such as by providing a loan modification as compared to a short sale, in which a borrower would still lose his or her home. According to servicers we spoke with, the information they are reporting to regulators on their foreclosure prevention activities for crediting is not representative of their full portfolio of foreclosure prevention activities and would not provide information on how well their overall program is meeting the principles established for the assistance. For example, some servicers are submitting loans for crediting review that focus primarily on certain segments of their servicing population, such as only proprietary (in-house) loans. Another servicer had submitted all of its loss mitigation activities that might qualify for crediting according to the definitions in the amended consent orders, but even these did not represent all of its activities. Overall, the reporting requirements associated with the foreclosure prevention actions in the amended consent orders provide information to assess crediting but not to evaluate servicers' application of the foreclosure prevention principles to their broader portfolio of loans. Regulators also issued guidance to examination teams for oversight of the foreclosure prevention principles. The guidance identifies procedures examination teams were expected to take to oversee a servicer's application of the foreclosure prevention principles to its broad portfolio of foreclosure prevention actions. Those procedures included steps related to each of the key elements in the principles.
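As an illustration of the loan-level reporting described above, the sketch below models a single crediting submission record. The field names are illustrative assumptions, not the actual reporting templates under the amended consent orders, and the report does not specify whether the before- or after-action balance determines the credit (the before-action balance is assumed here).

```python
# Hypothetical loan-level crediting record; field names are illustrative only.
from dataclasses import dataclass

@dataclass
class CreditingRecord:
    loan_number: str
    action_type: str             # e.g., "modification", "short_sale", "deed_in_lieu"
    foreclosure_status: str
    upb_before_action: int       # unpaid principal balance before the action
    upb_after_action: int        # unpaid principal balance after the action
    in_review_population: bool   # was the borrower eligible for the foreclosure review?

def credit_amount(rec: CreditingRecord) -> int:
    # Under the amended consent orders, credit is based on the loan's unpaid
    # principal balance regardless of action type; the before-action balance
    # is assumed here, as the report does not specify which balance applies.
    return rec.upb_before_action

rec = CreditingRecord("0001", "modification", "in_process", 235_000, 200_000, True)
print(credit_amount(rec))  # -> 235000
```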
However, the guidance does not identify actions examination teams should take to evaluate or test servicers' application or implementation of the steps. For example, the guidance requires examination teams to describe the policies and monitoring controls servicers have in place to help ensure that their foreclosure prevention activities are nondiscriminatory, but it does not set an expectation that teams will evaluate how well servicers are applying those policies and controls to their mortgage servicing practices. Similarly, the guidance requires examination teams to identify the performance measures servicers use to assess the principle related to the sustainability of foreclosure prevention actions, but the guidance does not require examination teams to evaluate how well a servicer's programs are providing sustainable actions. Finally, to assess whether servicers' foreclosure prevention actions are meaningful—one of the principles—examination teams are to collect data on the servicers' foreclosure prevention actions, including the extent to which those actions resulted in higher or lower monthly payments, but the guidance does not require examination teams to evaluate the data to understand what it indicates about servicers' actions. In contrast, other sections of the same guidance provided to examination teams for oversight of the other articles of the consent orders specify regulators' expectations that examination teams will evaluate and test certain policies, monitoring controls, and data. For example, OCC's guidance to oversee compliance—which is intended to assess whether servicers' mortgage practices comply with all applicable legal requirements and supervisory guidance—identifies specific areas where examination teams should test policies and controls as well as performance measures. For instance, examination teams are expected to evaluate the servicer's performance measures to determine the servicer's ability to complete timely foreclosure processing, to identify and evaluate controls for preventing improper charging of late fees, and to evaluate the servicer's staffing model for certain criteria. Similarly, the Federal Reserve's guidance specifies testing procedures for most elements of the original consent orders, such as third-party management, the servicer's compliance program, and risk management. For instance, to ensure that documents filed in foreclosure-related proceedings are appropriately executed and notarized—one of the requirements in the original consent orders—the guidance states that examination teams should review servicers' policies, procedures, and controls to ensure that the documents are handled appropriately and then test a sample of documents to verify that notarization occurred according to the applicable requirements. According to regulators' supervisory manuals, effective supervision requires defining examination activities, including determining clear objectives and describing the specific procedures to evaluate and test that policies and procedures are implemented. In addition, federal internal control standards require individuals responsible for reviewing management controls—such as servicers' policies and procedures for the foreclosure prevention principles—to assess whether the appropriate policies and procedures are in place, whether those policies and procedures are sufficient to address the issue, and the extent to which the policies and procedures are operating effectively.
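To illustrate the kind of sample-based testing the guidance specifies for other consent order articles, such as verifying notarization of filed documents, here is a minimal sketch of drawing a sample of filings and checking for exceptions. The data fields, population size, error rate, and sample size are illustrative assumptions, not details from the regulators' procedures.

```python
# Illustrative sample-based control test; all figures here are assumptions.
import random

random.seed(0)
filings = [{"doc_id": i, "notarized": random.random() > 0.02}
           for i in range(10_000)]       # hypothetical population of filed documents

sample = random.sample(filings, 100)     # draw a test sample
exceptions = [d for d in sample if not d["notarized"]]

print(f"{len(exceptions)} of {len(sample)} sampled filings lacked proper notarization")
# An examination team would investigate any exceptions to assess whether the
# servicer's controls are operating effectively, rather than only describing
# the policies on paper.
```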
Some examination teams are close to completing the oversight procedures related to the foreclosure prevention principles, but others have not begun, and the extent to which regulators plan to evaluate or test the information collected is unclear. According to OCC staff, examination teams completed their initial oversight of these principles in December 2013, as part of their other consent order validation activities. OCC staff told us they are reviewing the results of each of the examination teams' procedures and may identify the need for additional activity. OCC staff stated they also plan to conduct an additional review of each servicer's foreclosure prevention actions, which will include consideration of the principles in the amended consent orders, but they do not have specific procedures to evaluate or test servicers' implementation of those principles. According to Federal Reserve staff, most Federal Reserve examination teams have not yet conducted their oversight activities related to the foreclosure prevention principles. Federal Reserve staff told us that examination teams generally are conducting these reviews during the second quarter of 2014 and that the Federal Reserve would consider conducting additional follow-up activities related to the principles. According to federal internal control standards, management control activities should provide reasonable assurance that actions are being taken to meet requirements, such as the requirements related to the foreclosure prevention principles (see GAO/AIMD-00-21.3.1). Because Federal Reserve examination teams have not yet completed their oversight activities for the foreclosure prevention principles, the extent to which this oversight will incorporate additional evaluation or testing of servicers' implementation of the principles is unclear. Without such evaluation and testing, regulators may not be able to determine whether servicers' policies and procedures are effective—an assessment OCC examination teams are required to make—or assess how well the principles guide servicer behavior. For example, although servicers may have policies that explicitly forbid disfavoring low- or moderate-income borrowers during foreclosure prevention actions, without reviewing data, such as a sample of transactions from various programs, it is difficult to determine whether the policy is functioning as intended. Without these procedures, regulators may miss opportunities to determine how well servicers' foreclosure prevention actions provide meaningful relief and help borrowers retain their homes. According to regulators we spoke with, the initial review of borrowers' 2009 and 2010 foreclosure-related files and the cash payment categorization process confirmed past servicing weaknesses—such as documentation weaknesses that led to errors in foreclosure processing—that they suspected or discovered through the 2010 coordinated review that was done in advance of the original consent orders. Regulators have taken steps to share these findings across examination teams. Continued supervision of servicers and information sharing about the experiences and challenges encountered help ensure that these weaknesses are being corrected. Recent changes to regulators' requirements for mortgage servicing also help to address some of the issues. Although consultants generally did not complete the review of 2009 and 2010 foreclosure-related files through the file review process, consultants, servicers, and regulators were able to describe some of the servicing weaknesses they identified based on the work that was completed.
According to OCC staff, these preliminary findings from consultants' review of 2009 and 2010 foreclosure-related files were consistent with issues discovered through the earlier coordinated review of foreclosure policies and practices conducted by examination teams in 2010 that led to the consent orders. As we noted previously, the file reviews were retrospective assessments and were designed to identify and remediate the harms suffered by borrowers due to 2009 and 2010 servicing practices. To collect information on what was learned about servicers' practices from these file reviews, regulators asked consultants to complete an exit questionnaire and held exit interviews with each consultant to discuss the file review process and preliminary observations and findings. In addition, while consultants did not prepare final reports with their findings, regulators we spoke with said they had shared some preliminary findings with examination teams through weekly updates as the file reviews progressed. Examples of weaknesses identified during the coordinated review and confirmed during the review of files from the same period included the following: Failure to halt foreclosures during bankruptcy. The report from the regulators' 2010 coordinated review noted that servicers' quality controls were not adequate to ensure that foreclosures were halted during bankruptcy proceedings. These concerns were validated during the subsequent review of 2009 and 2010 foreclosure files, during which consultants found some instances of foreclosures taking place after borrowers had filed for bankruptcy. Failure to halt foreclosures during loss mitigation procedures. The report from the 2010 coordinated review also expressed concern that servicers' quality control processes did not ensure that foreclosures were stopped during loss mitigation procedures, such as loan modifications. During the subsequent file reviews, one consultant found that in some cases, a servicer had foreclosed on borrowers who were in the midst of applying for loan modifications. In addition, the file reviews identified some borrowers who were wrongfully denied loan modifications, did not receive loan modification decisions in a timely manner, or were not solicited for HAMP modifications in accordance with HAMP guidelines. Failure to apply SCRA protections. The coordinated review report also noted that a lack of proper controls could have affected servicers' determinations of the applicability of SCRA protections. Some consultants identified issues such as servicers failing to verify a person's military status prior to starting foreclosure proceedings and failing to consistently perform data checks to determine military status. Failure to maintain sufficient documentation of ownership. Although the 2010 coordinated reviews found that servicers generally had sufficient documentation of authority to foreclose, examiners noted instances where documentation in the foreclosure file may not have been sufficient to prove ownership of the mortgage note. Likewise, during the subsequent consent order file reviews, some consultants found cases of insufficient documentation to demonstrate ownership. Weaknesses related to oversight of external vendors and documentation of borrower fees. The coordinated file review report noted weaknesses in servicers' oversight of third-party vendors, and OCC staff stated that the subsequent file review found errors related to fees charged to borrowers, many of which occurred when servicers relied on external parties.
Staff explained that servicers often did not have controls in place to ensure that services were performed as billed and that the fees charged to customers were reasonable and customary. In addition, the process of categorizing borrowers for cash payments—which relied on servicers' data about those borrowers from 2009 and 2010—found issues that were consistent with weaknesses identified during the 2010 coordinated reviews, particularly in servicers' data systems. For example, one examination team noted that a servicer's data weaknesses related to servicemembers and others became more apparent during the cash payment categorization process. In addition, as noted earlier, at least 5 of the 13 servicers were unable to categorize some borrowers according to the framework criteria because of system limitations. Federal Reserve staff noted that problems with one servicer's data related to loan modifications led the servicer to place everyone in the highest category possible rather than rely on the system. Further, another examination team told us that while reviewing the categorization of borrowers for cash payments, the servicer's internal audit department found a high rate of borrowers incorrectly categorized in the loan modification categories due to weaknesses in the quality of the servicer's data. The examination team explained that after reviewing the servicer's initial categorization, regulators determined that the servicer did not have sufficiently reliable system data to categorize borrowers in the lowest categories, and therefore those borrowers were categorized in a higher category. After terminating the reviews of 2009 and 2010 foreclosure-related files, regulators instructed examination teams to identify deficiencies and monitor servicers' actions to correct them. For example, OCC required examination teams to complete conclusion memorandums on deficiencies consultants identified. The conclusion memorandums were to include information on the deficiencies consultants identified in the servicer's policies, procedures, practices, data, systems, or reporting. The guidance for the memorandums also asks examination teams to discuss steps servicers took to correct these deficiencies. In one conclusion memorandum, the examination team noted that the servicer was in the process of addressing issues, such as technological impediments to efficient and accurate servicing and the accurate identification of borrowers eligible for SCRA protections and borrowers in bankruptcy, but that not all issues had yet been addressed. According to Federal Reserve staff, they are not planning to do a broad analysis of the results from the file reviews, but they have asked the examination teams to consider issues that emerged from them and whether additional corrective action is needed. OCC and Federal Reserve staff also told us that examination teams are continuing their oversight activities to determine whether servicers are addressing all aspects of the consent order, which includes the areas highlighted by the preliminary file reviews. OCC staff said that the examination work is intended to determine what issues have been addressed and what issues continue to exist. Some examination teams told us that they are leveraging the results of the reviews and the cash payment categorization process by following up on some of the issues identified for the servicers they oversee in their future oversight.
For example, one team said that it was following up on findings related to bankruptcy, fees, notices of loan modifications, and income calculations associated with loan modification applications. In particular, they noted that they have done subsequent testing related to borrowers in bankruptcy and will continue to assess the servicer's efforts in this area. Another team stated that in light of challenges with an aspect of the cash payment categorization process, they identified weaknesses with the servicer's staffing, project management, and problem resolution processes. To try to prevent repetition of these mistakes, the examination team required the servicer to identify and implement changes to its mortgage servicing practices. However, some examination teams said that little additional information was learned from the file review or cash payment activities that they could leverage in future oversight. For example, one examination team noted that because few files had gone through complete reviews, they could not determine how widespread the problems found were. They said that because the file reviews were terminated before the reviews were completed, they did not have sufficient information to interpret the initial findings. Another examination team told us that no new information was learned from the file reviews and that all of the issues raised during them were known issues. A third examination team told us that they would incorporate some aspects of the consultant's processes into their review process, but the reviews were not far enough along to draw conclusions about any additional substantive weaknesses with the servicer's practices. In addition, Federal Reserve staff noted that because the file reviews were terminated before many data points were collected, what could be learned from them is limited. Similarly, one examination team noted that while weaknesses were identified with the servicer's operations during both the file review and cash payment processes, they were specific to systems and activities from 2009 and 2010 that were no longer in place or operational. Additionally, OCC staff explained that because the files that were reviewed were from 2009 and 2010, the findings may no longer be applicable, particularly given changes in servicing operations since that time. Because examination teams learned different information from their oversight of the file review and cash payment processes, sharing each other's experiences could be instructive for ongoing oversight of mortgage servicing. As we noted earlier, the completion rates for the file review process varied from no files with a completed review to 57 percent of the planned files reviewed. In addition, the areas that were reviewed varied among servicers. For example, several of the consultants reported completing at least initial reviews of the majority of files in the bankruptcy category. Another consultant stated that the only category of review completed was the SCRA category, and therefore, it only had findings related to the retention of SCRA data. A third consultant had completed its review of a majority of the initial files planned for review and had found several different types of errors, including errors with fees charged, loan modification decisions, and documentation of ownership.
Although, as regulators have noted, each servicer has unique operations and data systems, servicing standards and other requirements defined by regulators are generally broadly applied, and insight from one servicer's approach to meeting these standards—or its problems meeting them—can be instructive for other examination teams responsible for overseeing these same standards. According to our analysis of examination teams' conclusion memorandums, some servicers encountered similar challenges in the cash payment process. In contrast to the file review process, the borrower categorization process was completed for 14 of the servicers, and servicers had to place borrowers into the same categories. Several examination teams and a servicer noted that merging data from multiple servicing systems posed particular challenges for completing the borrower categorization process. Other examination teams we spoke with described challenges servicers encountered with their data systems for recording information on bankruptcy and other foreclosure-related actions. Understanding what caused similar types of challenges and their prevalence among servicers may help regulators identify future areas for oversight activities. According to regulators, they have taken steps to share information among examination teams about issues encountered during the file review and cash payment process, and OCC planned to take additional steps. For example, regulators told us that during the file review and cash payment categorization process, OCC and Federal Reserve examination teams held weekly phone meetings. According to several examination teams we spoke with, during these meetings they would highlight challenges they were encountering, such as issues related to missing data in a servicer's systems. In addition, Federal Reserve staff stated that Federal Reserve examination teams met during the cash payment categorization process to share information on their approach to the activities and discuss approaches different teams were taking to address challenges. To further facilitate information sharing among examination teams, Federal Reserve staff told us that examination teams posted to a shared website their conclusion memorandums for the cash payment activities, which included information on the approach servicers used to categorize borrowers. According to OCC staff, they are also writing a consolidated conclusion memorandum that will summarize examination teams' findings from the foreclosure review process, including information on specific challenges identified at individual institutions that may be instructive for other examination teams. According to regulators, examination teams also have offered to share information with CFPB about issues encountered during the file review process. Banking regulators and CFPB have entered into a Memorandum of Understanding, which states that CFPB and the regulators will endeavor to inform each other of issues that may impact the supervisory interests of the other agencies. According to regulators we spoke with, there has been limited sharing of findings from the foreclosure review process with CFPB. According to OCC staff, in some cases, they have shared information with CFPB about servicers' compliance with the original consent orders and, in other instances, they offered to provide CFPB information on the file review process, but CFPB had not requested follow-up information.
Federal Reserve staff said two of its examination teams have provided information to CFPB on the Federal Reserve's monitoring activities related to the original consent orders, including the file reviews, and the amended consent orders. Recent servicing requirements, some of which apply to a broader group of mortgage servicers than those included in the file review process, may also address some of the weaknesses found during the 2010 coordinated review and confirmed during the review of foreclosure-related files from 2009 and 2010 and the borrower categorization process. Since the 2009 and 2010 period covered by the file reviews, regulators have issued several guidelines and standards related to mortgage servicing: April 2011 Consent Orders. In addition to the requirement to conduct file reviews of borrowers who were in foreclosure or had completed foreclosure any time in 2009 or 2010, the original consent orders issued by OCC and the Federal Reserve to 16 servicers also included other requirements, such as submitting a plan for improving the operation of servicers' management information systems for foreclosure and loss mitigation activities. Regulators' examination teams will continue to monitor these requirements and ensure that the aspects of the consent orders that apply are met. National Mortgage Settlement. Five servicers are covered by the National Mortgage Settlement, which includes requirements such as preforeclosure notices to borrowers, procedures to ensure the accuracy of borrower accounts, and quarterly reviews of foreclosure documents. CFPB Mortgage Servicing Rules. These rules were issued in January 2013, became effective January 10, 2014, and apply to all servicers, with some exemptions for small servicers. The rules cover several major topics that address many aspects of mortgage servicing, including specific requirements related to communication with delinquent borrowers and loss mitigation procedures. OCC and Federal Reserve Imminent Foreclosure Standards. In April 2013, OCC and the Federal Reserve issued checklists to the servicers they supervise to establish minimum standards for the handling and prioritization of borrower files that are subject to imminent foreclosure sales. For example, both sets of standards require that once the date of foreclosure is established, the servicer must confirm that the loan's default status is accurate. These requirements address issues identified during the file reviews and cash payment process. For example, to address issues related to borrowers being foreclosed upon while in the process of a loan modification application, OCC and the Federal Reserve's Minimum Standards for Prioritization and Handling of Borrower Files Subject to Imminent Foreclosure Sales require servicers to take steps to verify a borrower's status once a foreclosure date has been established. Specifically, servicers must promptly (1) determine whether the borrower has requested consideration for, is being considered for, or is currently in an active loss mitigation program; and (2) determine whether the foreclosure activities should be postponed, suspended, or cancelled. As another example, to address issues related to communicating loan modification decisions to borrowers, CFPB's rules state that servicers must provide the borrower with a written decision, including an explanation of the reasons for denying the loan modification, on an application submitted within the required time frame. The guidelines also address issues related to servicers' data systems.
For example, CFPB’s rules require that servicers be able to compile a complete servicing file in 5 days or less. CFPB officials noted that this requirement was specifically included to address weaknesses in servicers’ data systems that might still exist. In addition, as previously noted, the OCC and Federal Reserve consent orders required servicers to submit a plan for the operation of their management information systems. The plan needed to include a description of any changes to monitor compliance with legal requirements; ensure the accuracy of documentation of ownership, fees, and outstanding balances; and ensure that loss mitigation, foreclosure, and modification staff have sufficient and timely access to information. Regulators took steps to promote transparency through efforts to keep borrowers and the general public informed about the status and progress of amended consent order and continuing review activities and through posting information publicly on their websites. Regulators also plan to issue final public reports on the cash payment process and foreclosure prevention actions as well as the results of the one file review that continued. These actions, however, have included limited information on processes, such as specific information about the category in which borrowers were placed or how those determinations were made. In our March 2013 report, we found that transparency on how files were reviewed under the foreclosure review was generally lacking and that borrowers and the general public received limited information about the progress of reviews. We recommended that regulators develop and implement a communication strategy to regularly inform borrowers and the public about the processes, status, and results of the activities under the amended consent orders and continuing foreclosure reviews. Since the announcement of the amended consent orders and our March 2013 report, regulators have taken steps to keep borrowers and the general public informed about the status of activities under the amended consent orders and continuing foreclosure reviews. For example, regulators directed that the payment administrator for 14 of the 15 servicers subject to amended consent orders send postcards to approximately 4.4 million borrowers informing them that they would receive a cash payment from their servicer. In addition, regulators directed the administrator to send communications to borrowers subject to the continuing file review to inform them that their reviews were ongoing. OCC staff noted that they anticipated requiring a final communication to borrowers when the review is completed. Regulators also kept the general public informed about the status of activities. For example, regulators conducted two webinars to provide details on the amended consent order activities and published answers to frequently asked questions on their websites. Regulators also used mass media such as press releases and public service announcements to communicate the status of activities. In addition, regulators updated their websites with information on the number and amount of checks issued and cashed under the amended consent orders, and in May 2013, regulators reported this information by state. Finally, regulators also made the cash payment frameworks and borrower categorization results publicly available on their websites.
The frameworks list the payment categories and amounts and also present the overall results of the cash payment process, including the number of borrowers in each payment category. Regulators plan to publicly issue final reports on the direct payment process and foreclosure prevention actions as well as information from the reviews that were terminated and the results of the review that continued. We noted the importance of public reporting to enhancing transparency in our March 2013 report. At that time, regulators planned to release reports on the foreclosure review and cash payment process, but the content of the reports had not been determined. Since our report, regulators have taken additional steps toward making reporting decisions. However, they are still considering the content and timing of these reports. Federal Reserve staff stated that they have worked with OCC to reach out to community groups to get their input on the information to include in public reports, and they are reviewing the types of information on foreclosure prevention actions reported for the National Mortgage Settlement and HAMP. Federal Reserve staff also stated that they anticipate the final report would include information on the terminated reviews. OCC staff said they are conducting examinations to assess the extent to which servicers addressed all aspects of the consent orders, including weaknesses highlighted by the preliminary file reviews, and they anticipate reporting on conclusions of the foreclosure reviews, including the reviews that were terminated. OCC staff stated they are waiting for the results of the continuing review and reports on servicers’ foreclosure prevention actions before making final reporting decisions. Although regulators have taken steps to promote transparency, these actions included limited information on the data regulators considered in negotiating the cash payment obligations and the processes for determining cash payment amounts. Our March 2013 recommendation to implement a communication strategy included not only keeping borrowers informed about the status and results of amended consent order and continuing review activities but also keeping borrowers and the public informed about the processes used to determine those results. In our March 2013 report, we found that more publicly disclosed information about processes could have increased transparency and thereby public confidence in the reviews, given that one of the goals regulators articulated for the foreclosure review was to restore public confidence in the mortgage market. Federal internal control standards state the importance of relevant, reliable, and timely communications within an organization as well as with external stakeholders. In addition, our prior work on organizational transformation suggests that policymakers and stakeholders demand transparency in the public sector, where stakeholders are concerned not only with what results are to be achieved, but also with which processes are to be used to achieve those results. Regulators released limited information on the process used to determine cash payment amounts. Regulators’ joint press release announcing the payment agreement stated that the amounts of borrowers’ payments depended on the type of possible servicer error, and regulators’ websites and webinars provided information on the roles of regulators, servicers, and the payment administrator.
However, regulators did not publicly release information on the criteria for borrower placement in each category, such as the specific loan and borrower characteristics associated with each category. In addition, information about the process for determining cash payment amounts for each category was not communicated to individual borrowers. Borrowers subject to the amended consent orders received postcards informing them they would receive a cash payment. The postcards, however, did not include information about the process by which their payment amounts would be determined. Moreover, the letter accompanying the cash payment did not include information about the category in which a borrower was placed. Consumer groups we interviewed maintained that borrowers should have been given information about the category into which they were placed and an explanation of how they were categorized. Regulators said that borrowers could obtain additional information from other sources. Federal Reserve staff explained that the letter to borrowers does not include information on the borrower’s cash payment category, but they said that a borrower may be able to figure out this information using the publicly issued cash payment framework, which includes cash payment amounts for each category. Regulators also told us that borrowers could call the payment administrator with questions or complaints related to the cash payment process under the amended consent orders. However, according to the payment administrator’s protocol, staff were instructed to provide general information on the cash payment process but did not have specific information about the category in which borrowers were placed or how those determinations were made. Federal Reserve staff stated that borrowers who have complaints about their servicer could also write to their servicer’s regulator directly, but consumer groups said that very few borrowers would file a formal complaint with the regulators because they never received an explanation of what category they were placed in and regulators did not establish an appeals process. Further, letters sent to borrowers stated that the payments were final and there was no appeals process. Regulators told us they did not establish an appeals process because borrowers did not waive their rights to take legal action by accepting the payment. Federal Reserve staff stated that although there was not a process for borrowers to appeal their payments, borrowers who are not satisfied with the payment amounts can pursue any legal claims they may have. With additional information on processes, regulators have opportunities to enhance transparency and public confidence in the amended consent order activities. The majority of cash payments have been deposited, however, so regulators have missed key opportunities to provide information that would have enhanced transparency of the cash payment process for individual borrowers. Moreover, since borrowers cannot obtain further information by formally appealing the results of the direct payment process, the lack of information about the criteria for placement in the various categories may hinder public confidence in the process. The final reports that regulators plan to issue represent an important opportunity to provide additional information on processes to clarify for borrowers and the general public how payment decisions were made.
The amended consent order process—with the distribution of cash payments to 4.4 million borrowers and requirements that servicers provide $6 billion in foreclosure prevention actions—terminated the review of 2009 and 2010 foreclosure-related files for 15 servicers prior to completion. This process addressed some of the challenges identified by regulators with the file review process—for example, it provided cash payments to borrowers more quickly than might have occurred had the file reviews continued. In addition, through the foreclosure prevention component of the amended orders, regulators were able to convey their commitment to specific principles to guide loss mitigation actions—including that servicers’ foreclosure prevention activities provide meaningful relief to borrowers and not disadvantage a specific group. While views varied on the usefulness of the file review process, regulators are taking steps to use what was learned to inform future supervisory activities. While regulators used the amended consent orders to establish principles for foreclosure prevention activities, they did not require examination teams to evaluate or test servicers’ activities related to these principles. In particular, they did not require evaluation or testing of servicers’ policies, monitoring controls, and performance measures to determine the extent to which servicers are implementing these principles to provide meaningful relief to borrowers. In contrast, other parts of the guidance provided to examination teams for oversight of the consent orders do require evaluation and testing, and the requirements in regulators’ supervisory manuals and federal internal control standards also include such requirements. For OCC examination teams, which have completed reviews of servicers’ activities related to the foreclosure prevention principles, additional planned supervisory activities, such as a review of servicers’ foreclosure prevention actions, may help identify concerns with servicers’ implementation of aspects of the foreclosure prevention principles. However, the specific procedures to conduct these additional planned activities have not been established. In comparison, for Federal Reserve examination teams that have not yet completed the reviews, there is an opportunity to implement a more robust oversight process that includes evaluation and testing, but the extent to which the Federal Reserve will take these steps is unclear. In the absence of specific expectations for evaluating and testing servicers’ actions to meet the foreclosure prevention principles, regulators risk not having enough information to determine whether servicers are implementing the principles and protecting borrowers. Finally, although regulators communicated information about the status and results of the cash payment component of the amended consent orders, they missed opportunities to communicate additional information to borrowers and the public about key amended consent order processes. One of the goals that motivated the original file review process was a desire to restore public confidence in the mortgage market. In addition, federal internal control standards and our prior work highlight the importance of providing relevant, reliable, and timely communications, including providing information about the processes used to realize results, to increase the transparency of activities to stakeholders—in this case, borrowers and the public.
Without making information about the processes used to categorize borrowers available to the public, such as through forthcoming public reports, regulators may miss a final opportunity to address questions and concerns about the categorization process and increase confidence in the results. We are making the following three recommendations:

1. To help ensure that foreclosure prevention principles are being incorporated into servicers’ practices, we recommend that the Comptroller of the Currency direct examination teams to take additional steps to evaluate and test servicers’ implementation of the foreclosure prevention principles.

2. To help ensure that foreclosure prevention principles are being incorporated into servicers’ practices, we recommend that the Chairman of the Board of Governors of the Federal Reserve System ensure that the planned activities to oversee the foreclosure prevention principles include evaluation and testing of servicers’ implementation of the principles.

3. To better ensure transparency and public confidence in the amended consent order processes and results, we recommend that the Comptroller of the Currency and the Chairman of the Board of Governors of the Federal Reserve System include in their forthcoming reports or other public documents information on the processes used to determine cash payment amounts, such as the criteria servicers use to place borrowers in various payment categories.

We provided a draft of this report to OCC, the Federal Reserve, and CFPB for comment. We received written comments from OCC and the Federal Reserve; these are presented in appendixes III and IV. CFPB did not provide written comments. We also received technical comments from OCC, the Federal Reserve, and CFPB and incorporated these as appropriate. In their comments on this report, the Federal Reserve agreed with our recommendations and OCC did not explicitly agree or disagree. However, OCC and the Federal Reserve identified actions they will take or consider in relation to the recommendations. For the two recommendations on assessing servicer implementation of foreclosure prevention principles, OCC stated that it included this requirement in its examination plans. OCC added that foreclosure prevention principles will be used as considerations when assessing the effectiveness of servicer actions. We continue to believe that identifying specific procedures for testing and evaluating servicers’ application of the foreclosure prevention principles to their mortgage servicing practices will help regulators determine how effectively servicers’ policies and procedures are protecting borrowers and providing meaningful relief. The Federal Reserve noted that examination teams plan to use testing during their servicer assessments. The Federal Reserve plans to conduct the assessments in 2014, as we noted in the report. For the recommendation on improving the transparency of the consent order processes, OCC stated that it will consider including additional detail about the categorization of borrowers in its public reports. The Federal Reserve said it will consider the recommendation as it finalizes reporting and other communication strategies. Both regulators also noted that they had made information about the foreclosure review and amended consent order processes available on their public websites.
As we discussed in our report, regulators have taken steps to communicate information about the status of activities and results of the amended consent orders, and communicating information on the processes for determining borrowers’ cash payment amounts provides an additional opportunity for regulators to realize their goal of increasing public confidence in these processes. We are sending copies of this report to interested congressional committees, the Board of Governors of the Federal Reserve System, the Consumer Financial Protection Bureau, and the Office of the Comptroller of the Currency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix V. The objectives of this report were to assess: (1) the factors regulators considered in negotiating servicers’ cash payment obligations under the amended consent orders and the extent to which regulators achieved their stated goals for the cash payments; (2) the objectives of the foreclosure prevention actions in the amended consent orders and how well regulators designed and oversaw the actions to achieve those objectives; (3) the extent to which regulators are sharing information from the file review and amended consent order processes; and (4) the extent to which regulators have promoted transparency of the amended consent orders and remaining review. The scope of our work covered the 16 servicers that were issued consent orders in 2011 and 2012 requiring that they conduct file reviews. To address the factors the Office of the Comptroller of the Currency (OCC) and the Board of Governors of the Federal Reserve System (Federal Reserve) considered in negotiating servicers’ cash payment obligations, we interviewed regulatory staff about the factors they considered and analyses they conducted to inform the negotiations. We also asked staff about the extent to which the factors and analyses differed from typical enforcement action negotiations. We reviewed the analyses regulators used to inform the negotiations and other documentation on the decision to replace the foreclosure review with a cash payment agreement, such as OCC’s decision memorandum. We also reviewed data consultants provided to regulators on incurred and remaining costs, progress of reviews, and findings of error. In addition, we conducted a sensitivity analysis to test the impact of changes to major assumptions and a reasonableness review of the final negotiated cash payment amount. According to Office of Management and Budget guidance, a sensitivity analysis examines the effects of changing assumptions and ground rules on estimates. Further, our Cost Estimating and Assessment Guide states that a sensitivity analysis provides a range of results that span a best and worst case spread and also helps identify factors that could cause an estimate to vary. To conduct our sensitivity analysis, we followed three key steps outlined in our Guide: (1) identify the key drivers and assumptions to test, (2) estimate the high and low uncertainty ranges for significant input variables, and (3) conduct this assessment independently for each input variable. We identified and tested major assumptions related to projected costs, error rates, and borrower categorizations.
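To illustrate this one-at-a-time approach, the following minimal Python sketch walks through the three steps; the driver values, uncertainty ranges, and dollar amounts are hypothetical placeholders, not the actual figures from consultants’ reports or regulators’ analyses.

```python
# Minimal one-at-a-time sensitivity analysis mirroring the three steps above.
# All input values are hypothetical placeholders, not the actual GAO inputs.

def total_estimate(months_remaining, monthly_cost, error_rate,
                   eligible_borrowers, avg_remediation):
    """Projected remaining review costs plus projected remediation payments."""
    review_cost = months_remaining * monthly_cost
    remediation = error_rate * eligible_borrowers * avg_remediation
    return review_cost + remediation

# Step 1: identify key drivers; Step 2: estimate low/median/high ranges.
baseline = dict(months_remaining=9, monthly_cost=150e6, error_rate=0.065,
                eligible_borrowers=4.4e6, avg_remediation=15_000)
ranges = {
    "months_remaining": (6, 9, 14),     # shortest / median / longest projection
    "error_rate": (0.03, 0.065, 0.11),  # lowest / aggregate / highest reported
}

# Step 3: vary each driver independently, holding the others at baseline.
for driver, values in ranges.items():
    estimates = [total_estimate(**dict(baseline, **{driver: v})) for v in values]
    print(f"{driver}: ${min(estimates)/1e9:.2f}B to ${max(estimates)/1e9:.2f}B")
```

Each driver is varied in isolation, consistent with step (3), so the printed ranges show how much a single assumption can move the overall estimate.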
We also used the results of our analysis to test the reasonableness of the final negotiated cash payment amount. Our Cost Estimating and Assessment Guide describes a reasonableness review as a process to independently test whether estimates are reasonable with regard to the validity of major assumptions.

Projected costs. To test assumptions related to the projected remaining costs to complete the reviews as reported by consultants, we calculated monthly costs for each consultant using consultants’ cost reports that were available from September 2012 through December 2012. We then selected the shortest, median, and longest projected additional months of review across servicers to calculate the projected costs under these scenarios (see table 4). We compared our calculated costs in these scenarios to regulators’ cost analyses and the final negotiated cash payment amount.

Error rate. To test assumptions related to the error rate, we reviewed error rates in status reports consultants provided to regulators for the 13 servicers that agreed to the payment agreement in January 2013. The amended consent orders implementing the payment agreement required the consultants of the participating servicers to submit data on the progress of the file reviews as of December 31, 2012. We used these data, which the consultants submitted to regulators in the months following the payment agreement, to select the lowest, median, aggregate, and highest error rates reported by consultants and calculated the potential remediation payments under these scenarios (see table 5). We compared our calculated remediation payments under these scenarios to the payment calculated in regulators’ analyses and the final negotiated cash payment amount.

Borrower categorization. To test assumptions related to the categorization of borrowers across the payment categories used in OCC’s error rate analysis, we analyzed borrower distributions for the other five servicers involved in the initial amended consent order negotiations. We used categorizations servicers provided to the regulators during the negotiation process in December 2012. We then calculated the potential remediation, using the 6.5 percent financial harm error rate used in regulators’ analysis, under each scenario (see table 6). We compared our calculated remediation payments under these scenarios to the payment calculated in regulators’ analyses and the final negotiated cash payment amount (a simplified sketch of these scenario calculations appears at the end of this section).

We verified the accuracy of regulators’ analyses by performing some logic tests and recreating the tables and formulas they used for their calculations. To assess the reliability of data on the status and preliminary financial harm error rates we used in our analyses, we collected information from exam team staff for all servicers that participated in the amended consent order payment agreement. Because exam team staff were responsible for the day-to-day oversight of consultants’ work, we collected information on the steps they took to determine whether the data were reasonably complete and accurate for the intended purposes. All exam team staff stated they conducted data reliability activities such as observing data entry procedures and controls, participating in or observing training for the systems used to generate status reports, conducting logic tests, or reviewing status reports. Exam team staff did not note any limitations related to the results of the final reviews completed by consultants as of December 2012 that would affect our use of these data. As such, we determined the data to be sufficiently reliable for the purposes of this report.
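The sketch below, again in Python with hypothetical figures, shows the error-rate scenario calculation and reasonableness comparison described above; only the 6.5 percent aggregate financial harm rate, the roughly 4.4 million eligible borrowers, and the $3.9 billion negotiated amount come from this report, while the other scenario rates and the assumed average remediation payment per harmed borrower are illustrative.

```python
# Potential remediation payments under alternative error-rate scenarios,
# compared against the final negotiated cash payment amount.
# The lowest/median/highest rates and the average remediation payment per
# harmed borrower are illustrative assumptions, not consultants' figures.

ELIGIBLE_BORROWERS = 4_400_000    # approximate population covered by payments
AVG_REMEDIATION = 15_000          # assumed average payment per harmed borrower
NEGOTIATED_AMOUNT = 3.9e9         # final negotiated cash payment amount

scenarios = {
    "lowest": 0.030,
    "median": 0.055,
    "aggregate": 0.065,           # harm rate used in regulators' analysis
    "highest": 0.110,
}

for name, rate in scenarios.items():
    remediation = rate * ELIGIBLE_BORROWERS * AVG_REMEDIATION
    relation = "below" if remediation < NEGOTIATED_AMOUNT else "above"
    print(f"{name:>9} ({rate:.1%}): ${remediation/1e9:.2f} billion, "
          f"{relation} the ${NEGOTIATED_AMOUNT/1e9:.1f} billion negotiated amount")
```

Under these placeholder assumptions, the negotiated amount falls between the lowest- and highest-rate scenario results, which is the kind of range comparison a reasonableness review relies on.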
We were unable to assess the reliability of data on consultants’ incurred costs or servicers’ initial borrower categorization results used in our analyses. Because most consultants had terminated their work on the foreclosure review during our data collection, we had limited access to the underlying cost data reported by consultants to regulators, and regulatory staff told us they did not assess these data. In addition, the initial borrower categorizations performed by servicers during negotiations represented preliminary results that were intended to provide regulators with information about how the cash payment amount might be distributed. These data were described as preliminary by servicers, and neither servicers nor regulatory staff validated the accuracy of the information used during negotiations. Given that limited information was available from the sources and users of these data, we were not able to assess their reliability. As such, we determined that the data related to consultants’ costs and servicers’ initial borrower categorizations are of undetermined reliability. However, because our use of these data is consistent with regulators’ intended use to inform negotiations, we determined that the risk of using data of undetermined reliability was low, and we concluded that the data were appropriate for our purposes in this report. To determine the stated goals for the cash payments and assess the extent to which regulators took steps to ensure servicers achieved them, we reviewed the amended consent orders, OCC’s and the Federal Reserve’s decision memorandums, and statements made by regulators about the amended consent orders, including press releases and speeches or testimony. We then assessed achievement of these goals using data we collected and analyzed and information from interviews we conducted with regulators. Specifically, we reviewed regulators’ instructions to servicers and examination teams for the categorization process and subsequent oversight activities and interviewed OCC headquarters and Federal Reserve Board staff about implementation of these activities and their oversight actions. In addition, we analyzed regulators’ reports on the results of the servicers’ categorization process, in particular information on the number of borrowers placed into each category by servicer and any subsequent changes to categorization results. We also reviewed examination teams’ conclusion memorandums describing their oversight activities to verify and validate servicers’ cash payment categorization activities, and 10 of the 11 examination teams we interviewed or received written responses from provided information about their specific approach. We also interviewed three consultants responsible for categorizing borrowers into some categories—for example, borrowers eligible for protections under the Servicemembers Civil Relief Act (SCRA), Pub. L. No. 108-189, 117 Stat. 2835 (2003) (codified at 50 U.S.C. app. §§ 501-597b)—about their methodology and regulators’ oversight, and seven of the eight servicers we interviewed provided information about their process to categorize borrowers for cash payments and regulators’ role in this process.
To identify the examination teams and servicers to interview, we selected examination teams and servicers that were overseen by each regulator and also considered a range of sizes of eligible populations for the file reviews, including some of the largest servicers. To identify the consultants to interview, we selected consultants who could supplement the information gathered from consultants in our prior work on the file review process. Finally, we assessed the reliability of these data by reviewing related documentation and interviewing payment administrator officials knowledgeable about the data. We determined that these data were sufficiently reliable for the purposes of this report. To assess the objectives for the foreclosure prevention actions and how well regulators designed the actions to realize those objectives, we reviewed the amended consent orders to understand the parameters and requirements for foreclosure prevention actions, reviewed regulators’ decision memorandums, and reviewed regulators’ statements about the foreclosure prevention actions in press releases and speeches or testimony. We also interviewed regulators about their intentions for the actions and the analysis they conducted to support the negotiations of the design and amounts. We compared this process with regulators’ typical processes for issuance of enforcement actions, as described in their supervisory manuals and in interviews with regulators’ staff. We also interviewed three experts familiar with negotiations and the design of settlements, including staff from the National Mortgage Settlement, to understand elements typically considered in the design of settlements. We selected these experts based on their familiarity with similar mortgage servicing settlements or their recognized expertise in the field of settlements involving potential financial harm or where cash payments were to be made to victims. In addition, we interviewed staff from one regulatory agency, the Bureau of Consumer Financial Protection (commonly known as the Consumer Financial Protection Bureau, or CFPB), about their policies and procedures for negotiating enforcement actions, in particular related to mortgage servicing. Finally, we reviewed two settlements that included foreclosure prevention components—the National Mortgage Settlement and the separate California Agreement in the National Mortgage Settlement—to help identify various factors to consider in the design of foreclosure prevention actions in enforcement orders or settlements. Further, to address how regulators oversaw achievement of the objectives of the foreclosure prevention component in the amended consent orders, we considered both regulators’ activities to oversee servicers’ financial obligations and actions to oversee the foreclosure prevention principles in the amended consent orders. To facilitate this process, we reviewed regulators’ instructions to servicers for reporting on their foreclosure prevention obligations and servicers’ reporting submissions for May, July, September, and December 2013. We also reviewed OCC’s and the Federal Reserve’s instructions to their examination teams for oversight of the foreclosure prevention principles.
To further understand regulators’ oversight of the financial obligations and foreclosure prevention principles, we interviewed OCC and Federal Reserve staff, including headquarters and Federal Reserve Board staff and staff from 10 of the 11 examination teams—representing both OCC and the Federal Reserve and a mix of larger and smaller servicers (determined by the number of eligible borrowers from the foreclosure review)—about their oversight activities. We compared these instructions and their implementation with the supervisory expectations in regulators’ supervisory manuals, the supervisory instructions for the other articles of the original consent orders, and federal internal control standards. To supplement our understanding of the foreclosure prevention reporting and oversight activities, we interviewed representatives from six of the eight mortgage servicers we spoke with (representing servicers overseen by both OCC and the Federal Reserve of various sizes based on the size of the eligible population from the foreclosure review) about their activities to comply with the foreclosure prevention requirement and regulators’ oversight activities. We also interviewed staff from the National Mortgage Settlement, which requires five mortgage servicers to provide foreclosure prevention actions, to understand their experience and approach. To assess the extent to which regulators are leveraging and sharing information from the file review process, we analyzed consultants’ preliminary findings from the file review process, in particular information they reported to regulators in exit surveys and during exit interviews with regulators. We also reviewed OCC’s examination teams’ conclusion memorandums from their oversight of the file review process. We compared these with publicly available information on regulators’ findings from the 2010 coordinated file review conducted by OCC, the Federal Reserve, the Federal Deposit Insurance Corporation, and the Office of Thrift Supervision to identify the extent to which the findings were similar. We also interviewed staff from OCC headquarters and the Federal Reserve Board, staff from 10 of the 11 examination teams, and representatives from 8 mortgage servicers about what they learned about mortgage servicing from the preliminary file reviews and cash payment categorization processes and changes in mortgage servicing practices since the 2009 and 2010 period covered by the file review process. In addition, we asked regulator staff, including the examination teams, about steps they had taken or were planning to take to share this information among examination teams or with other regulators, such as CFPB, or to use this information for future oversight. We also interviewed CFPB staff about information they had requested or received about the preliminary file review results. We compared regulators’ plans to share and leverage information with federal internal control standards for recording and communicating information to help management and others conduct their responsibilities. To assess regulators’ efforts to promote transparency of the amended consent orders and remaining review, we reviewed press releases and documents from regulators related to the amended consent orders and the remaining review.
In particular, we reviewed what documents were available about the amended consent orders and the remaining review on the regulators’ websites, such as frequently asked questions, webinars, press releases, and status updates related to check issuance, and analyzed the content of these materials. We also reviewed the payment administrator’s telephone instructions to respond to questions about the amended consent order process. In addition, we reviewed examples of the postcards and letters sent to borrowers to communicate about the amended consent order payments and to provide cash payments. We also interviewed regulator staff about the steps they took to promote transparency and their plans for future reporting. We compared this documentation to federal internal control standards on communications and our work on organizational transformation to identify any similarities or differences. Further, we considered our prior recommendation on lessons learned about transparency of the foreclosure review as it applies to the amended consent order process. Finally, we also conducted interviews with representatives of consumer groups. We conducted this performance audit from May 2013 through April 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We have issued two prior reports on the foreclosure review process. In our first report on the outreach component of the foreclosure review, we found that the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Federal Reserve), and servicers had gradually improved the communication materials for borrowers, but that regulators could make further enhancements to the outreach efforts. In our second report, we identified lessons learned from the file review process that could be used to enhance the activities under the amended consent orders and the continuing reviews. Below we list the recommendations made in each report and the actions taken by regulators in response. In addition to the contact named above, Jill Naamane (Assistant Director), Bethany M. Benitez, Maksim Glikman, DuEwa Kamara, John Karikari, Charlene J. Lindsay, Patricia MacWilliams, Marc Molino, Jennifer Schwartz, Andrew Stavisky, Winnie Tsen, and James Vitarello made key contributions to this report. | In 2011 and 2012, OCC and the Federal Reserve signed consent orders with 16 mortgage servicers that required the servicers to hire consultants to review foreclosure files for errors and remediate harm to borrowers. In 2013, regulators amended the consent orders for all but one servicer, ending the file reviews and requiring servicers to provide $3.9 billion in cash payments to about 4.4 million borrowers and $6 billion in foreclosure prevention actions, such as loan modifications. One servicer continued file review activities. GAO was asked to examine the amended consent order process.
This report addresses (1) factors considered during cash payment negotiations between regulators and servicers and regulators' goals for the payments, (2) the objectives of foreclosure prevention actions and how well regulators designed and are overseeing those actions to achieve objectives, and (3) regulators' actions to share information from the file review and amended consent order processes and transparency of the processes. GAO analyzed regulators' negotiation documents, oversight memorandums, and information provided to borrowers and the public about the file review and amended consent orders. GAO also interviewed representatives of regulators, servicers, and consultants. To negotiate the $3.9 billion cash payment amount in servicers' amended consent orders, the Office of the Comptroller of the Currency (OCC) and the Board of Governors of the Federal Reserve System (Federal Reserve) considered information from the incomplete foreclosure review, including factors such as projected costs for completing the file reviews and remediation amounts that would have been paid to borrowers. To evaluate the final cash payment amount, GAO tested regulators' major assumptions and found that the final negotiated amount generally fell within a reasonable range. Regulators generally met their goals for timeliness and amount of the cash payments. By December 2013, cash payments of between $300 and $125,000 had been distributed to most eligible borrowers. Rather than defining specific objectives for the $6 billion in foreclosure prevention actions negotiated with servicers, regulators identified broad principles, including that actions be meaningful and that borrowers be kept in their homes. In designing the actions, regulators did not analyze available data, such as servicers' recent volume of foreclosure prevention actions, and did not analyze various approaches by which servicers' actions could be credited toward the total of $6 billion. Most servicers GAO spoke with said they anticipated they would be able to meet their obligation using their existing level of foreclosure prevention activity. In their oversight of the principles, OCC and the Federal Reserve are verifying servicers' foreclosure prevention policies, but are not testing policy implementation. Most Federal Reserve examination teams have not begun their verification activities, and the extent to which these activities will incorporate additional evaluation or testing of servicers' implementation of the principles is unclear. Regulators' manuals and federal internal control standards note that policy verification includes targeted testing. Without specific procedures, regulators cannot assess implementation of the principles and may miss opportunities to protect borrowers. Regulators are sharing findings from the file reviews and amended consent order activities among supervisory staff and plan to issue public reports on results, but they have not determined the content of those reports. The file reviews generally confirmed servicing weaknesses identified by regulators in 2010. Regulators are sharing information among examination teams that oversee servicers, and some regulator staff GAO spoke with are taking steps to address weaknesses identified. Regulators also have promoted transparency by publicly releasing information on the status of cash payments. However, these efforts provided limited information on the processes used, such as how decisions about borrower payments were made.
Federal internal control standards and GAO's prior work (GAO-03-102 and GAO-03-669) highlight the importance of providing relevant information on the processes used to obtain results. According to regulators, borrowers could obtain information from other sources, such as the payment administrator, but information on how decisions were made is not available from these sources. In the absence of information on the processes, regulators face risks to public confidence in the mortgage market, the restoration of which was one of the goals of the file review process. OCC and the Federal Reserve should define testing activities to oversee foreclosure prevention principles and include information on processes in public documents. In their comment letters, the regulators agreed to consider the recommendations. |
To increase the involvement of religious organizations in the delivery of social services, the Congress included charitable choice provisions in the legislation for several federal programs. These provisions were designed to remove legal or perceived barriers that religious organizations might face in contracting with the federal government. First enacted in 1996, charitable choice provisions apply to administrators, service providers, and recipients of TANF and WTW funds, as established through PRWORA. Subsequently, the Congress included charitable choice provisions in the 1998 reauthorization of the CSBG program and the amendments to the Public Health Service Act in 2000 affecting the SAPT block grant program. Funding levels for programs with charitable choice provisions vary considerably, with TANF having the highest level of funding (see table 1). These programs allocate funds in a variety of ways. TANF, CSBG, and SAPT are block grants, which are distributed in lump sums to states. WTW has two funding streams, one consisting of state formula grants that are mostly passed on to localities and the other consisting of a smaller portion of funds called national competitive grants, which the Department of Labor awarded directly to local applicants. Most federal funding for these programs is administered by state or local government entities, which have the ability to contract with social service providers, including religious organizations. In addition to establishing that FBOs can compete for public funds while retaining their religious nature, charitable choice provisions are intended to safeguard the interests of the various parties involved in financial agreements to provide services (see table 2). While charitable choice provisions vary somewhat by program, they all share common themes of protecting religious autonomy among service providers, safeguarding the interests of beneficiaries of federally funded services, and ensuring that all contracting agencies, including religious organizations, are held financially accountable. Overall, FBOs contracted for a small proportion of the government funding available to nongovernmental contractors under the four programs we examined. Contracts with FBOs accounted for 8 percent (or about $80 million) of the $1 billion in federal and state TANF funds spent by state governments on contracts with nongovernmental entities in 2001, and 2 percent (or about $16 million) of the $712 million in Welfare-to-Work competitive grant funds in fiscal years 1998 and 1999. National data are not available on the proportion of contracted funds FBOs received for CSBG, SAPT, and Welfare-to-Work formula grants. However, state data indicate that FBOs received a small proportion of CSBG and SAPT funds in the five states we visited. All FBOs that we visited had tax-exempt status and most were incorporated separately from religious institutions. In addition, a majority had established contracts with the government before the passage of charitable choice provisions in legislation; most were affiliated with Christian denominations; and most contracted for TANF funds. Under the contracts we examined, FBOs provided an array of services in line with the key uses of each program’s funds and sometimes provided additional services such as mentoring or fatherhood training. Contracting with FBOs constituted a relatively small proportion of all contracting with nongovernmental entities using federal and state TANF funds in 2001, according to our national survey.
TANF contracting occurs only at the state level in 24 states, only at the local level in 5 states, and at both levels in 20 other states and the District of Columbia. TANF contracting does not occur in South Dakota. The majority of the approximately $1 billion in federal and state TANF funds spent by state governments on contracts with nongovernmental entities nationwide went to secular nonprofit organizations, as shown in figure 1. In contrast, contracts with FBOs accounted for 8 percent of the contracted funds. While FBOs received a small proportion of federal and state TANF funds contracted out in 2001 at the state level, this proportion varied considerably across states, as shown in table 3. New Jersey spent over 32 percent of these funds on contracts with FBOs. Nine states and the District of Columbia spent more than 15 percent of these federal and state TANF funds on contracts with FBOs. In contrast, 23 states awarded to FBOs less than 5 percent of the federal and state TANF funds they contracted out to nongovernmental organizations. While table 3 depicts contracting by state governments, it does not include information on contracting by local entities. In states such as California, New York, and Texas, TANF contracting occurs predominately at the local level. Our national survey of TANF contracting identified more than $500 million in local government contracts with nongovernmental entities. About 8 percent of these funds went to FBOs. In addition, national data show that a small proportion of WTW competitive grant funds went to FBOs. According to Labor, 6 of 191 contracts for these funds went to FBOs in fiscal years 1998 and 1999; these contracts totaled $16.2 million, or approximately 2 percent of WTW competitive grant funds in those years. National data are not available to indicate the magnitude of contracting with FBOs in other charitable choice programs we examined. Labor did not have information about the proportion of WTW formula grants that went to FBOs. States administer these grant funds through local entities. In addition, HHS has not compiled national data on the level of contracting with FBOs using CSBG and SAPT funds. Although national information is not available, in the five states we visited we found that FBOs received 9 percent or less of SAPT funds contracted out by states. In addition, FBOs represented between 2 and 20 percent of the organizations licensed or certified by these five states to provide substance abuse treatment services, as shown in table 4. In addition, in the five states we visited, FBOs received a small proportion of the overall CSBG funds passed through by states. States allocate these funds to “eligible entities,” primarily community action agencies (CAAs), which include mostly private, nonprofit organizations but also some public agencies. None of the eligible entities in the five states we visited were FBOs. However, some of them subcontracted with other providers, including FBOs, for services. In Texas and Washington, FBOs received more than half of these subcontracted funds, as shown in table 5. All of the FBOs we visited had tax-exempt status; most were incorporated separately from religious institutions; and a majority of them had a fairly long history of contracting with the government. While 31 of the 35 FBO contractors we visited had been established to be independent of religious institutions, all of them had tax-exempt status under section 501(c)(3) of the Internal Revenue Code.
Several of these FBOs told us that they needed this status to compete for nongovernmental sources of funding, such as funding from private foundations. Some FBOs noted that this status established them as a legal entity separate from a church so that the church would be protected from liability for the services the FBO offered. Moreover, some FBO officials told us that 501(c)(3) status gave their program added credibility and an established presence in the community. Of the 35 FBO contractors we visited, 21 had contracted with the government before the passage of charitable choice legislation in the relevant programs. One FBO had provided services through government contracts since 1913. The FBOs we selected for interviews in the five states we visited varied in size and structure but shared some commonalities. While some FBOs were very small, operating on a budget of less than $200,000, others had large annual budgets, as high as $60 million. Some of the FBOs we visited operated independently; some were multidenominational coalitions of churches; and others were affiliated with a national religious organization, such as Catholic Charities, the Association of Jewish Family & Children’s Services, or the Salvation Army. Twenty-nine of the 35 FBOs were affiliated with the Christian faith and included various Christian denominations, for example, Baptist, Methodist, and Lutheran. Finally, about two-thirds of these FBOs contracted for TANF-funded services. FBOs we visited contracted for services that matched the key uses of each program’s funds and sometimes included additional features. While more FBOs provided services closer to the key uses of TANF program funds, such as job preparation, several of the FBOs contracting for TANF services included fatherhood programs or forms of mentoring in their programs. FBOs that contracted for WTW funds mostly provided job training and placement; one also helped clients find daycare services. FBOs contracting for SAPT funds provided prevention and treatment of substance abuse. The two FBOs that contracted for CSBG funds offered services that included parent education, case management for families with a variety of needs, and medical services. While charitable choice has created opportunities for FBOs, several factors continue to constrain some FBOs from contracting with the government. These factors include FBOs’ limited awareness of funding opportunities, limited administrative and financial capacity, inexperience with government contracting, and beliefs about the separation of church and state. However, most of these limitations are not unique to FBOs but are common to small, inexperienced organizations seeking to enter into contracts with government. Although most officials in the states we visited reported no legal barriers to prevent religious organizations from partnering with government, some officials noted that their history of a strong separation of church and state might lead all parties to be cautious about collaboration. Government agencies in the states we visited differed in their approaches to identification and removal of constraints that can limit financial contracting between FBOs and government. Most states we visited have broadened access to information and provided assistance for FBOs, while others have been less active in identifying and addressing constraints. 
Federal agencies have also taken steps to address constraints by establishing funding for small faith-based and community organizations to develop or expand model social service programs. Small FBOs are generally unaware of funding opportunities unless they have past experience with government, according to some FBO and government officials we interviewed. Notices about funding opportunities are sent to current provider mailing lists, to newspapers, and sometimes to agency Web sites. Because state and local governments are not required to promote a broader awareness of funding opportunities for new providers under current charitable choice provisions, government agencies in less active states have not taken steps to disseminate information about funding opportunities to FBOs. As a result, potential service providers that are not on current notification lists, including FBOs, may remain unaware of upcoming funding opportunities while experienced providers have advance notice. Moreover, small, inexperienced FBOs are disadvantaged by their limited administrative capacity, according to many government and FBO officials we interviewed. Small FBO providers often lack the administrative resources necessary to deal with the complex paperwork requirements of government contracting. Local program officials said that some new FBO providers may have never submitted a budget, or may overestimate their capacity to provide services, or may have difficulty with reporting requirements. Some small FBOs we interviewed rely on one person—who may have other duties—or a small number of staff and volunteers, to perform administrative tasks. Government officials told us that small faith- based contractors inexperienced in government contracting often required administrative and technical assistance. Similarly, FBO officials have expressed concerns about the financial constraints of government contracting. Some FBO officials we interviewed reported experiencing cash flow problems resulting from start-up costs and payment delays. In some cases, their churches helped with start-up funds, or other expenses, including overhead and indirect assistance. Furthermore, in a March 2001 survey conducted by the Georgia Faith- Based Liaison, religious leaders reported that while they were interested in government contracting, they had concerns regarding their limited financial capacity to manage publicly funded programs. These same leaders also expressed concerns about their financial capacity if they were to offer child-care or social services for welfare clients because of the risks associated with payment delays. Most state and local officials in the states we visited reported that no legal barriers exist to prevent FBOs from contracting with the government in programs with charitable choice provisions. However, some officials noted that perceptions about the separation of church and state might cause both FBO and government officials to be cautious about entering into contracts. One state lawmaker in Georgia identified the state’s constitution as one source of this perception, noting that it contains language forbidding the funding of religious organizations with state funds. Because of confusion over whether the state constitution also applied to federal funds, Georgia adopted a law that specified that charitable choice allowed religious organizations to receive federal funding. 
Most government officials we interviewed told us that state licensure or certification requirements for substance abuse treatment providers do not restrict religious organizations from participating in publicly funded treatment programs. However, in all of the states we visited, substance abuse treatment providers are required to be licensed or certified in order to be eligible for publicly funded contracts. Government officials noted that because the health and safety requirements attached to licensing can be costly, they might pose a barrier to small FBOs that want to be licensed to offer this service. To address this, lawmakers in the state of Washington proposed easing licensing requirements for FBO substance abuse treatment providers. However, this proposal was not approved because of concerns that this would lower standards for FBO providers. Government and FBO officials we interviewed in several states reported that some FBOs prefer not to partner with government for various reasons. For example, some faith-based providers do not want to separate their religion from their delivery of services. In a recent survey conducted by Oklahoma’s Office of Faith-Based and Community Initiative to identify barriers to collaboration, religious leaders reported that they were concerned about potential erosion of their religious mission, government intrusion into affairs of the congregation, and excessive bureaucracy. While states we visited differed in their approaches, some states have taken more active strategies toward addressing factors that constrain FBOs from government contracting. Some states, such as Texas and Virginia, established task forces to advise the governor or legislature about actions for improving government collaboration with FBOs. To promote awareness and facilitate collaborations with FBOs, 20 states have appointed faith-based liaisons since the enactment of charitable choice provisions in the current law. Four of the five states we visited directed outreach activities to engage religious leaders and government officials in discussions of the perceived barriers to collaboration and to promote awareness of funding opportunities. Some states took steps to strengthen the administrative capacity of FBOs by providing informational opportunities and developing educational material for FBOs unfamiliar with government contracting. Indiana, Virginia, and Texas conducted informational sessions and workshops for FBOs. In addition, Virginia and Indiana created educational handbooks dedicated to new faith-based social service providers with information on topics such as applying for government funding, writing grants, and forming a nonprofit, tax-exempt 501(c)(3) organization. Some state and local officials we interviewed told us that they offer assistance and administrative information to any small, new provider during the pre-contracting phase. Other states, which we did not visit, reported that they created separate funding for their faith-based initiatives. New Jersey set up its own Office of Faith-Based and Community Initiative and funded it using only state funds, according to the New Jersey faith-based liaison. This office began awarding grants for services such as day care, youth mentoring, and substance abuse treatment to FBOs in 1998 and plans to award $2.5 million in grants this year to faith-based providers.
North Carolina developed a “Communities of Faith Initiatives,” which set aside $2.45 million in TANF funds for its Faith-Demonstration awards in 1999 and 2000 to contract with various FBOs for job retention and follow-up demonstration pilots. Federal agencies have also acted to identify and address constraints to government collaborations with FBOs. President Bush issued two executive orders in January 2001, establishing the White House Office of Faith-Based and Community Initiatives and Centers for Faith-Based and Community Initiatives in five federal agencies. These agencies have reported on barriers to collaboration with FBOs and outlined recommendations to address some of the barriers. Moreover, a Compassion Capital Fund of $30 million was approved in the fiscal year 2002 budget as part of the Labor, HHS, and Education appropriations. The funds are to be used for grants to charitable organizations to emulate model social service programs and encourage research on the best practices of social services organizations. In addition, Labor established another funding source to enhance collaborations with faith-based and community providers. Labor’s Employment and Training Administration announced on April 17, 2002, the availability of grant funding geared toward helping faith-based and community-based organizations participate in the workforce development system. In the five states we visited, understanding and implementation of charitable choice safeguards differed, and the incidence of problems involving safeguards is unknown. A few of the safeguard provisions specified in federal law are subject to interpretation, and federal agencies have issued limited guidance on how to interpret them. As a result, some government and FBO officials expressed confusion concerning two matters: (1) allowable activities under the prohibition on the use of federal funds for religious instruction or proselytizing and (2) FBOs’ ability to hire on the basis of faith. State and local government entities also differed in how they interpret the charitable choice safeguards and their approaches to communicating them to FBOs. Officials in the states we visited reported receiving few complaints from FBO clients. These officials relied on complaints and grievance procedures to identify discrimination or proselytizing, and in some cases FBOs and clients may not be aware of the charitable choice safeguards. Therefore, violations of the safeguard requirements may go unreported or undetected. In the 6 years since charitable choice provisions were passed as part of PRWORA, federal agencies have issued limited guidance to state agencies concerning charitable choice safeguards—such as the prohibition on the use of federal funds for religious instruction or proselytizing—and how they should be implemented. Even though HHS has recently created a charitable choice Web site outlining most of the safeguards and has sponsored workshops featuring charitable choice issues, it has not issued guidance to states on the meaning of the provisions designed to safeguard parties involved in government contracting. According to an HHS official, although the agency has drafted guidance for charitable choice provisions as they apply to substance abuse prevention and treatment programs, this document has not been released.
HHS officials told us that the agency did not write regulatory language concerning charitable choice and TANF because PRWORA specifically restricts HHS from regulating the conduct of states under TANF, except as expressly provided in the law. While PRWORA includes charitable choice provisions, the law does not indicate that HHS may prescribe how states must implement these provisions. With respect to CSBG funds, HHS’s Office of Community Services has distributed an information memorandum to states communicating the safeguards as they are listed in the CSBG law, but this memorandum does not offer guidance on how states should interpret the safeguard provisions. Finally, Labor’s solicitation of grant applications for WTW competitive grants specifically mentioned that FBOs were eligible to apply for the funds, but Labor did not issue guidance concerning charitable choice safeguards. Labor reported that in the case of WTW formula grants, the only information it gave to states was to note charitable choice provisions in the planning guidance it issued initially for the program. Most state and local officials we interviewed knew that charitable choice provisions were meant to allow FBOs to participate in the contracting process on the same basis as other organizations and understood that the law prohibits the use of public funds for religious worship, instruction, or proselytizing; however, they often differed in their understanding of allowable religious activities. Several state and local officials reported that prayer was not allowed in the delivery of publicly funded social services, while many FBO officials said that voluntary prayer was permissible during such services. PRWORA and other laws with charitable choice provisions do not define what constitutes proselytizing or religious worship, and federal guidance concerning this matter has not been issued to state and local government entities. Without guidance from HHS, consistency in interpretations is unlikely. Some state, local, and FBO officials we interviewed were unaware of the charitable choice safeguard allowing religious organizations to retain a limited exemption from federal employment discrimination law. This safeguard exempts religious organizations from the prohibition against discrimination on the basis of religion in employment decisions, even when they receive federal funds. For example, even though the law allows FBOs to make hiring decisions on the basis of faith, one government official said that the boilerplate language in the agency’s contracts with service providers specifically indicates that providers are not allowed to discriminate in employment decisions on the basis of religion. Other state and local officials we interviewed were aware of this safeguard, but some perceived it to be in conflict with local antidiscrimination laws. In particular, one local agency official said that up to 17 percent of the local population consisted of sexual minorities and expressed concern that they would be discriminated against in both the hiring and the delivery of services. In contrast, almost all FBO officials we interviewed said that they do not consider faith when making hiring decisions for any of their organizations’ positions. In addition, all FBO officials we interviewed said they do not consider the faith of the client in the delivery of their services. Some states were more active than others in communicating charitable choice safeguards to the various parties involved in contracting.
For example, the state of Virginia enacted legislation to include all charitable choice provisions in Virginia’s procurement law. These provisions were included in its technical assistance handbook for faith- and community-based organizations and used as a curriculum for educating over 1,000 representatives from faith- and community-based groups on charitable choice safeguards, such as the FBOs’ right to display religious symbols. Virginia also distributed a statement that local agencies under Virginia procurement law must give to all clients informing them of their right to an alternative (nonreligious) provider under charitable choice. Indiana’s Family and Social Services Administration implemented a similar practice. States also communicated the safeguards by including various charitable choice provisions in contracts or requests for proposals (RFP). State and local government contracting entities in Indiana, Virginia, and Texas included information in their TANF RFPs specifically stating that FBOs were eligible to apply for federal funds. The Indiana Family and Social Services Administration’s Indiana Manpower Placement and Comprehensive Training Program and the Texas Department of Human Services included all charitable choice safeguards in their contracts with TANF service providers. Georgia has recently passed legislation to implement charitable choice provisions; however, neither Georgia nor Washington currently includes any charitable choice language in its TANF contracts or RFPs. Washington state officials said that after reviewing the charitable choice statutory provisions, they decided that no action was required because they already contracted with FBOs. Government officials said that in practice, safeguards were most often verbally communicated, many times through technical assistance workshops or bidders’ conferences. However, most of the FBOs we interviewed said that the contracting agency had not explained the provisions to them. In addition, few local and FBO officials we interviewed recalled receiving any guidance on the safeguards, informal or otherwise, from state or local officials, respectively. In the five states we visited, government officials reported few problems concerning FBO use of federal funds for proselytizing, discrimination against clients, or client requests for alternative (nonreligious) providers; however, the incidence of violations of these safeguard requirements is unknown. FBOs we interviewed did not report any intrusive government behavior that interfered with their ability to retain their religious nature under charitable choice. These FBOs often displayed religious symbols and none said that government officials restricted this ability under charitable choice by asking them to remove religious icons. In Texas, one lawsuit was filed against an FBO for allegedly using public funds to purchase Bibles for a charitable choice program, and the case was dismissed in federal court. However, almost all of the government and FBO officials we interviewed said that they had not received any complaints from clients about the religious nature of an FBO. Officials in the five states we visited also said that few clients had asked for an alternative (nonreligious) provider, one of the charitable choice protections afforded to clients who object to receiving services from a religious organization.
However, only two of the five states we visited, Indiana and Virginia, issued written guidance to inform clients that they had this right to an alternative (nonreligious) provider, and these two states only recently issued such guidance. Texas includes such information in its TANF contracts, but requires that the provider communicate this information to the client. Failure to communicate information about this safeguard to clients raises the possibility that some clients who may prefer to receive services from a nonreligious provider may not be aware of their right to do so. The majority of state and local agencies relied on complaint-based systems to identify violations of the charitable choice safeguard requirements. Agency officials typically monitored financial and programmatic aspects of the services. A few officials said that any “red flags” would show up during regular programmatic monitoring, and that such indications would be the basis for further investigation. Nonetheless, it is not clear whether there are violations of the safeguard requirements that go unreported or undetected because clients and FBOs may not be aware of the safeguard provisions. FBOs are held accountable for performance in the same way as other organizations that contract with the government, according to state and local officials in the five states we visited. Most officials said that all contractors are held accountable on the basis of the same standards, such as those contained in the contract language. None of the officials said that FBOs are held to a different standard, either higher or lower, compared to other contractors. Most agencies responsible for monitoring contractors said that they monitored all contracting organizations in the same way, whether faith-based or not. None of the state and local officials we interviewed said that they monitored FBOs differently from other organizations. Monitoring activities included program audits, financial audits, and regular performance reports from FBOs. Although FBOs are held accountable for performance in the same way as non-FBOs, comparative information on contractor performance is unavailable for several reasons. One reason is that cost-reimbursement contracts, used by many of the agencies in the five states visited, pay contractors on the basis of the allowable costs they incur in providing services, rather than performance outcomes—the results expected to follow from a service. In contrast, performance-based contracts, which were used by some of the agencies visited, pay contractors on the basis of the degree to which the services performed meet the outcomes set forth in the contract. Examples of such performance outcomes include the percentages of clients that obtain or retain employment for a specified period of time. However, even when contracts specified expected outcomes, some state and local officials said that comparative information on contractor performance was unavailable. In the five states, specified performance outcomes sometimes varied with each contractor individually, often because contractors either provided different services or the same services to different populations. In Indiana, for example, TANF contractors proposed their performance outcomes as part of the bidding process on the basis of the local agency’s needs. 
While specified performance outcomes sometimes differed on the basis of the services provided and the populations served, none of the state and local officials told us that these performance outcomes varied according to whether the contractor was faith based. While contractors shared the same specified performance outcomes in a few cases, state and local officials had not compared the performance of FBOs to that of other contractors. Many officials told us that they did not track the performance of FBOs as a group at all. For example, one state-level agency tracked substance abuse treatment outcomes by providers but had not identified which contractors were FBOs. Most state and local officials that provided their opinion believed that their FBO service providers performed as well as or better than other organizations overall, even though they did not provide data regarding FBO performance. Research efforts are currently under way to provide information on the performance of FBOs in delivering social services. Researchers at Indiana University-Purdue University Indianapolis are conducting a 3-year evaluation comparing the performance of FBOs and non-FBOs in Indiana, Massachusetts, and North Carolina. Researchers expect to complete the study in 2003. In addition, in February 2002, The Pew Charitable Trusts awarded a $6.3 million grant to the Rockefeller Institute of Government, based at the State University of New York in Albany, to study the capacity and effectiveness of FBOs in providing social services and other issues. While HHS and Labor have taken steps to increase awareness of funding opportunities for religious and community organizations, state and local government officials and FBO officials continue to differ in their understanding of charitable choice rules, particularly regarding specific safeguards designed to protect the various parties involved in financial arrangements, including FBOs and clients. In addition, clients are sometimes not being informed about the safeguards that are specifically designed to protect them. This is a problem because government entities generally rely on complaints from clients to enforce such safeguards. When all parties are not fully aware of their rights and responsibilities under charitable choice provisions, violations of these rights may go undetected and unreported. While HHS officials said that they interpret PRWORA to mean that the agency does not have the authority to issue regulations on charitable choice for TANF programs, HHS does have the authority to issue other forms of guidance to states for TANF programs. Additional guidance to clarify the safeguards and suggest ways in which they can be implemented would promote greater consistency in the way that government agencies meet their responsibilities in implementing charitable choice provisions. Without guidance from HHS, consistency in the interpretation of charitable choice provisions is unlikely. Because the WTW funds were not reauthorized and all funds have been distributed to grantees, the issuance of guidance by Labor to states is no longer needed.
In order to promote greater consistency of interpretation and implementation of charitable choice provisions, we recommend that the Secretary of HHS issue guidance to the appropriate state and local agencies administering TANF, CSBG, and SAPT programs on charitable choice safeguards, including the safeguard prohibiting the use of federal funds for religious worship, instruction, or proselytizing and the safeguard concerning a client’s right to an alternative (nonreligious) provider. In particular, this guidance should offer clarification concerning allowable activities that a religious organization may engage in while retaining its religious nature. We provided a draft of this report to HHS and Labor for their review. HHS agreed with our recommendation and said that it is in the process of developing and issuing guidance to the appropriate state and local agencies administering these programs. HHS also provided detailed information on how it plans to use the $30 million Compassion Capital Fund, which is intended to assist FBOs and community-based organizations. HHS’s comments are reprinted in appendix II. Labor had no formal comments. HHS and Labor also provided technical comments that we incorporated as appropriate. As arranged with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the Secretary of Health and Human Services, the Secretary of Labor, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7215. Other contacts and staff acknowledgments are listed in appendix III. To obtain specific information about how charitable choice has been implemented, we visited 5 states—Georgia, Indiana, Texas, Virginia, and Washington. We selected these states to obtain a range in the levels of both state government activities with regard to faith-based initiatives and contracting with faith-based organizations, as well as geographic dispersion. In addition, we conducted telephone interviews with faith-based liaisons established in 15 states (these were all of the liaisons that had been established as of September 2001). To identify what is known about the extent and nature of faith-based organization (FBO) contracting, we compiled information from several sources. We analyzed results from our national survey of Temporary Assistance for Needy Families (TANF) contracting of all 50 states, the District of Columbia, and the 10 counties with the largest federal TANF-funding allocations in each of the 13 states that locally administer their TANF programs. In addition, we interviewed state and local program officials that administer TANF, Welfare-to-Work (WTW), Community Services Block Grant (CSBG), and Substance Abuse Prevention and Treatment (SAPT) funded programs in the states we visited. Finally, we analyzed documents and data provided to us by federal, state, and local officials. To identify the extent of FBO contracting in the WTW program, we obtained national information from the Department of Labor, which oversees this program.
To identify the extent of FBO contracting in the SAPT block grant programs in the 5 states visited, we contacted state officials responsible for these programs to obtain data on certified substance abuse treatment providers eligible to receive federal funds and contracting under this program. To identify the extent of FBO contracting in the CSBG programs in these states, we contacted state officials responsible for CSBG funded programs to obtain data on FBO contracting and subcontracting. To identify the nature of services provided in the four programs, we contacted federal, state and local officials overseeing these programs. In addition, we visited FBOs that contracted with the government and some that did not have contracts. We also reviewed relevant documents related to the contracting process. To obtain information on the implementation of charitable choice, including factors that constrain FBOs in contracting with the government, implementing safeguard provisions, and the performance of FBOs, we met with officials at the Departments of Health and Human Services and Labor in Washington, D.C., that oversee the TANF, WTW, CSBG, and SAPT programs. We conducted telephone interviews with faith-based liaisons in 15 states and on-site interviews with state and local officials in various locations in Georgia, Indiana, Texas, Virginia, and Washington. To obtain the perspective of FBOs, we also interviewed FBO officials that have had contracts with the government under these programs, as well as some that do not have contracts with the government. In addition, we interviewed researchers that have conducted related studies on charitable choice implementation and the relative performance of FBOs. We also reviewed audit reports for the two federal agencies that oversee these programs. Finally, we analyzed documents that we obtained from federal, state, and local officials, including contracts, guidance, and communications regarding charitable choice implementation. In addition to the above contacts, Mary E. Abdella, Richard P. Burkard, Jennifer A. Eichberger, Randall C. Fasnacht, and Nico Sloss made important contributions to this report. | The federal government spends billions of dollars annually to provide services to the needy directly, or through contracts with a large network of social service providers.
Faith-based organizations (FBO), such as churches and religiously affiliated entities, are a part of this network and have a long history of providing social services to needy families and individuals. In the past, religious organizations were required to secularize their services and premises, so that their social service activities were distinctly separate from their religious activities, as a condition of receiving public funds. Beginning with the passage of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996, Congress enacted "charitable choice" provisions, which authorized religious organizations to compete on the same basis as other organizations for federal funding under certain programs without having to alter their religious character or governance. The statutory provisions cover several programs, including Temporary Assistance for Needy Families (TANF) and Welfare-to-Work. Similar provisions also apply to the Community Services Block Grant and the substance abuse prevention and treatment programs. GAO found that faith-based organizations receive a small proportion of the government funding provided to nongovernmental contractors. Contracts with faith-based organizations accounted for 8 percent of the $1 billion in federal and state TANF funds spent by state governments on contracts with nongovernmental entities in 2001. Although charitable choice was intended to allow FBOs to contract with government in these programs, several factors continue to constrain the ability of small FBOs to contract with the government. These factors include FBOs' lack of awareness of funding opportunities, limited administrative and financial capacity, inexperience with government contracting, and beliefs about the separation of church and state. State and local officials differed in their understanding and implementation of certain charitable choice safeguards, such as the prohibition on the use of federal funds for religious worship or instruction; however, the incidence of problems involving safeguards is unknown. Faith-based organizations are held accountable for performance in the same way as other organizations contracting with the government. However, little information is available to compare the performance of FBOs to that of other organizations. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The nation’s special operations forces provide the National Command Authorities a highly trained, rapidly deployable joint force capable of conducting special operations anywhere in the world. In November 1986, Congress enacted section 1311 of Public Law 99-661, which directed the President to establish USSOCOM, a unified combatant command to ensure that special operations forces were combat ready and prepared to conduct specified missions. USSOCOM’s component commands include AFSOC, the Army Special Operations Command, the Naval Special Warfare Command, and the Joint Special Operations Command. AFSOC, located at Hurlburt Field, Florida, deploys and supports special operations forces worldwide. To ensure that special operations were adequately funded, Congress further provided in section 1311 of Public Law 99-661 that the Department of Defense create for the special operations forces a major force program (MFP) category for the Future Years Defense Plan of the Department of Defense. Known as MFP-11, this is the vehicle to request funding for the development and acquisition of special operations-peculiar equipment, materials, supplies, and services. The services remain responsible under 10 U.S.C. section 165 for providing those items that are not special operations-peculiar. Since Operation Desert Storm, AFSOC’s threat environment has become more complex and potentially more lethal. More sophisticated threat systems, both naval and land-based, have been fielded, and the systems are proliferating to more and more countries. Even nations without complex integrated air defense systems have demonstrated the capability to inflict casualties on technologically superior opponents. According to threat documents, worldwide proliferation of relatively inexpensive, heat-seeking missiles is dramatically increasing the risk associated with providing airlift support in remote, poorly developed countries. Increased passive detection system use is also expected throughout lesser developed countries. Passive detection allows the enemy to detect incoming aircraft without alerting the crew that they have been detected, thereby jeopardizing operations. Finally, commercially available, second-generation night vision devices, when linked with warfighter portable air defense systems (e.g., shoulder-fired missiles), provide these countries with a night air defense capability. This night air defense capability is significant because AFSOC aircrews have historically relied on darkness to avoid detection. AFSOC aircraft carry a wide variety of electronic warfare systems to deal with enemy threat systems. Some of AFSOC’s systems are common with systems used by the regular Air Force, while others are unique to special operations. Memoranda of Agreement (MOA) between USSOCOM and the military services lay out specifically the areas of support the services agree to undertake in support of the special forces. An MOA, dated September 16, 1989, and one of its accompanying annexes, dated February 22, 1990, entered into between the Air Force and USSOCOM list those items and services the Air Force agrees to fund in support of AFSOC’s special operations mission. This list includes modifications common to both AFSOC and regular Air Force aircraft, and electronics and telecommunications that are in common usage. 
Part of AFSOC’s electronic warfare equipment for fixed-wing aircraft is acquired with USSOCOM MFP-11 funds as special operations-peculiar items because the Air Force historically has employed very little electronic warfare equipment on its C-130s. AFSOC’s acquisition strategy for electronic warfare equipment is contained within AFSOC’s Technology Roadmap. The Technology Roadmap identifies and ranks operational deficiencies and links the deficiencies to material solutions. The Roadmap flows out of AFSOC’s mission area plans for mobility, precision engagement/strike, forward presence and engagement, and information operations. For C-130s, the Roadmap indicates that AFSOC has serious electronic warfare operational deficiencies in several areas and identifies solutions for each of these operational deficiencies. These solutions include introducing a mix of new systems and making upgrades to older systems. (See app. II for descriptions of AFSOC’s C-130 aircraft.) AFSOC’s acquisition strategy is sound because it is based on eliminating operational and supportability deficiencies confirmed by an Air Force study, test reports, and maintenance records. According to AFSOC officials responsible for electronic warfare acquisition, AFSOC’s C-130s are most vulnerable to three types of threat systems: (1) infrared missiles, (2) passive detectors, and (3) radar-guided missiles. These deficiencies have become more critical since Operation Desert Storm in 1991 as more sophisticated threats have been developed and spread to more areas of the world. An ongoing Air Force Chief of Staff-directed study, the Electronic Warfare Operational Shortfalls Study, confirms what AFSOC officials maintain. This study found that there are many electronic warfare-related operational deficiencies within the overall Air Force, including the C-130 community. The study identified deficiencies with missile warning system missile launch indications and warning times, infrared expendables and jamming effectiveness, signature reduction, passive detection, situational awareness, and electronic warfare support equipment. Classified test reports and threat documentation corroborate the study’s findings. According to Air Force officials, electronic warfare deficiencies within Air Force components, including AFSOC, are so extensive that the solutions necessary to correct all of them are not affordable within the framework of Air Force fiscal year 2000-2005 projected budgets. AFSOC’s aging electronic warfare systems are also failing more often and requiring more staff hours to support. According to AFSOC’s Technology Roadmap and maintenance records, all AFSOC electronic warfare systems have some supportability problems. AFSOC maintenance personnel told us that they are working more hours to repair the systems, and maintenance records show that system failures are becoming more frequent. The ALQ-172(v)1 high band radar jammer in particular is problematic, requiring more staff hours for maintenance than any other AFSOC electronic warfare system. The staff hours charged for maintaining the ALQ-172(v)1 represent 34 percent of the total time charged to maintaining all electronic warfare systems from 1995 through 1997. AFSOC has made several efforts to correct deficiencies and maximize commonality in electronic warfare systems.
USSOCOM is funding the Common Avionics Architecture for Penetration (CAAP) program, which is designed to make AFSOC’s C-130 aircraft less susceptible to passive detection, enhance the aircrews’ situational awareness, lower support costs, and improve commonality. AFSOC has sought to begin several other efforts in the past several years, as well, but USSOCOM has rejected these requests. In addition to addressing deficiencies identified in the Technology Roadmap, AFSOC is trying to improve commonality among its electronic warfare systems by eliminating some of those systems from its inventory. For example, it is replacing the ALR-56M radar warning receiver on its AC-130U Gunships with the ALR-69 radar warning receiver already on the rest of its C-130s. AFSOC also planned to replace ALQ-131 radar jamming pods on its AC-130H Gunships with a future upgraded ALQ-172(v)3 radar jammer for its AC-130s and MC-130Hs. Achieving commonality avoids duplicating costs for system development, lowers unit production costs through larger quantity buys, and simplifies logistical support. According to USSOCOM officials, in selecting what to fund they had to determine which programs would maximize capability, including sustainability, while conserving resources. The USSOCOM officials said that these decisions were difficult because although some systems offer tremendous improvements in capabilities, they require significant commitment of resources. For instance, USSOCOM did not have sufficient resources to fund both the CAAP program and the ALQ-172(v)3 upgrade program to improve commonality and capability against radar-guided missiles. Additionally, AFSOC had planned to replace its ALE-40 flare and chaff dispensers with the newer programmable ALE-47 to improve protection against infrared-guided missiles. But, because of budget constraints, AFSOC will have to keep the ALE-40 on two of its C-130 model aircraft while the other models are upgraded to the ALE-47 configuration. Furthermore, in prioritizing resources for fiscal year 2000-2005, USSOCOM is accepting increased operational and sustainment risks for systems it does not anticipate being key in 2010 or beyond. Under this approach, USSOCOM is dividing AFSOC’s C-130s into so-called legacy and bridge aircraft. The older legacy aircraft will receive flight safety modifications but not all electronic warfare upgrades; newer bridge aircraft will receive both. As a result, the legacy aircraft will become less common over time with the newer bridge aircraft, even as they become more vulnerable to threats and more difficult to maintain. Because the legacy aircraft are planned to remain in service for 12 more years, according to AFSOC officials, AFSOC will have to operate and maintain more types of electronic warfare systems for the foreseeable future. Since AFSOC’s electronic warfare acquisition strategy was adopted, the Air Force has decided to fund a $4.3-billion Air Force-wide modernization program covering all C-130s, including the special operations fleet. This avionics modernization program shares many common elements with the USSOCOM CAAP program. CAAP includes $247 million of MFP-11 funds for upgrades/systems to address AFSOC’s C-130 aircraft situational awareness and passive detection problems. Consistent with the provisions of title 10, the MOA requires that the Air Force, rather than USSOCOM, fund common items.
Therefore, the overlap between the two programs creates an opportunity for USSOCOM to direct its MFP-11 funding from CAAP to other solutions identified in AFSOC’s Technology Roadmap instead of paying for items that will be common to all Air Force C-130s. The Air Force is funding its avionics modernization program to lower C-130 ownership costs by increasing the commonality and survivability of the C-130 fleet. Because USSOCOM designed CAAP independently of and earlier than the Air Force modernization program, CAAP provides funding for a number of items that are now planned to be included in the Air Force program. These include (1) an open systems architecture, (2) upgraded displays and display generators, (3) a computer processor to integrate electronic warfare systems, (4) a digital map system, and (5) a replacement radar. USSOCOM and AFSOC officials note that these C-130 modernization program items have the potential to satisfy CAAP requirements with only minor modifications. For example, AFSOC’s estimates indicate that the cost to develop and procure a new low-power navigation radar with a terrain following/terrain avoidance feature as part of CAAP would be approximately $133 million. However, if the navigation radar selected for the avionics modernization program incorporates or has a growth path that will allow for the addition of a low-power terrain following/terrain avoidance feature to satisfy CAAP requirements, USSOCOM could avoid the significant development and procurement costs of the common items. According to Air Force, USSOCOM and AFSOC officials, coordinating these two programs would maximize C-130 commonality and could result in additional MFP-11 funding being available to meet other AFSOC electronic warfare deficiencies. Consistent with the provisions of title 10, and as provided for in the MOA between the Air Force and USSOCOM, the Air Force has included the AFSOC C-130 fleet in its draft planning documents to upgrade the C-130 avionics. However, while the MOA requires the Air Force to pay for common improvements incorporated into AFSOC’s C-130, the Air Force may not pay for special operations-peculiar requirements as part of the common upgrade. Nevertheless, the Air Force is not otherwise precluded from selecting systems that can satisfy both the Air Force’s and AFSOC’s requirements or that could be easily and/or inexpensively upgraded by AFSOC to meet special operations-peculiar requirements. AFSOC has a sound electronic warfare acquisition strategy based on a need to eliminate operational and supportability deficiencies while maximizing commonality within its C-130 fleet. Because of budget constraints, however, USSOCOM funding decisions are undercutting AFSOC’s efforts to implement its Technology Roadmap. An opportunity now exists, however, to help free up some MFP-11 funds to permit AFSOC to continue implementing its electronic warfare strategy as outlined in the Technology Roadmap. We recommend that the Secretary of Defense direct the Secretary of the Air Force, in procuring common items for its C-130 avionics modernization, to select items that, where feasible, address USSOCOM’s CAAP requirements or could be modified by USSOCOM to meet those requirements. We further recommend that the Secretary of Defense direct USSOCOM to use any resulting MFP-11 funds budgeted for but not spent on CAAP to address other electronic warfare deficiencies or to expand the CAAP program to other special operations forces aircraft.
In comments on a draft of this report, the Department of Defense (DOD) partially concurred with both recommendations. With regard to our first recommendation, DOD stated that Air Force and USSOCOM requirements need to be harmonized in order to take advantage of commonality and economies of scale. DOD agreed to require the Air Force and USSOCOM to document their common requirements. While this action is a step in the right direction, Office of the Secretary of Defense-level direction may be necessary to ensure that appropriate common items for USSOCOM are procured by the Air Force. As for our second recommendation, DOD officials stated that any MFP-11 funds originally budgeted for CAAP but saved through commonality should be used to address documented electronic warfare deficiencies or to deploy CAAP on other special operations forces aircraft. We agree with DOD that savings to the CAAP program by using common items should be used to address electronic warfare deficiencies or for expansion of the CAAP program to other special operations forces aircraft. We have reworded our recommendation to reflect that agreement. DOD’s comments are reprinted in appendix I. To assess the basis for AFSOC’s strategy for acquiring and upgrading electronic warfare equipment and determine the extent to which it would address deficiencies and maximize commonality, we analyzed AFSOC acquisition plans and studies and reviewed classified test reports and threat documentation. We also discussed AFSOC’s current electronic warfare systems and aircraft and AFSOC’s planned electronic warfare upgrades and system acquisition with officials at USSOCOM, MacDill Air Force Base, Florida; AFSOC, Hurlburt Field, Florida; and Air Force Headquarters, Washington, D.C. Additionally, we discussed AFSOC electronic warfare system supportability with officials responsible for the systems at USSOCOM; AFSOC; and Warner Robins Air Logistics Center, Georgia, and reviewed logistics records for pertinent systems. We accepted logistics records provided by AFSOC as accurate without further validation. To identify alternative sources of funding to implement AFSOC’s strategy, we examined legislation establishing and affecting USSOCOM and memoranda of agreement between USSOCOM and the Air Force regarding research, development, acquisition, and sustainment programs. We discussed relevant memoranda of agreement with USSOCOM, AFSOC, and Air Force officials. Furthermore, we reviewed planning documents and discussed the planned Air Force C-130 avionics modernization program with Air Force officials at Air Force Headquarters and the Air Mobility Command, Scott Air Force Base, Illinois. We conducted our work from October 1997 through July 1998 in accordance with generally accepted government auditing standards. We will send copies of this report to interested congressional committees; the Secretaries of Defense and the Air Force; the Assistant Secretary of Defense, Office of Special Operations and Low-Intensity Conflict; the Commander, U.S. Special Operations Command; the Director, Office of Management and Budget; and other interested parties. Please contact me at (202) 512-4841 if you or your staff have any questions. Major contributors to this assignment were Tana Davis, Charles Ward, and John Warren. The Air Force Special Operations Command (AFSOC) uses specially modified and equipped variants of the C-130 Hercules aircraft to conduct and support special operations missions worldwide. Following are descriptions of the C-130 models.
Mission: The AC-130H is a gunship with primary missions of close-air support, air interdiction, and armed reconnaissance. Additional missions include perimeter and point defense, escort, landing, drop and extraction zone support, forward air control, limited command and control, and combat search and rescue. Special equipment/features: These heavily armed aircraft incorporate side-firing weapons integrated with sophisticated sensor, navigation, and fire control systems to provide precision firepower or area saturation during extended periods, at night, and in adverse weather. The sensor suite consists of a low-light level television sensor and an infrared sensor. Radar and electronic sensors also give the gunship a method of positively identifying friendly ground forces and delivering ordnance effectively during adverse weather conditions. Navigational devices include an inertial navigation system and global positioning system. Mission: The AC-130U’s primary missions are nighttime, close-air support for special operations and conventional ground forces; air interdiction; armed reconnaissance; air base, perimeter, and point defense; land, water, and heliborne troop escort; drop, landing, and extraction zone support; forward air control; limited airborne command and control; and combat search and rescue support. Special equipment/features: The AC-130U has one 25-millimeter Gatling gun, one 40-millimeter cannon, and one 105-millimeter cannon for armament and is the newest addition to AFSOC’s fleet. This heavily armed aircraft incorporates side-firing weapons integrated with sophisticated sensor, navigation, and fire control systems to provide firepower or area saturation at night and in adverse weather. The sensor suite consists of an all light level television system and an infrared detection set. A multi-mode strike radar provides extreme long-range target detection and identification. The fire control system offers a dual target attack capability, whereby two targets up to 1 kilometer apart can be simultaneously engaged by two different sensors, using two different guns. Navigational devices include the inertial navigation system and global positioning system. The aircraft is pressurized, enabling it to fly at higher altitudes and allowing for greater range than the AC-130H. The AC-130U is also refuelable. Defensive systems include a countermeasures dispensing system that releases chaff and flares to counter radar-guided and infrared-guided anti-aircraft missiles. Also, infrared heat shields mounted underneath the engines disperse and hide engine heat sources from infrared-guided anti-aircraft missiles. Command: Air National Guard Mission: EC-130E Commando Solo, the Air Force’s only airborne radio and television broadcast mission, is assigned to the 193rd Special Operations Wing, the only Air National Guard unit assigned to AFSOC. Commando Solo conducts psychological operations and civil affairs broadcasts. The EC-130E flies during either day or night scenarios and is air refuelable. Commando Solo provides an airborne broadcast platform for virtually any contingency, including state or national disasters or other emergencies. Secondary missions include command and control communications countermeasures and limited intelligence gathering. Special equipment/features: Highly specialized modifications include enhanced navigation systems, self-protection equipment, and the capability to broadcast color television on a multitude of worldwide standards.
Commands: AFSOC, Air Force Reserve, and Air Education and Training Command Quantity: 14 Combat Talon Is, 24 Combat Talon IIs Mission: The mission of the Combat Talon I/II is to provide global, day, night, and adverse weather capability to airdrop and airland personnel and equipment in support of U.S. and allied special operations forces. The MC-130E also has a deep penetrating helicopter refueling role during special operations missions. Special equipment/features: These aircraft are equipped with in-flight refueling equipment, terrain-following/terrain-avoidance radar, an inertial and global positioning satellite navigation system, and a high-speed aerial delivery system. The special navigation and aerial delivery systems are used to locate small drop zones and deliver people or equipment with greater accuracy and at higher speeds than possible with a standard C-130. The aircraft is able to penetrate hostile airspace at low altitudes and crews are specially trained in night and adverse weather operations. Commands: Air Force Special Operations Command, Air Education and Training Command, and Air Force Reserve Mission: The MC-130P Combat Shadow flies clandestine or low visibility, low-level missions into politically sensitive or hostile territory to provide air refueling for special operations helicopters. The MC-130P primarily flies its single- or multi-ship missions at night to reduce detection and intercept by airborne threats. Secondary mission capabilities include airdrop of small special operations teams, small bundles, and rubber raiding craft; night-vision goggle takeoffs and landings; and tactical airborne radar approaches. Special equipment/features: When modifications are complete in fiscal year 1999, all MC-130P aircraft will feature improved navigation, communications, threat detection, and countermeasures systems. When fully modified, the Combat Shadow will have a fully integrated inertial navigation and global positioning system, and night-vision goggle-compatible interior and exterior lighting. It will also have a forward-looking infrared radar, missile and radar warning receivers, chaff and flare dispensers, and a night-vision goggle-compatible heads-up display. In addition, it will have satellite and data burst communications, as well as in-flight refueling capability. The Combat Shadow can fly in the day against a reduced threat; however, crews normally fly night, low-level, air refueling and formation operations using night-vision goggles. | Pursuant to a congressional request, GAO reviewed the U.S.
Special Operations Command's (USSOCOM) acquisition strategy for aircraft electronic warfare systems, focusing on the: (1) fixed-wing C-130 aircraft operated by USSOCOM's Air Force Special Operations Command (AFSOC); (2) soundness of AFSOC's electronic warfare acquisition strategy; and (3) extent to which AFSOC is correcting deficiencies and maximizing commonality in its electronic warfare systems. GAO noted that: (1) AFSOC's electronic warfare acquisition strategy is sound because it is based on eliminating operational and supportability deficiencies confirmed by an Air Force study, test reports, and maintenance records; (2) this evidence indicates that AFSOC's current electronic warfare systems are unable to defeat many current threat systems and have supportability problems; (3) AFSOC's acquisition strategy is to procure a mix of new systems and upgrades for older ones while maximizing commonality within its fleet of C-130s; (4) amidst budget constraints, USSOCOM is funding only portions of AFSOC's acquisition strategy due to other higher budget priorities, thereby hampering AFSOC's efforts to correct deficiencies and maximize commonality in electronic warfare systems; (5) for example, although USSOCOM is funding an AFSOC effort to make C-130 aircraft less susceptible to passive detection, enhance aircrews' situational awareness, and increase commonality, it has rejected other requests to fund effectiveness and commonality improvements to systems dealing with radar- and infrared-guided missiles; (6) as a result, in the foreseeable future, deficiencies will continue, and AFSOC will have to operate and maintain older and upgraded electronic warfare systems concurrently; (7) an opportunity exists, however, to help AFSOC implement its electronic warfare acquisition strategy; (8) since AFSOC's acquisition strategy was adopted, the Air Force has decided to begin a $4.3 billion C-130 modernization program (C-130X program) for all C-130s; (9) some of the planned elements of this modernization are common with some of the elements of AFSOC's acquisition strategy that was to be funded by USSOCOM's Major Force Program-11 (MFP-11) funds; and (10) if, as required by the memoranda of agreement, the Air Force C-130 avionics modernization program funds these common elements, USSOCOM could redirect significant portions of its MFP-11 funding currently budgeted for AFSOC C-130 passive detection and situational awareness deficiencies to other unfunded portions of AFSOC's electronic warfare acquisition strategy. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
In response to concerns about the lack of a coordinated federal approach to disaster relief, President Carter established FEMA by Executive Order in 1979 to consolidate and coordinate emergency management functions in one location. In 2003, FEMA became a component of the Emergency Preparedness and Response (EP&R) Directorate in the newly created DHS. Much like its FEMA predecessor, EP&R's mission was to help the nation to prepare for, mitigate the effects of, respond to, and recover from disasters. While FEMA moved intact to DHS and most of its operations became part of the EP&R Directorate, some of its functions were moved to other organizations within DHS. In addition, functions that were formerly part of other agencies were incorporated into the new EP&R organization. After FEMA moved into DHS, it was reorganized numerous times. FEMA's preparedness functions were transferred over 2 years to other entities in DHS, reducing its mission responsibilities. However, recent legislation transferred many preparedness functions back to FEMA. Today, once again, FEMA's charge is to lead the nation's efforts to prepare for, protect against, respond to, recover from, and mitigate the risk of natural disasters, acts of terrorism, and other man-made disasters, including catastrophic incidents. The Robert T. Stafford Disaster Relief and Emergency Assistance Act (Stafford Act) establishes the process for states to request a presidential disaster declaration. The Stafford Act requires the governor of the affected state to request a declaration by the President. In this request the governor must affirm that the situation is of such severity and magnitude that effective response is beyond the capabilities of the state and the affected local governments and that federal assistance is necessary. Before a governor asks for disaster assistance, federal, state, and local officials normally conduct a joint preliminary damage assessment. FEMA is responsible for recommending to the President whether to declare a disaster and trigger the availability of funds as provided for in the Stafford Act. When an obviously severe or catastrophic event occurs, a disaster may be declared before the preliminary damage assessment is completed. In response to a governor's request, the President may declare that a major disaster or emergency exists. This declaration activates numerous assistance programs from FEMA and may also trigger programs operated by other federal agencies, such as the Departments of Agriculture, Labor, Health and Human Services, and Housing and Urban Development, as well as the Small Business Administration, to assist a state in its response and recovery efforts. FEMA can also issue task orders—called mission assignments—directing other federal agencies and DHS components, or "performing agencies," to perform work on its behalf to respond to a major disaster. The federal disaster assistance provided under a major disaster declaration has no overall dollar limit. However, each of FEMA's assistance programs has limits either in the form of federal-state cost share provisions or funding caps. FEMA provides assistance primarily through one or more of the following three grant programs: Public Assistance provides aid to state government agencies; local governments; Indian tribes, authorized tribal organizations, and Alaskan Native villages; and private nonprofit organizations or institutions that provide certain services otherwise performed by a government agency.
Assistance is provided for projects such as debris removal, emergency protective measures to preserve life and property, and repair and replacement of damaged structures, such as buildings, utilities, roads and bridges, recreational facilities, and water-control facilities (e.g., dikes and levees). Individual Assistance provides for the necessary expenses and serious needs of disaster victims that cannot be met through insurance or low-interest Small Business Administration loans. FEMA provides temporary housing assistance to individuals whose homes are unlivable because of a disaster. Other available services include unemployment compensation and crisis counseling to help relieve any grieving, stress, or mental health problems caused or aggravated by the disaster or its aftermath. FEMA can cover a percentage of the medical, dental, and funeral expenses that are incurred as a result of a disaster. The Hazard Mitigation Grant Program provides additional funding (7.5 to 15 percent of total federal aid for recovery from the disaster) to states and Indian tribal governments to assist communities in implementing long-term measures to help reduce the potential risk of future damages to facilities. Not all programs are activated for every disaster. The determination to activate a program is based on the needs identified during the joint preliminary damage assessment. For instance, some declarations may provide only Individual Assistance grants and others only Public Assistance grants. Hazard Mitigation grants, on the other hand, are available for most declarations. Once a federal disaster is declared, the President appoints a federal coordinating officer to make an appraisal of the types of relief needed, coordinate the administration of this relief, and assist citizens and public officials in obtaining assistance. In addition, the federal coordinating officer establishes a joint field office at or near the disaster site. This office is generally staffed with a crew made up of permanent, full-time FEMA employees; a cadre of temporary reserve staff, also referred to as disaster assistance employees; and the state's emergency management personnel. Public Law No. 110-28, the U.S. Troop Readiness, Veterans' Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007, directs us to review how FEMA develops its estimates of the funds needed to respond to any given disaster, as described in House Report No. 110-60. Accordingly, we addressed the following questions: (1) What is FEMA's process for developing and refining its cost estimates for any given disaster? (2) From 2000 through 2006, how close have cost estimates been to the actual costs for noncatastrophic natural disasters? (3) Given the findings from the first two questions and our relevant past work, what steps has FEMA taken to learn from past experience and improve its management of disaster-related resources and what other opportunities exist? To address the first question, we examined FEMA policies, regulations, and other documents that govern its estimation processes. We interviewed senior staff from FEMA's Office of the Chief Financial Officer, as well as headquarters and regional personnel responsible for FEMA's disaster assistance programs (Public Assistance, Individual Assistance, and the Hazard Mitigation Grant Program).
Although we looked at how the estimates from other federal, state, and local government and private nonprofit organizations feed into FEMA's process, we did not review the estimating processes of these entities. Also, we did not review whether FEMA implemented its cost estimation processes as described. To address the second question, we compared FEMA's cost estimates at various points in time (the initial estimate and those made 1, 2, 3, and 6 months and 1 year after the declaration) to actual costs to determine when estimates reasonably predicted actual costs. FEMA officials defined "reasonable" as within 10 percent of actual costs. Although the total number of disaster declarations from 2000 through 2006 was 363, we focused on noncatastrophic natural disasters. Two of the 363 disaster declarations were not natural—they were related to the terrorist attacks of 9/11—and another 14 were considered catastrophic. Of the remaining 347 disaster declarations, 83 (24 percent) had actual or close to actual costs—known as reconciled or closed, respectively—that could be compared to earlier estimates. None of these 83 disaster declarations occurred in 2005 or 2006. Although the analysis of these 83 disaster declarations is informative, it is not generalizable to all declarations because they do not represent the general population of disasters. Finally, to assess the reliability of FEMA's estimate data, we reviewed the data FEMA officials provided and discussed data quality control procedures with them. We determined that the data were sufficiently reliable for purposes of this report. To address the third question of how FEMA has improved its management of disaster-related resources and identify other opportunities for improvement, we reviewed available policies, procedures, and training materials for staff involved in developing disaster cost estimates or the management of disaster-related resources. In addition, we reviewed our earlier work that identified areas for improvement and discussed FEMA's related management issues with DHS's Deputy Inspector General for Disaster Assistance Oversight. We interviewed staff in FEMA's Office of the Chief Financial Officer and OMB to learn more about FEMA's planning for annual and supplemental requests for disaster-related resources. Finally, the work we did to address the first two questions provided valuable insights on other opportunities for FEMA to improve its management of disaster-related resources. Once a major disaster has been declared, FEMA staff deployed to the joint field office, along with state and local officials and other relevant parties (e.g., private nonprofit organizations and other federal agencies), develop and refine cost estimates for each type of assistance authorized in the disaster declaration. According to FEMA officials, these estimates build upon and refine those contained in the preliminary damage assessment. They said that the estimates contained in the preliminary damage assessment are "rough" and are used primarily to ensure that the damage is of such severity and magnitude that the state requires federal assistance. FEMA officials said that while the joint field office is open, FEMA program and financial management staff work on a continuing basis to refine these estimates. Staff provide these estimates to a disaster comptroller, who enters them into the Disaster Projection Report (DPR), which compiles and calculates the overall estimate.
The disaster comptroller reports the estimates (via the DPR) to both the responsible regional office and the Disaster Relief Fund Oversight Branch within FEMA's Office of the Chief Financial Officer. The first DPR is provided to these two entities within 1 week of the joint field office opening; updates are reported at least monthly or when large changes occur in the underlying estimates. However, regional office staff enter updated estimates into the Disaster Financial Status Report (DFSR)—FEMA's central database for disaster costs—only on a monthly basis. After the joint field office is closed, the responsible regional office updates estimates for the given disaster along with all others within its jurisdiction. Regional office program staff (i.e., staff in Public Assistance, Individual Assistance, and the Hazard Mitigation Grant Program) provide updated estimates for all ongoing declared disasters for monthly DFSR reporting. How this information is entered into the DFSR database varies by region; in some regional offices, program staff enter updated estimates for their programs' costs (e.g., Public Assistance) directly into DFSR, whereas in other regional offices this function is performed by financial management staff, who collect and enter updated disaster estimate data from the program staff. Figure 1 illustrates FEMA's disaster cost estimation process. FEMA's overall estimate for any given disaster may cover programmatic and administrative costs in up to five different categories, and the methods for developing these underlying estimates vary. The overall cost estimate for any given disaster could include projected costs for Public Assistance, Individual Assistance, and Hazard Mitigation grants, depending on what type of assistance was authorized in the disaster declaration. In addition, the overall estimate may also cover projected costs for mission assignments—FEMA-issued tasks to other federal agencies or components within DHS, known as performing agencies—as well as administrative costs associated with operating the joint field office and administering disaster assistance. Our review focused on FEMA's policies and procedures for developing these estimates, as described in related documents and by FEMA officials; we did not review whether these processes were implemented as described. Public Assistance officials said that initial estimates for their program are prepared by category of work and then refined for specific projects. Working with potential applicants following a disaster, program staff will develop overall estimates for Public Assistance costs for each category of emergency and permanent work, as authorized. Costs for Public Assistance are shared between the federal and state governments. The minimum federal share is 75 percent; the President can increase it to 90 percent when a disaster is so extraordinary that it meets or exceeds certain per capita disaster costs, and to 100 percent for emergency work in the initial days after the disaster, irrespective of the per capita cost.
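The cost-share rules just described reduce to a simple calculation. The following is a minimal sketch in Python; the function name and the $1 million project cost are hypothetical illustrations, not FEMA tooling or figures from our review:

    def split_cost(project_cost, federal_rate=0.75):
        """Split an eligible Public Assistance cost between the federal
        government and the state. The federal share is at least 75 percent
        and can be raised to 90 or 100 percent as described above."""
        if not 0.75 <= federal_rate <= 1.0:
            raise ValueError("federal share ranges from 75 to 100 percent")
        federal = project_cost * federal_rate
        return federal, project_cost - federal

    # Hypothetical $1 million project at the default 75 percent share.
    fed, state = split_cost(1_000_000)
    print(f"federal: ${fed:,.0f}, nonfederal: ${state:,.0f}")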
Later, the overall estimate is refined to reflect the estimates for individual projects. The Public Assistance program uses many methods to develop these estimates. Common methods include time-and-materials estimates and competitively bid contracts. Public Assistance officials told us that they rely heavily on the prior experience and historical knowledge that applicants (state government agencies, local governments, etc.) have of costs for similar projects. For small projects (those estimated to cost less than $59,700 in fiscal year 2007, adjusted annually), applicants can develop the estimates themselves—FEMA later validates their accuracy through a sample—or they can ask FEMA to develop the estimates. According to a senior Public Assistance official, most applicants choose the latter option. For large projects (estimated to cost more than $59,700 in fiscal year 2007, adjusted annually), Public Assistance staff are responsible for working with applicants to develop project worksheets, which include cost estimates. According to senior program officials, Individual Assistance cost estimates depend on individuals' needs. Using demographic, historical, and other data specific to the affected area, as well as a national average of costs, Individual Assistance staff project program costs. Depending on the type of Individual Assistance provided, estimates are refined as individuals register and qualify for certain types of assistance or as FEMA and the state negotiate and agree upon costs. For housing and other needs assistance—such as disaster-related medical, dental, and funeral costs—estimates are based on the number of registrations FEMA receives, the rate at which registrants are found eligible for assistance, and the type and amount of assistance for which they qualify. For fiscal year 2007, federal costs for housing assistance were limited to $28,600 per individual or household. This amount is adjusted annually. Other needs assistance is a cost-share program between the federal and state governments, with the federal share set at 75 percent of costs. Disaster unemployment assistance is provided to those unemployed because of the disaster and not otherwise covered by regular unemployment insurance programs. The amount provided is based on state law for unemployment insurance in the state where the disaster occurred. The state identifies any need for crisis counseling services, and FEMA works with the state mental health agency to develop that estimate. Individual Assistance officials also told us that although they set aside $5,000 for legal services, FEMA is rarely billed for these services. Hazard Mitigation Grant Program costs are formulaic and based on a sliding scale. If a grantee (state or Indian tribal government) has a standard mitigation plan, the amount FEMA provides to the grantee is a statutorily set percentage of the estimated total amount provided under the major assistance programs. This percentage ranges from 7.5 to 15 percent and is inversely related to the total; that is, when overall assistance estimates are higher, the percentage available for Hazard Mitigation grants decreases. Costs for Hazard Mitigation grants are shared among the federal government, grantees, and applicants (e.g., local governments), with a federal share of up to 75 percent of the grant estimate. FEMA calculates and provides an estimate of Hazard Mitigation funding to grantees 3, 6, and 12 months after a disaster declaration. The 6-month figure is a guaranteed minimum. At 12 months, FEMA "locks in" the amount of the 12-month estimate unless the 6-month minimum is greater.
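The sliding scale just described can be sketched as a tiered calculation. The 7.5 and 15 percent endpoints come from the text above; the dollar breakpoints between tiers below are assumptions made for illustration, not figures from this report:

    def hazard_mitigation_estimate(total_assistance):
        """Tiered sliding scale: the percentage available for Hazard
        Mitigation grants falls as total assistance rises (15 percent
        down to 7.5 percent). Breakpoints are assumed for illustration."""
        tiers = [(2_000_000_000, 0.15),    # first tier at 15 percent
                 (10_000_000_000, 0.10),   # middle tier at 10 percent
                 (float("inf"), 0.075)]    # remainder at 7.5 percent
        amount, floor = 0.0, 0.0
        for ceiling, rate in tiers:
            portion = min(total_assistance, ceiling) - floor
            if portion <= 0:
                break
            amount += portion * rate
            floor = ceiling
        return amount

    print(hazard_mitigation_estimate(500_000_000))  # 75000000.0 (15 percent)

Under a tiered structure like this, larger disasters yield a larger dollar amount but a smaller effective percentage, matching the inverse relationship described above.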
Cost estimates for mission assignments are developed jointly by FEMA staff and the performing agencies. Among the information included in a mission assignment are a description of work to be performed, a completion date for the work, an estimate of the dollar amount of the work to be performed, and authorizing signatures. Mission assignments may be issued for a variety of tasks, such as search and rescue missions or debris removal, depending on the performing agencies' areas of expertise. The signed mission assignment document provides the basis for obligating FEMA's funds. When federal agencies are tasked with directly providing emergency work and debris removal—known as direct federal assistance mission assignments—costs are shared in the same manner as Public Assistance grants. Estimates for FEMA's administrative costs are developed by financial management staff in the joint field office. These costs are based on several factors, including the number of staff deployed, salary costs, rent for office space, and travel expenses. Although estimates developed in the immediate aftermath of a major disaster are necessarily based on preliminary damage assessments, decision makers need accurate cost information in order to make informed budget choices. FEMA officials told us that by 3 months after a declaration, the overall estimate of costs related to any given noncatastrophic natural disaster is usually reasonable, that is, within 10 percent of actual costs. However, as figure 2 illustrates, our analysis of the 83 noncatastrophic natural disaster declarations with actual or close to actual costs shows that, on average, 3-month estimates were within 23 percent of actual costs, and the median difference was around 14 percent. Although the average (mean) difference did not fall within the 10 percent band until approximately 1 year, the median difference reached this band at 6 months. These results, however, cannot be generalized to disaster declarations for which all financial decisions have not been made, because we were able to compare estimates to actual costs for only about one-quarter of the noncatastrophic natural disasters declared from 2000 through 2006. From 2000 through 2006, there were 347 noncatastrophic natural disasters. As of June 30, 2007, 83 of these (approximately 24 percent) had actual or near actual costs to which we could compare estimates, as figure 3 illustrates. Fourteen disasters were "reconciled," meaning that all projects were completed and the FEMA-State Agreement was closed, and 69 disasters were "closed," meaning that financial decisions had been made but not all projects were completed. The rest of the disasters (264) were "programmatically open," meaning financial decisions were not completed, eligible work remains, and estimates are subject to change. According to FEMA officials, it takes 4 to 5 years to complete all work for an "average" disaster. Time frames for the underlying assistance programs vary. For example, according to a FEMA official, Individual Assistance takes approximately 18 months and Public Assistance 3 years to complete all work. Projects using Hazard Mitigation grants are expected to last 4 years, although they can be extended to 6 years.
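The mean-versus-median pattern described above, in which a few badly missed estimates pull the average above the 10 percent band while the median falls within it sooner, can be reproduced in a few lines. The five declarations below are invented for illustration; they are not from FEMA's data:

    from statistics import mean, median

    def pct_diffs(estimates, actuals):
        """Absolute percentage difference of each estimate from actual cost."""
        return [abs(e - a) / a * 100 for e, a in zip(estimates, actuals)]

    # Hypothetical 6-month estimates vs. actual costs, in $ millions.
    six_month = [11.0, 10.5, 6.6, 140.0, 30.0]
    actual    = [10.0, 10.0, 6.0,  70.0, 29.0]

    diffs = pct_diffs(six_month, actual)
    print(f"mean: {mean(diffs):.1f}%, median: {median(diffs):.1f}%")
    # One badly overestimated declaration drives the mean to about 26
    # percent, while the median sits at the edge of the 10 percent band.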
Accurate data permit decision makers to learn from previous experience—both in terms of estimating likely costs to the federal government and in managing disaster assistance programs. However, the way FEMA records disaster information, specifically the way in which it codes the disaster that occurred, inhibits rather than facilitates this learning process. The combination of a single-code limit to describe disasters, inconsistent coding of disasters with similar descriptions, and overlapping codes means that the data are not easily used to inform estimates and other analyses. Such issues mean that we could not compare estimated and actual costs by type of disaster. Moreover, they limit FEMA's ability to learn from past disasters. Every disaster declaration is coded with an incident type to identify the nature of the disaster (e.g., earthquake or wildfire). As shown in table 1, there are 27 different incident codes in the DFSR database. We found problems with these data. First, the coding of incident type did not always match the description of the disaster. For example, 31 declarations are coded as tsunamis, but many of these are described—and should be coded—as something else. Second, each disaster declaration can be coded with only one incident type even though most descriptions list multiple types of incidents. We found declarations with similar descriptions coded differently—FEMA has no guidance on how to select the incident type code to be used from among the types of damage. For example, a number of declarations are described as "severe storms and flooding" or "severe storms, flooding, and tornadoes," but sometimes these were coded as flooding, other times as severe storms, and still other times as tornadoes. Any coding system should be designed for the purpose it must serve. From the standpoint of identifying the cause of damage (e.g., water or wind), many of the 27 incident codes track weather events but do not necessarily capture or elaborate on the type of information relevant to FEMA's mission of providing disaster assistance. Moreover, they are not all mutually exclusive, and thus some codes could be consolidated or eliminated. For example, coastal storms (C), hurricanes (H), and typhoons (J) might all be seen as describing similar events and are therefore candidates for consolidation. FEMA officials identified several ways in which FEMA takes past experience into account and uses historical data to inform its cost estimation processes for any given disaster. For example, Individual Assistance officials told us that they use demographic data (such as population size and average household income) and a national average of program costs to predict average costs for expected applicants. Furthermore, based on past experience, Individual Assistance officials adjust cost estimates for different points in time during the 60-day registration period. Individuals with greater need tend to apply within the first 30 days of the registration period, according to Individual Assistance officials. This is usually followed by a lull in registrations, then an increase in registrations prior to the close of the registration period. The Public Assistance program has compiled a list of average costs for materials and equipment, which is adjusted for geographic area. As noted earlier, the Public Assistance program also relies heavily on the past experience and historical knowledge of its applicants for the costs of similar projects. Staff within FEMA's Office of the Chief Financial Officer also contribute to FEMA's learning from past disasters. For example, in collecting and compiling estimates at the joint field office, the disaster comptroller may question certain estimated costs based on his or her past experience with similar disasters. Similarly, once these estimates are reported to the Disaster Relief Fund Oversight Branch, staff there will review the DPR and, based on their knowledge of and experience with past disasters, may question certain estimates and compare them to similar past disasters.
Office of the Chief Financial Officer staff also have worked with others throughout FEMA to develop a model to predict costs for category 3 or higher hurricanes prior to and during landfall. Among other types of data, the model uses historical costs from comparable hurricanes to predict costs. Although the model is finished, it has not been fully tested; no category 3 or higher hurricanes have made landfall in the United States since it was developed. FEMA has taken several steps to improve its management of disaster-related resources. In the past few years, FEMA has undertaken efforts to professionalize and expand the responsibilities of its disaster comptroller cadre. For example, FEMA has developed and updated credentialing plans since 2002 in an attempt to ensure that comptrollers are properly trained. The agency has also combined the Disaster Comptroller and Finance/Administration Section Chief positions into one to better manage financial activities at the joint field office. The Office of the Chief Financial Officer introduced the DPR—developed by the Disaster Relief Fund Oversight Branch—as a tool for comptrollers to standardize the formulation and reporting of disaster cost projections. At the time of our review, FEMA was converting six disaster comptrollers from temporary to permanent positions. Officials told us that they plan to place two comptrollers in headquarters to assist with operations in the Office of the Chief Financial Officer, and four in regional offices to provide a "CFO presence" and to have experienced comptrollers on hand to assist with disasters. FEMA has also taken steps to better prepare for disasters. According to FEMA officials, the agency is focusing on "leaning forward"—ensuring that it is in a state of readiness prior to, during, and immediately following a disaster. For example, FEMA officials told us that they pre-position supplies in an attempt to get needed supplies out more quickly during and after a disaster. Similarly, FEMA has negotiated and entered into a number of contingency contracts in an attempt to begin work sooner after a disaster occurs and to potentially save money in the future, since costs are prenegotiated. According to FEMA officials, each disaster is unique, and because of this, FEMA "starts from scratch" in developing estimates for each disaster. Although each disaster may be unique, we believe that commonalities exist that would allow FEMA to better predict some costs, and we have identified a number of opportunities to further its learning and management of resources. FEMA officials told us that a number of factors can lead to changes in FEMA's disaster cost estimates, some of which are beyond its control. For example, the President may amend the disaster declaration to authorize other types of assistance, revise the federal portion of the cost share for Public Assistance, or cover the addition of more counties. Also, hidden damage might be discovered, which would increase cost estimates. Fluctuations in estimates also may occur with events such as the determination of insurance coverage for individuals and public structures or higher-than-estimated bids to complete large projects (Public Assistance). Changes in state or local government housing assistance strategies can also drive changes in costs. However, the fact that these factors are beyond FEMA's control does not mean FEMA has no way to improve its estimates. FEMA could conduct sensitivity analyses to understand the marginal effects of different cost drivers, such as the addition of counties to a declaration, revisions to the cost share, or the determination of insurance coverage, and to provide a range for the uncertainty created by these factors. We recently reported that, as a best practice, sensitivity analysis should be used in all cost estimates because all estimates have some uncertainty. Using its experiences from prior disasters, FEMA could analyze the underlying causes of changes in estimates. This could help FEMA develop and provide to policymakers an earlier and more realistic range around its point estimate.
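A one-at-a-time sensitivity analysis of the kind described above can be sketched simply. The toy cost model, driver names, and baseline values below are invented for illustration; FEMA's actual drivers and baselines would come from its own estimate data:

    def estimate(counties, federal_share, insurance_offset):
        """Toy overall-cost model: per-county base cost times the federal
        share, less expected insurance recoveries. Illustrative only."""
        base_per_county = 5_000_000
        return counties * base_per_county * federal_share - insurance_offset

    baseline = dict(counties=10, federal_share=0.75, insurance_offset=4_000_000)
    base = estimate(**baseline)

    # Vary one driver at a time and report its marginal effect on the estimate.
    for driver, low, high in [("counties", 8, 14),
                              ("federal_share", 0.75, 0.90),
                              ("insurance_offset", 2_000_000, 8_000_000)]:
        lo = estimate(**{**baseline, driver: low})
        hi = estimate(**{**baseline, driver: high})
        print(f"{driver}: ${lo:,.0f} to ${hi:,.0f} (baseline ${base:,.0f})")

Reporting each driver's range alongside the point estimate in this way would give policymakers the earlier, more realistic band around the estimate discussed above.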
In addition, there are other areas where FEMA has greater control. FEMA could review the effect its own processes have on fluctuations in its disaster cost estimates and take actions to better mitigate these factors. For example, FEMA officials told us that mission assignments are generally overestimated, but these estimates are not corrected until the performing agencies bill FEMA. We previously reported that when FEMA tasks another federal agency with a mission assignment, FEMA records the entire amount up front as an obligation, but does not adjust this amount until it has received the bill from the performing agency, reviewed it, and recorded the expenditure in its accounting system. The performing agency might not bill FEMA until months after it actually performs the work. If, upon reviewing supporting reimbursement documentation, FEMA officials determine that some amounts are incorrect or unsupported, FEMA may retrieve or "charge back" the moneys from the agencies. In these instances, agencies may also take additional time to gather and provide additional supporting documentation. We made several recommendations aimed at improving FEMA's mission assignment process, and FEMA officials told us that they are reviewing the management of mission assignments. One official posited that overestimates of mission assignments could have caused the overall estimates to take longer than expected to reach the 10 percent band FEMA officials defined as a reasonable predictor of actual costs. If a review of the mission assignment process shows this to be the case, FEMA should take steps—such as working with performing agencies to develop more realistic mission assignment estimates up front and ensuring that these agencies provide FEMA with bills supported by proper documentation in a timely manner—to improve this process and lessen its effect on the overall estimates. If, however, the overestimation of mission assignments is not driving these changes, FEMA should focus on identifying what is and take appropriate actions to mitigate it. Another area that could warrant review is the determination of eligible costs for Public Assistance. For example, after Public Assistance projects are completed, FEMA sometimes adjusts costs during reconciliation to disallow ineligible costs or determine that other costs are eligible. Focusing on this issue earlier in the process might lead to a more accurate determination of costs eligible for reimbursement and so improve projections. FEMA could also expand its efforts to better consider past experience in developing estimates for new disasters. For example, in tracking incident types, FEMA could improve both the accuracy and the usefulness of the data for its analytic and predictive purposes. A review and revision of incident type codes to reflect the cause(s) of damage would tie the data and coding to their purposes.
This would permit comparisons among similar disasters to better inform and enhance both cost estimates and decision making. Also, FEMA could ensure that for past declarations in the DFSR database, as well as for future declarations, incident codes match the related descriptions and are consistently entered. This effort could be aided by revising the DFSR database to allow for multiple incident types for each declaration to better reflect what occurred. Other opportunities may also exist for the assistance programs. For example, in predicting costs for the Individual Assistance program, the usefulness of a national average should be examined. The substitution or addition of more geographically specific indicators might better predict applicant costs. In some ways, FEMA recognizes the value of using past experience to inform current estimates. For example, it draws upon the experience of its disaster comptrollers and staff in the Disaster Relief Fund Oversight Branch to question estimated costs. In addition, the aforementioned model to predict hurricane costs shows that FEMA recognizes that similar disasters may lead to similar costs, which can be analyzed and applied to better predict costs. According to FEMA officials, they are considering expanding the model to predict costs from other potentially catastrophic disasters, such as earthquakes. In the same vein, we believe that FEMA could expand upon this effort to better predict costs for other types of disasters, particularly those that are noncatastrophic and recur more frequently. FEMA's opportunities to learn from past experience, especially from its disaster cost data, could be hampered because some costs are no longer distributed to individual disaster declarations. FEMA officials told us that they use a "surge account" to support federal mobilization, deployment, and preliminary damage assessment activities prior to a disaster declaration. FEMA records subsequent costs by declaration. In the past, these surge account costs were distributed on a proportional basis to each disaster declared in the year—so the data for the 83 disaster declarations we were able to review do include these costs. However, FEMA no longer does this. FEMA officials told us that they determined that there was no obvious benefit to distributing surge account costs to subsequent declarations, especially in potential hurricane events that might result in multiple declarations. We note that costs in the surge account have increased significantly in recent years. For fiscal years 2000 through 2003, annual obligations in the surge account were less than $20 million each year; after 2004 they increased to over $100 million each year, according to FEMA data as of June 30, 2007. In fact, by that date, surge account costs for fiscal year 2007—three-quarters of the way through the fiscal year—had already reached $350 million. No longer distributing these costs to disasters poses an analytical challenge for FEMA's learning, because costs for current and future disasters are not comparable to those of past disasters.
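Distributing surge account costs proportionally, as FEMA formerly did, is straightforward to express. A minimal sketch with hypothetical figures; the declaration names and amounts are invented:

    def distribute_surge(surge_total, declaration_costs):
        """Allocate a year's surge account costs to that year's declarations
        in proportion to each declaration's own recorded costs."""
        total = sum(declaration_costs.values())
        return {name: cost + surge_total * cost / total
                for name, cost in declaration_costs.items()}

    # Hypothetical year: $18 million in surge costs, three declarations.
    costs = {"DR-A": 40_000_000, "DR-B": 50_000_000, "DR-C": 10_000_000}
    for name, adjusted in distribute_surge(18_000_000, costs).items():
        print(name, f"${adjusted:,.0f}")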
To improve data reliability, FEMA could also develop standard operating procedures and training for staff entering and maintaining disaster estimate data in the DFSR database. In a recent review of FEMA's day-to-day operations, we found that it does not have a coordinated or strategic approach to training and development programs. Further, FEMA officials described succession planning as nonexistent, and several cited it as the agency's weakest link. We have previously reported that succession planning—a process by which organizations identify, develop, and select their people to ensure an ongoing supply of successors who are the right people, with the right skills, at the right time for leadership and other key positions—is especially important for organizations that are undergoing change. Like the rest of the government, FEMA faces the possibility of losing a significant percentage of staff—especially at the managerial and leadership levels—to retirement. About a third of FEMA's Senior Executive Service and GS-15 leaders were eligible to retire in fiscal year 2005, and Office of Personnel Management data project that this percentage will increase to over half by the end of fiscal year 2010. Since FEMA relies heavily on the experience of its staff, such a loss could significantly affect its operations. Furthermore, according to FEMA officials with whom we met, there are no standard operating procedures or training courses for staff who are involved in entering and maintaining disaster cost estimate data in the DFSR database that would help mitigate this loss of knowledge and ensure consistency among staff in regional offices and in headquarters. Standard operating procedures also might reduce the coding errors described earlier. FEMA may be able to improve its management of disaster-related resources by reviewing the reasons why "older" disaster declarations remain open and taking action to close and reconcile them if possible. By finalizing decisions about how much funding is actually needed to complete work for these open declarations, FEMA will be better able to target its remaining resources. FEMA officials told us that it takes 4 to 5 years to obligate all funding related to an average disaster declaration, but we found the average life cycle to be longer—a majority of the noncatastrophic natural disasters declared from 2000 through 2002 (5 to 7 years old) are still open (see table 2). We previously reported that, in November 1997, FEMA's Director chartered three teams of Office of Financial Management staff—referred to as closeout teams—to assist FEMA regional staff and state emergency management personnel in closing out funding activities for all past disasters. Their primary goal was to eliminate remaining costs for these disasters by obligating or recovering funds. We found that these teams were effective in doing so. According to FEMA officials, the closeout teams no longer formally exist because they had successfully closed out funding activities for past disasters. FEMA now relies on regional offices to perform this function, and several use teams similar to the closeout teams to undertake this work. Given its mission, FEMA tends to focus much of its resources on disaster response and recovery. For example, as we previously reported, all FEMA employees are expected to be on call during disaster response, and no FEMA personnel are exclusively assigned to its day-to-day operations. Indeed, FEMA officials have said that what FEMA staff label "nondisaster" programs are maintained on an ad hoc basis when permanent staff are deployed, and the agency does not have provisions for continuing programs when program managers are called to response duties. Without an understanding of who holds a mission-critical position for day-to-day operations and what minimum level of staffing is necessary even during disaster response, business continuity and support for the disaster-relief mission are put at increased risk.
FEMA staff's strong sense of mission is no substitute for a plan and strategies of action. It is likely, therefore, that the tasks necessary to close disasters become subordinated to responding to new disasters. This contributes to a situation in which disaster declarations remain open for a number of years. However, closing and reconciling declarations is not merely a bookkeeping exercise. Given the multiple claims on federal resources, it is important to provide decision makers with the best information possible about current and pending claims on those resources. FEMA's annual budget requests and appropriations for disaster relief are understated because they exclude certain costs. Currently, annual budget estimates are based on a 5-year historical average of obligations, excluding costs associated with catastrophic disaster declarations (i.e., those greater than $500 million). This average—which serves as a proxy for an estimate of resources that will be needed for the upcoming year—is presumed to capture all projected costs, not only from future disasters but also from those previously declared. However, as demonstrated by FEMA's receipt of supplemental appropriations in years when no catastrophic disasters occurred, it does not do so. Excluding certain costs associated with previously declared catastrophic disasters results in an underestimation of annual disaster relief costs for two reasons. First, because FEMA finances disaster relief activities from only one account—regardless of the severity of the disaster—the 5-year average as currently calculated is not sufficient to cover known costs from past catastrophic disasters. Second, from fiscal years 2000 through 2006, catastrophic disasters occurred in 4 out of 7 years, calling into question the presumed infrequency of such events. Excluding costs from catastrophic disasters in annual funding estimates prevents decision makers from receiving a comprehensive view of overall funding claims and trade-offs. This is particularly important given the tight resource constraints facing our nation. Therefore, annual budget requests for disaster relief may be improved by including known costs from previous disasters and some costs associated with catastrophic disasters.
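The budget proxy described above, a 5-year average of obligations that excludes catastrophic declarations, is easy to mimic, and mimicking it makes the understatement visible. The per-declaration obligation figures below are invented for illustration:

    from statistics import mean

    CATASTROPHIC = 500_000_000  # declarations above this are excluded

    def budget_proxy(obligations_by_year):
        """5-year average of annual obligations, excluding obligations
        for catastrophic declarations, as described above."""
        totals = [sum(o for o in year if o <= CATASTROPHIC)
                  for year in obligations_by_year[-5:]]
        return mean(totals)

    # Five hypothetical years of per-declaration obligations (dollars).
    history = [[300e6, 120e6], [250e6, 900e6], [400e6], [150e6, 2_500e6], [350e6]]
    print(f"proxy request: ${budget_proxy(history):,.0f}")  # $314,000,000
    # The excluded catastrophic declarations ($900 million and $2.5 billion)
    # must still be paid from the same account, hence supplemental requests.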
Funding for natural disasters is not the only area where a reexamination of the distribution between funding through regular appropriations and funding through supplemental appropriations might be in order. In our work on funding the Global War on Terrorism (GWOT), we also noted that the line between what is funded through regular, annual appropriations and supplemental appropriations has become blurred. The Department of Defense's GWOT funding guidance has resulted in billions of dollars being added for what DOD calls the "longer war against terror," making it difficult to distinguish between base costs and the incremental costs to support specific contingency operations. Given FEMA's mission to lead the nation in mitigating, responding to, and recovering from major domestic disasters, many individuals as well as state and local governments rely on the disaster assistance it provides. The cost estimates FEMA develops in response to a disaster have an effect not only on the assistance provided to those affected by the disaster but also on federal decision makers, as supplemental appropriations will likely be needed. As such, it is imperative for FEMA to develop accurate cost estimates in a timely manner to inform decision making, enhance trade-off decisions, and increase the transparency of these federal commitments. We were able to identify ways in which FEMA has learned from past disasters; however, a number of opportunities exist for FEMA to continue this learning and to improve its cost estimation process. For example, FEMA could better ensure that incident codes are useful and accurate. In addition, a number of factors can lead to revisions in its estimates, but FEMA can mitigate these factors by conducting sensitivity analyses and reviewing its estimation processes to identify where improvements could be made. To further facilitate learning, FEMA needs to better ensure that it has timely and accurate data from past disasters, and this report suggests several ways in which FEMA could do so. FEMA can also explore refining its learning, for example, by using geographically specific averages to complement the national averages it uses. In addition, to facilitate analysis by making current disaster cost data comparable to past disaster data, FEMA could resume distribution of surge account costs to disasters, as appropriate. FEMA has also taken steps to improve its management of disaster-related resources, such as "leaning forward," professionalizing and expanding the responsibilities of its disaster comptroller cadre, and developing a model to predict costs for category 3 or higher hurricanes prior to and during landfall. However, additional steps would further improve how FEMA manages its resources. For example, to improve data reliability, FEMA could develop standard operating procedures and training for staff entering and maintaining disaster estimate data in the DFSR database. Also, although FEMA officials told us that it takes 4 to 5 years to finish all work related to an average disaster, our analysis of FEMA's data shows that a majority of disasters declared from 2000 through 2002 were still open—that is, they had work ongoing—during our review. In the past, FEMA formed teams to review these "older" disasters, which resulted in the elimination of remaining costs for these disasters by obligating or recovering funds. A similar effort today could have the same effect. Also, FEMA relies on supplemental appropriations to cover both the costs of providing assistance for new disasters and known costs from past disasters. To promote transparency in the budget process and to better inform decision making, annual budget requests for disaster relief should cover these known costs, including some from catastrophic disasters. To better mitigate the effect of factors both beyond and within FEMA's control and so improve the information provided to decision makers; to better inform future estimates, including the ability to incorporate past experience in those estimates; and to improve the management of FEMA's disaster-related resources, the Secretary of Homeland Security should instruct FEMA's Administrator to take the following nine actions:
- Conduct sensitivity analyses to determine the marginal effects of key cost drivers to provide a range for the uncertainty created by factors beyond FEMA's control.
- Review the effect FEMA's own processes have on fluctuations in disaster cost estimates and take steps to limit the impact they have on estimates.
- Review the reasons why it takes 6 months or more for estimates to reasonably predict actual costs and focus on improving them to shorten the time frame.
- Undertake efforts—similar to those FEMA used to develop its model to predict hurricane costs—to better predict costs for other types of disasters, informed by historical costs and other data.
- Evaluate the benefits of using geographically specific averages in addition to national averages to better project Individual Assistance costs.
- Resume the distribution of surge account costs to individual disasters, as appropriate, to make cost data from past, current, and future disasters comparable.
- Review and revise incident coding types to ensure that they are accurate and useful for learning from past experience. At a minimum, incident codes should match the descriptions, be consistently entered, and reflect what occurred, which may require permitting multiple incident types for each declaration.
- Develop training and standard operating procedures for all staff entering incident type and cost information into the DFSR database.
- Review reasons why "older" disasters remain open and take action to close and reconcile them if possible.

To promote a more informed debate about budget priorities and trade-offs, the Secretary of Homeland Security also should instruct FEMA's Administrator to work with OMB and Congress to provide more complete information on known costs from prior disasters and costs associated with catastrophic disasters as part of the annual budget request. We requested comments on a draft of this report from the Secretary of Homeland Security. In its comments, DHS generally agreed with eight of our ten recommendations. It stated it would take our recommendation to conduct sensitivity analyses to determine the marginal effects of key cost drivers under advisement, and it did not comment on our recommendation that it work with OMB and Congress to provide more complete information as a part of its annual budget requests. FEMA also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Director of OMB, the Secretary of Homeland Security, the Administrator of FEMA, and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9142 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who contributed to this report are acknowledged in appendix I. In addition to the individual listed above, Carol Henn, Assistant Director; Benjamin T. Licht; and Kisha Clark made significant contributions to this report. Pedro Briones, John Brooks, Stanley Czerwinski, Peter Del Toro, Carlos Diz, Gabrielle Fagan, Chelsa Gurkin, Elizabeth Hosler, William Jenkins, Casey Keplinger, Tracey King, Latesha Love, James McTigue, Jr., Tiffany Mostert, John Vocino, Katherine Hudson Walker, Greg Wilmoth, and Robert Yetvin also made key contributions to this report. | Public Law No. 110-28 directed GAO to review how the Federal Emergency Management Agency (FEMA) develops its disaster cost estimates. Accordingly, GAO addressed the following questions: (1) What is FEMA's process for developing and refining its cost estimates for any given disaster? (2) From 2000 through 2006, how close have cost estimates been to the actual costs for noncatastrophic (i.e., federal costs under $500 million) natural disasters?
(3) What steps has FEMA taken to learn from past experience and improve its management of disaster-related resources, and what other opportunities exist? To accomplish this, GAO reviewed relevant FEMA documents and interviewed key officials. GAO also obtained and analyzed disaster cost data and determined that they were sufficiently reliable for the purposes of this review. After a disaster is declared, FEMA staff deployed to a joint field office work with state and local government officials and other relevant parties to develop and refine cost estimates. The overall estimate comprises individual estimates for FEMA's assistance programs plus any related tasks assigned to other federal agencies (mission assignments) and FEMA administrative costs. The methods used to develop these estimates differ depending on program requirements and include, in some cases, the use of historical knowledge. FEMA officials told GAO that cost estimates are updated on a continuing basis. Decision makers need accurate information to make informed choices and learn from past experience. FEMA officials stated that by 3 months after a declaration, estimates are usually within 10 percent of actual costs, which they defined as reasonable. GAO's analysis showed that decision makers did not have cost information within this 10 percent band until 6 months after the disaster declaration. These results cannot be generalized, since this comparison could be made only for the 83 (24 percent) noncatastrophic natural disaster declarations for which final financial decisions had been made. Disaster coding issues also hamper FEMA's ability to learn from past experience. For example, in several instances the code for the incident type and the description of the disaster declaration did not match. Officials described several ways in which FEMA has learned from past disasters and improved its management of disaster-related resources. For example, FEMA uses a national average to predict costs for expected applicants for Individual Assistance. FEMA has also taken several actions to professionalize and expand the responsibilities of its disaster comptrollers. Nonetheless, FEMA could further learn from past experience by conducting sensitivity analyses to identify the marginal effect of various factors on fluctuations in its estimates. FEMA could improve its management of disaster-related resources by developing standard procedures for staff involved in entering and updating cost estimate data in its database.
You are an expert at summarizing long articles. Proceed to summarize the following text:
Social Security is largely a pay-as-you-go, defined benefit system under which taxes collected from current workers are used to pay the benefits of current retirees. Social Security is financed primarily by a payroll tax of 12.4 percent on annual wages up to $72,600 (in 1999), split evenly between employees and employers or paid in full by the self-employed. Since 1940, Social Security has been providing benefits to the nation's eligible retired workers, their dependents, and the survivors of deceased workers. In addition, since 1956, the program has provided income protection for disabled workers and their eligible dependents. Today, the Social Security program covers over 145 million working Americans—96 percent of the workforce. It is the foundation of the nation's retirement income system and an important provider of disability benefits. Currently, 44 million individuals receive Social Security benefits. Social Security retirement benefits are calculated using the worker's 35 years of highest earnings in covered employment. However, benefits are not strictly proportional to earnings. A progressive benefit formula is applied so that low-wage workers receive, as a monthly benefit, a larger percentage of their average monthly lifetime earnings than do high-wage workers. The benefit is adjusted for the age at which the worker first begins to draw benefits. To receive Social Security retirement benefits, employees must be at least 62 years old and have earned a certain number of credits for work covered by Social Security. Retirees are eligible for full benefits at age 65—the normal retirement age—and those retiring at 62 currently receive 80 percent of their full benefit. The age for full benefit eligibility is scheduled to increase incrementally to age 67 for those born between 1938 and 1960. Since 1975, benefits have been automatically adjusted each year to compensate for increases in the cost of living. Additionally, benefits are adjusted when recipients aged 62 through 69 have earnings above a certain threshold. Individuals may be eligible for Social Security benefits on the basis of their spouses' earnings. For example, a married person who does not qualify for Social Security retirement benefits may be eligible for a spousal benefit that is worth up to 50 percent of the primary earner's retirement benefit. Spouses who do qualify for their own Social Security retirement benefit but whose retirement benefit is worth less than 50 percent of the primary earner's benefit are eligible for both their own retirement and certain spousal benefits. Specifically, benefits for such dually eligible individuals are calculated so that their own retirement benefit plus a spousal supplement can together equal up to 50 percent of the primary earner's benefit. In practice, spouses receive either the value of their individual benefit or the value equivalent to 50 percent of the primary earner's benefit, whichever is higher.
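The dual-entitlement rule just described reduces to taking the larger of two amounts. A minimal sketch; the monthly dollar amounts are hypothetical:

    def spouse_monthly_benefit(own_benefit, primary_benefit):
        """A spouse receives the higher of his or her own retirement
        benefit or 50 percent of the primary earner's benefit."""
        return max(own_benefit, 0.5 * primary_benefit)

    # Hypothetical couple: the primary earner's benefit is $1,200 a month.
    print(spouse_monthly_benefit(own_benefit=400, primary_benefit=1200))  # 600.0
    print(spouse_monthly_benefit(own_benefit=700, primary_benefit=1200))  # 700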
Under Social Security, retirement benefits can be paid to ex-spouses if they were married to the worker for at least 10 years, are not remarried, and are at least 62 years old. A deceased worker's survivors are eligible for benefits if the survivor is a spouse at least 60 years old or a disabled spouse at least age 50, a parent caring for an eligible child under age 16, an eligible child under the age of 18, or a dependent parent. Ex-spouses are eligible for survivor benefits if they do not remarry before age 60 and meet other qualifications for surviving spouses. Social Security's Disability Insurance program provides cash benefits to disabled workers and their dependents. To qualify for disability benefits, the worker must be unable to engage in any substantial gainful activity because of a medically determinable physical or mental impairment that is expected to result in death or to last for a continuous period of at least 12 months. Disability benefits are available after a 5-month waiting period beginning at the onset of the disability. To be eligible, the employee, if over age 30, must have worked in Social Security-covered employment for at least 20 of the 40 quarters immediately preceding the disability's onset. If under 31, the disabled worker must have had earnings in at least one-half the quarters worked after he or she reached age 21, with a minimum of six quarters. Disabled worker benefits are automatically converted to retired worker benefits when the disabled worker reaches the normal retirement age. Workers for state and local governments were originally excluded from Social Security because many were already covered by a state or local government pension plan, and the federal government's constitutional right to impose a tax on state and local governments was uncertain. In the 1950s, the Social Security Act was amended to allow state and local governments the option of covering their employees. Those state and local governments that elected coverage were allowed to opt out later if certain conditions were met. However, the Congress amended the Social Security Act in 1983 to prohibit state and local governments from opting out of the program once they joined. In 1981, Galveston County officials, citing expected future increases in the Social Security tax rate and wage base, notified the Social Security Administration of the County's intent to withdraw from the program. County employees voted two to one in support of withdrawal. The neighboring counties of Brazoria and Matagorda followed Galveston's lead and also withdrew from Social Security. Rather than simply eliminate the Social Security payroll taxes and the coverage provided, the three Texas counties continued to collect these amounts to create the Alternate Plans—deferred compensation plans that include retirement, disability, and survivor insurance benefits. The Alternate Plans are designed to replicate many of the features found in the Social Security program. Creators of the Alternate Plans, however, wanted to replace Social Security's benefits package with one that offered potentially higher returns, while still providing a high level of benefit security. Today, about 3,000 employees of the three Texas counties are covered by these plans. While Social Security and the Alternate Plans offer a similar package of benefits, there are a number of important differences between the two approaches in the calculation of benefits and scope of coverage. The Alternate Plans' benefits are advance funded, while Social Security's promised benefits are not. As a defined benefit plan, Social Security calculates benefits by formula, whereas the Alternate Plans—defined contribution plans—determine benefits largely by the accumulations in the beneficiary's retirement account. Retirement benefits under the Alternate Plans are thus based on contributions and investment returns and are not adjusted to provide proportionately larger benefits to low-income workers, as is the case with Social Security.
Survivor benefits under the Alternate Plans are not lifetime benefits, but a one-time life insurance payment made to the worker's designated beneficiaries, along with the worker's account balance; there are no additional benefits for dependents. Disability benefits under the Alternate Plans are equal to 60 percent of the employee's wage at the time of disability, up to a maximum benefit of $5,000 a month. Disabled workers are also eligible to receive the value of their account at the time they become disabled. At that time, a new retirement account is established, into which the disability insurer pays amounts equivalent to the combined employee and employer contributions being made at the time of disability. The Alternate Plans' disability benefits make no allowances for dependents. Social Security's disability benefits are based on a modified benefit formula and include additional benefits for the dependents of disabled workers. As is the case with Social Security, the Alternate Plans are funded by payroll taxes collected from employers and employees. Galveston County employees, for example, contribute 6.13 percent of their gross earnings toward their deferred compensation account. The County contributes 7.785 percent of a worker's gross compensation. Total contributions to the Alternate Plans in Galveston County today are 13.915 percent—somewhat higher than the 12.4 percent contributed by employers and employees to Social Security. A portion of the County's contribution goes to pay for the employee's life and disability insurance premiums (4.178 percent in 1998). The Alternate Plans were designed to give the employees a guaranteed nominal annual return on their contributions of at least 4 percent. Therefore, the Alternate Plans' managers contracted with an insurance company to purchase an annuity that guaranteed the minimum return. The portfolios holding the plans' contributions are invested only in fixed-rate marketable securities (government bonds, corporate bonds, and preferred stocks) and bank certificates of deposit. Rates of return on the portfolios for all of the Alternate Plans have ranged widely over the years but currently are around 6 percent in nominal terms. Social Security, on the other hand, is mostly a pay-as-you-go program, but when revenues exceed outlays, as they currently do, the surplus is credited to the Trust Funds in the form of nonmarketable Treasury securities. The funds earn interest but, unlike the Alternate Plans, the interest income does not influence the amount of Social Security benefits paid to retirees. Because virtually all work in the United States is covered by Social Security, benefits are fully portable if the worker changes jobs. If participants in the Alternate Plans leave county employment, they can either take their account balances with them or leave the account, which will continue to earn the portfolio's rate of return. The Alternate Plans are tax-deferred plans, so if the employee elects to cash out the account, he or she must pay income taxes on the proceeds, although there is no penalty involved. All distributions of deferred compensation accounts are taxed at the employee's marginal tax rate at the time of distribution. Social Security income is not taxed as long as an individual's income does not exceed certain thresholds.
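The Galveston County contribution arithmetic reported above is a one-line check. A minimal sketch using the rates given above:

    # Galveston County Alternate Plan contribution rates, percent of pay.
    employee_rate = 6.13
    county_rate = 7.785
    insurance_premiums = 4.178  # portion of the county share that paid
                                # life and disability premiums in 1998

    # 13.915% total, vs. 12.4% for Social Security.
    print(f"{employee_rate + county_rate:.3f}%")
    # 3.607% of pay, the county share left for the retirement account.
    print(f"{county_rate - insurance_premiums:.3f}%")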
There are also a number of significant differences in how retirement income benefits are determined under the two approaches. Because Social Security is a defined benefit plan, it calculates benefits by formula. The Alternate Plans are defined contribution plans, so benefits are directly related to the capital accumulations in the beneficiaries' retirement accounts. In addition, retirement benefits are available at younger ages under the Alternate Plans than under Social Security. Moreover, unlike Social Security retirement benefits, which are based on the 35 years of highest covered earnings and weighted to replace a larger share of a low earner's wages, retirement income benefits under the Alternate Plans depend solely on contributions to the individual's account and the earnings on the plans' investments. Also, Social Security provides a separate spousal benefit, and the Alternate Plans do not. (See table 1.) The Alternate Plans do not ensure the preservation of retirement benefits. While Social Security provides retirees with a lifetime annuity, the Alternate Plans allow retiring employees to choose between taking a lump sum payout or purchasing an annuity with one of several different payout options. If the worker chooses to receive income from the plan over his or her remaining lifetime or over that of a spouse, he or she must purchase either an individual annuity or a "joint and survivor" annuity. But annuities generally are not inflation-protected as they are under Social Security, so the purchasing power of this retirement income could decline over time. To protect against future inflation, the retiree can arrange to schedule the annuity payouts so that they are higher in the later years, but this means accepting smaller benefits in the early years. In 1998, the plan for Brazoria County was modified to allow employees to place their share of the contributions in equity funds. It is too soon to judge how this change would affect our comparisons. Unlike Social Security, the Alternate Plans' survivor benefits can be a one-time payment or a series of payments over a finite period of time. Under the Alternate Plans, if an employee dies, the surviving beneficiary (anyone named as beneficiary by the worker) receives the value of the employee's account at the time of death, plus a life insurance benefit. The life insurance benefit for a beneficiary of an employee who dies while under age 70 is 300 percent of the deceased worker's salary, with a minimum benefit of $50,000 and a maximum of $150,000. Beneficiaries of employees who die between the ages of 70 and 74 are entitled to insurance proceeds up to 200 percent of the covered employee's annual earnings, with a minimum of $33,330 and a maximum of $100,000. Beneficiaries of employees who die at age 75 or older are entitled to 130 percent of the employee's annual earnings, with a minimum of $21,665 and a maximum of $65,000. These lump sum payments can be used by the beneficiary to purchase a lifetime annuity. Social Security survivor benefits, on the other hand, are based on the worker's benefit at the time of death, adjusted for the number of beneficiaries. The benefit is paid as an annuity, not a lump sum distribution, and is paid generally to surviving spouses who are 60 years old or older or who have dependent children. (See table 2.)
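The age-banded life insurance payout described above is a multiple of salary clamped between a floor and a ceiling. A minimal sketch using the tiers reported above; the sample ages and salaries are hypothetical:

    def life_insurance_payout(age_at_death, annual_salary):
        """Alternate Plans' life insurance benefit: a percentage of salary
        clamped to a minimum and maximum that depend on age at death."""
        if age_at_death < 70:
            rate, low, high = 3.00, 50_000, 150_000
        elif age_at_death < 75:
            rate, low, high = 2.00, 33_330, 100_000
        else:
            rate, low, high = 1.30, 21_665, 65_000
        return min(max(rate * annual_salary, low), high)

    print(life_insurance_payout(55, 30_000))  # 90000.0
    print(life_insurance_payout(72, 80_000))  # 100000 (capped at the maximum)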
After an initial 180-day waiting period, the Alternate Plans’ disability insurance pays 60 percent of an individual’s base salary until age 65 or until the individual returns to work. The amounts provided by Social Security’s disability insurance vary, but they follow the same formula as retirement benefits. Of the first $505 of monthly earnings, 90 percent is replaced, but the replacement rate falls off rapidly after that. Only 32 percent of monthly earnings between $505 and $3,043 are replaced, and only 15 percent of earnings above $3,043 are replaced. Few disabled workers who do not have dependents, therefore, would receive as much as 60 percent of their wage or salary. A totally disabled employee can receive a minimum monthly benefit payment of $100 under the Alternate Plans, up to a maximum benefit of $5,000 a month. At the time the worker ceases employment because of a disability, he or she can purchase an annuity with the account balance. A separate account is then set up by the disability insurance provider, and the insurer pays an amount into that account equivalent to the employer and employee contributions at the time the employee stopped working. Payments are made until the employee reaches age 65. Unlike Social Security, the Alternate Plans provide no dependent benefits. (See table 3.) Our comparisons of retirement, survivor, and disability benefits under the two approaches show that outcomes generally depend on individual circumstances and conditions. For example, certain features of Social Security, such as the tilt in the benefit formula and the allowance for spousal benefits, are important factors in providing larger benefits than the Alternate Plans for low-wage earners, single-earner couples, and those with dependents. The Social Security benefit formula replaces a larger share of the wages of a low earner than of a high earner. As a result, low-wage earners with relatively shorter careers in the three Texas counties would have received larger initial benefits from Social Security than from the Alternate Plans. Social Security benefits also are adjusted for inflation so their purchasing power is stable over time. Thus, the longer the period of retirement, the more likely it is that Social Security will provide higher monthly benefits than a fixed annuity purchased with the proceeds from the Alternate Plans. The Social Security spousal benefit also can significantly raise the retirement incomes of couples when one partner had little or no earnings. Under the Alternate Plans, workers have assets that they may pass on to designated beneficiaries. Conversely, a worker has no assets from Social Security to bequeath to his or her heirs. Finally, the fact that Social Security takes into account the number of dependents in calculating survivor and disability benefits means that individual family circumstances will be important in determining whether Social Security or the Alternate Plans provides larger benefits. Our simulations comparing the retirement benefits for employees of the three Texas counties show that the benefits from Social Security and the Alternate Plans depend on the employee’s earnings, the number of years in the program, the presence of a spouse, the length of time in retirement, and the year the worker retires. In general, low-wage workers and, to a lesser extent, median-wage earners would fare better under Social Security. 
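The progressive tilt that drives this result is the bend-point formula quoted above, and it is easy to verify directly. A minimal sketch, assuming the $505 and $3,043 bend points cited in the text; actual Social Security computations apply these factors to average indexed monthly earnings, and the bend points change each year.

```python
# Social Security benefit formula with the bend points quoted in the text:
# 90% of the first $505 of monthly earnings, 32% of earnings between
# $505 and $3,043, and 15% of earnings above $3,043.

def ss_monthly_benefit(monthly_earnings: float) -> float:
    benefit = 0.90 * min(monthly_earnings, 505)
    if monthly_earnings > 505:
        benefit += 0.32 * (min(monthly_earnings, 3043) - 505)
    if monthly_earnings > 3043:
        benefit += 0.15 * (monthly_earnings - 3043)
    return benefit

# Replacement rates fall quickly as earnings rise, which is why few workers
# without dependents would reach the Alternate Plans' 60 percent replacement.
for earnings in (1000, 2000, 3000, 5000):
    b = ss_monthly_benefit(earnings)
    print(f"${earnings:>5} earnings -> ${b:,.2f} benefit ({b / earnings:.0%} replaced)")
```

The falling replacement rates visible in the output are the "tilt" at work: the formula replaces a much larger share of a low earner's wage, which is why the comparisons that follow favor Social Security at the low end of the wage distribution and the Alternate Plans at the high end.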
High-wage earners can generally expect to do better under the Alternate Plans, although if spousal benefits are included, even high-wage workers could eventually receive higher retirement income benefits from Social Security. Low-wage workers retiring at 65 today after a 35-year career in county employment would receive a higher initial monthly benefit under Social Security. If the family is eligible for a Social Security spousal benefit or if a joint and survivor annuity is elected under the Alternate Plans, the gap increases. Social Security provides a spousal benefit of up to 50 percent of a worker's benefit (for a spouse with a record of little or no earnings of his or her own), while the Alternate Plans' spousal coverage through the purchase of a joint and survivor annuity actually reduces the couple's monthly income. Low-wage earners with 35-year careers retiring in 2016 are projected to receive roughly the same individual initial monthly benefits under Social Security and the Alternate Plans. The Alternate Plans' benefits are relatively better for those retiring in the future than for those retiring today because earnings on the plans' investments were relatively low in the '60s and early '70s as compared with the '90s. (See table 4.) Nevertheless, because Social Security benefits are indexed for inflation, they would grow larger over time and would eventually exceed the retirement benefits from the Alternate Plans, as the latter remained constant. (See figs. 1 and 2.) The picture for low-wage workers changes somewhat if a 45-year career is assumed. Because all contributions and the investment earnings on them determine the size of an Alternate Plan account, more years of earnings in jobs covered by Alternate Plans lead to higher account balances and, therefore, higher monthly benefits from the annuity. Social Security benefits, by contrast, are based on a formula using the 35 years of highest earnings from all jobs. With the longer work history, initial individual benefits for low-wage workers would be higher under the Alternate Plans than under Social Security, although, if spousal benefits and joint and survivor annuities were considered, Social Security benefits would again be larger. (See table 5.) Even the higher individual benefits would not be permanent, as indexation would ultimately close the gap. For low-wage workers retiring in 2008, however, the gap would be closed in 4 years, while for those retiring in 2026, the gap would be closed in 9 years. Thereafter, Social Security monthly benefits would be higher. (See figs. 3 and 4.) For median-wage earners, Social Security initial benefits are higher when spousal benefits are included. Individual benefits—even when they start out lower—eventually catch up to the Alternate Plans' benefits, but it does take longer for median-wage earners than for low-wage earners. For median-wage earners retiring in 2008 after a 45-year career with the county, Social Security benefits would catch up to Alternate Plan benefits after 7 years of retirement, assuming Social Security benefits were indexed at 3.5 percent. For those with 45-year careers retiring in 2026, it would take about 13 years for Social Security individual retirement benefits to overtake those of the Alternate Plans. High-income workers, in general, would probably do better under the Alternate Plans, although consideration of spousal benefits or coverage also could lead to higher benefits under Social Security through indexation of benefits—at least for those with 35-year careers.
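The catch-up periods cited in this passage (4 and 9 years for low-wage workers, 7 and 13 years for median-wage earners) all come from one calculation: how many annual cost-of-living adjustments an indexed benefit needs before it overtakes a fixed annuity. Below is a minimal sketch of that calculation, using the 3.5 percent indexing assumption stated above; the benefit pair shown is the disability example quoted later in the report ($711 versus $1,086), since the retirement benefit pairs sit in tables not reproduced here.

```python
import math

# Years of cost-of-living adjustments needed for an indexed Social Security
# benefit to catch a fixed Alternate Plans annuity:
#   ss * (1 + cola)**n >= alt   =>   n >= log(alt / ss) / log(1 + cola)

def years_to_catch_up(ss_benefit: float, alt_benefit: float,
                      cola: float = 0.035) -> int:
    return math.ceil(math.log(alt_benefit / ss_benefit) / math.log(1 + cola))

# Disability example quoted later in the report: $711 a month under Social
# Security versus $1,086 under the Alternate Plans.
print(years_to_catch_up(711, 1086))   # -> 13, consistent with "a dozen years"
```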
We used 35- and 45-year work histories to approximate working careers. We recognize that many people have shorter or less continuous careers. For example, in 1993 the average 62-year-old woman spent only 25 years in the workforce, compared with 36 years for the average 62-year-old man. Both men and women leave the workforce temporarily for a variety of reasons, such as to return to school or to raise children. Fewer years and less continuity would influence the pattern of benefits under both plans. We simulated outcomes for workers who left the labor force for either 5 or 10 years early in their careers (at age 25). Under both Social Security and the Alternate Plans, retirement benefits were reduced. However, the reduction was larger under the Alternate Plans because the size of the accounts at retirement is sensitive to when the contributions are made. Monies not contributed early in the worker’s career lose the benefits from compounding, leading to a significantly lower account balance at retirement. Social Security benefits are also reduced, but because they are based on the earners’ 35 years of highest income and are not affected by compounding, the impact on retirement income is less. This simulation shows that the relative “superiority” of the two approaches depends on individual circumstances. These simulations are not meant to portray a “typical” worker, but rather to demonstrate the importance of particular factors in determining relative benefits from the two approaches. For example, currently only about 7 percent of Social Security benefits are spousal benefits, and that percentage is expected to decline over time as more women become eligible for benefits on the basis of their own earnings. It is also true that Social Security benefits are reduced on the death of the retired worker, while the joint and survivor annuity under the Alternate Plans could be structured to provide constant benefits. Nonetheless, for some county workers Social Security retirement benefits would probably have exceeded those available from the Alternate Plans. With respect to survivor benefits, our simulations indicated that, in cases in which the surviving spouse was left with two dependent children under age 16, benefits would usually be higher under Social Security because Social Security takes the number of dependents into account when computing the total family monthly benefit. For example, if a low-wage worker died at age 45, our simulations indicate a surviving spouse with two dependent children would receive $1,602 per month, while under the Alternate Plans, the family would receive only $831 per month on the basis of annuitizing lump sum benefits. (See table 6.) On the other hand, if there were no dependent children, the surviving spouse would not be eligible for survivor benefits under Social Security until age 60, whereas under the Alternate Plans, the surviving spouse would immediately be eligible to receive three times the worker’s salary plus any dollar amounts in the worker’s retirement income account. The Alternate Plans’ survivor benefits would also be higher in cases in which the worker died late in his or her career. The survivor of a low-wage worker who died at age 60 with no dependents would receive $1,013 per month under Social Security, whereas the survivor could receive a lifetime monthly benefit of $1,494 under the Alternate Plans if he or she chose to use the proceeds to buy an annuity. 
Again, in about a dozen years, increases in benefits due to cost-of-living adjustments would lead to larger monthly benefits under Social Security than under the Alternate Plans. In those cases in which the worker died before working enough quarters to qualify for Social Security benefits, the surviving spouse would not be eligible for survivor benefits. Under the Alternate Plans, however, the survivor is immediately eligible to receive three times the employee’s wage and any account accumulations regardless of how long the employee worked. Because the Alternate Plans replace 60 percent of a disabled worker’s wage or salary and because disabled workers can also annuitize their account balances at the time of disability, the Alternate Plans often provide substantially better disability benefits than Social Security. This is especially true when no dependents are involved. Indexation of Social Security benefits for inflation can eventually close the gap, but it could take over 20 years to do so. For example, a 26-year-old low-income worker with no dependents would receive $711 monthly under Social Security, but $1,086 from the Alternate Plans. It would take a dozen years for indexation (at 3.5 percent per year) to raise the Social Security initial benefit to that received under the Alternate Plans. For a high-income 26-year-old, it would take more than 25 years to close the gap. Although the Alternate Plans still provide a larger initial monthly benefit in all the cases we simulated, the differences were narrowed when dependents were involved. Nevertheless, for high earners, even those with dependents, the Alternate Plans provided larger benefits, and indexation would not close the gap for 15 to 20 years. (See table 7.) The type of disability a worker has also influences how he or she fares under the two systems. Benefits for workers with “mental or nervous disorders” are limited to 12 months under the Alternate Plans. Workers with such disabilities would receive higher benefits under Social Security if their condition lasted over 12 months because Social Security does not limit benefits on the basis of impairment. Given the inherent differences between the two systems, our results suggest that benefits primarily depend on individual circumstances. Social Security was designed, in part, to protect low earners and their families, and indeed low-wage earners generally would do better under Social Security. Moreover, while individual circumstances play a role, particular features of Social Security, such as the spousal benefit and automatic cost-of-living adjustments, often result in larger Social Security benefits to recipients than the benefits available under the Alternate Plans. In addition, when dependent children are involved, survivor benefits can be higher under Social Security. Because the Alternate Plans do not tilt benefits in favor of low-wage earners, they can provide better benefits for high-wage workers. In terms of disability benefits, the Alternate Plans generally provide higher initial monthly benefits, especially for high-income workers. It is important to keep the results of our analysis in perspective. Our results reflect the specific features and conditions of the Alternate Plans and should not be construed as an analysis of the potential for individual accounts in general. For example, in an effort to mirror the “safety” of Social Security, the Alternate Plans have followed a conservative investment strategy wherein investments in common stocks are avoided. 
As a result, the Alternate Plans’ investments have had low returns—especially relative to those from the equities markets. Also, our projections of future Social Security benefits assume the benefits available today will be available in the future. Social Security benefits in the future could certainly be less than those we simulate depending on the reforms that are implemented to address the system’s long-term shortfall. Finally, many of the proposals for individual accounts do not call for the complete replacement of Social Security but rather provide for a two-tier system that combines the safety net, social insurance aspect of Social Security with the promise of higher returns from individual accounts. Overall, our analysis suggests that several of Social Security’s features make an important difference to the relatively less well-off, to single-earner married couples, and to families with dependent children. How these features are treated in any changes to Social Security could have important implications for these groups. We shared a draft of this report with Social Security personnel familiar with the program’s benefit structure, outside retirement income specialists, and individuals responsible for administering the Alternate Plans. We received technical comments from several reviewers and incorporated the comments as appropriate. Administrators for the Alternate Plans also provided us with updated figures, which we used in calculating benefits. In addition, these administrators pointed out that we should use the annuitized values of the accounts at the time of the disability to calculate the Alternate Plans disability benefits. We incorporated those changes. The administrators also noted that they were in the process of introducing a number of changes to the Alternate Plans that would improve benefits. They told us that they were introducing an annuity that provided for a 2- to 3-percent annual adjustment to protect against inflation. The administrators also said they were in the process of adding new benefits for surviving spouses and dependent children. The spouse would receive a lifetime benefit of 30 percent of the deceased worker’s income, and dependent children would receive an additional 30 percent. How much these benefits would cost had not been determined, and it was not clear how they would affect our comparisons. Finally, the Alternate Plans administrators told us that, in their view, we should have used the average returns that the plans’ investments made in the past 17 years in projecting future returns. We disagree. Returns on fixed income portfolios have declined significantly since the 1980s, and forecasts of future returns on the assets in fixed income portfolios do not envision a return to those higher levels. The projections we employed were for an asset whose performance has closely mirrored the performance of the Alternate Plans’ investments. We believe that is a more accurate estimate. We are providing copies of this report to the Commissioner of Social Security, officials of organizations and state and local governments that we worked with, and other interested congressional parties. Copies will also be made available to others upon request. Please contact me at (202) 512-7215 if you have any questions about this report. Other major contributors to this report are listed in appendix III. 
In order to compare potential retirement, survivor, and disability benefits under the Alternate Plans and Social Security, we simulated the work histories of county employees who had relatively low, median, or high earnings. We classified employees as low earners if they were at the 10th percentile of the wage distribution and as high earners if they were at the 90th percentile. Median earners are in the middle of the distribution (half earn more and half earn less). We used the 1998 wage distribution of Galveston County employees nearing retirement to determine low, median, and high earnings: $17,124, $25,596, and $51,263, respectively. Nationally, low, median, and high earnings were $13,000, $31,200, and $75,000. Low earners in Galveston County, therefore, had wages nearly one-third higher than those in the 10th percentile nationally, but the wages of high earners in Galveston were about 68 percent of those of the 90th percentile earners nationally; median wages of the Galveston County workers were 82 percent of the national median. In order to calculate Alternate Plans and Social Security benefits for our illustrative employees, we created earnings and contributions histories for these workers. We used a model of earnings growth over workers' careers to reflect the fact that wage income does not grow linearly over a working lifetime, but rather that wage growth resembles an "s"-shaped curve. This curve reflects more rapid growth during the years when an individual's productivity grows fastest and slower wage increases as the worker nears the end of his or her career. We used the earnings for workers nearing retirement in 1998 to project the nominal wages of such workers back to the beginning of their careers. We also used the model to project earnings experiences for those retiring in the future. We projected earnings at age 65 for workers retiring in the future in the three income classes by taking the wage distribution for 1998 earnings and inflating the earnings by nominal wage growth to the future retirement years, using the Social Security Trustees' Intermediate Cost Assumptions (see app. II). We applied the model to create the wage histories. The coefficients used to create the earnings histories were developed and reported in T. Hungerford and G. Solon, "Sheepskin Effects in the Returns to Education," Review of Economics and Statistics, 69(1), 1987. While actual earnings histories may have greater diversity over time than the wages produced by this model, this methodology allowed us to provide illustrative earnings patterns. To compute expected retirement, survivor, and disability benefits under the Alternate Plans, we calculated the expected balances in the accounts at the time of retirement, death, or onset of disability. Account balances depend on earnings, contributions, and investment income. We used the actual contribution rates that were in effect when the Alternate Plans began (Social Security payroll tax rates at the time) and adjusted the rates as they changed over time. Similarly, in projecting what the contributions would have been if the Alternate Plans had been in effect before 1981, we used the corresponding Social Security payroll tax rate. The contribution rates for the three counties differ only slightly, so we used the Galveston County contribution rates in generating our estimates. For future years, we assumed that current contribution rates would remain in effect.
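To illustrate the kind of simulation this appendix describes, the sketch below strings the pieces together: an "s"-shaped career wage profile, annual contributions accumulated at an assumed investment return, and conversion of the final balance into a monthly benefit. Everything in it is an illustrative assumption; the logistic functional form and its coefficients are stand-ins for the Hungerford and Solon estimates GAO actually used, the 6 percent return is only the approximate current nominal figure cited earlier, and real annuity pricing relies on the insurers' annuity factors rather than a textbook formula.

```python
import math

# Illustrative end-to-end simulation of an Alternate Plan account:
# (1) an "s"-shaped career wage profile, (2) contributions accumulated at a
# constant assumed return, (3) the final balance annuitized into a monthly
# benefit. All coefficients and rates are stand-ins, not GAO's actual inputs.

def wage_at(age: int, start: float = 14_000.0, peak: float = 26_000.0,
            midpoint: float = 38.0, steepness: float = 0.18) -> float:
    """Logistic ("s"-shaped) interpolation between a starting and a peak wage."""
    return start + (peak - start) / (1 + math.exp(-steepness * (age - midpoint)))

def account_at_65(contrib_rate: float = 0.09737, annual_return: float = 0.06,
                  start_age: int = 20) -> float:
    # contrib_rate: the 13.915% total less the 4.178% insurance-premium share
    balance = 0.0
    for age in range(start_age, 65):
        balance = (balance + wage_at(age) * contrib_rate) * (1 + annual_return)
    return balance

def monthly_annuity(balance: float, annual_rate: float = 0.06,
                    years: int = 20) -> float:
    """Level payment from the present-value-of-an-annuity formula."""
    i, n = annual_rate / 12, years * 12
    return balance * i / (1 - (1 + i) ** -n)

balance = account_at_65()
print(f"balance at 65:   ${balance:,.0f}")
print(f"monthly benefit: ${monthly_annuity(balance):,.2f} for 20 years")
```

The same loop also makes the compounding point from the body of the report concrete: skipping the contributions for ages 25 through 29, for example, cuts the final balance by more than those 5 years' share of total contributions, because the earliest dollars compound the longest.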
To arrive at the investment income, we obtained data on the interest rates earned on assets purchased by the Alternate Plans since 1981. To calculate the potential account balances for workers who entered county employment before 1981 or for future periods, we had to make some extrapolations. For the period 1963 to 1980, the funds' portfolio manager was able to provide us with the investment income on similar types of investment vehicles offered by the firm. In projecting future earnings, we found that the special Treasury securities issued to the Social Security Trust Funds were another fixed income asset whose earnings closely paralleled the experience of the Alternate Plans' portfolios. We used the Intermediate Assumptions' interest rate forecasts for the special Treasury securities developed for the Social Security Trustees 1998 Annual Report. To calculate Social Security benefits, we employed the Social Security Benefit Estimate Program for Personal Computers, known as the ANYPIA program, which is available on-line at www.ssa.gov. Finally, to calculate retirement and survivor benefits under the Alternate Plans, we calculated the monthly benefits that retirees or survivors would receive if they took their lump sum distributions and purchased either an individual life or a joint and survivor annuity. To estimate the monthly benefits, we obtained the annuity factors from the Alternate Plans' insurance and annuity providers. We also received annuity factors from the Social Security Administration to calculate the lifetime monthly retirement benefits. Our simulations made a number of simplifying assumptions. We do not represent the simulations we undertook to be "typical," but rather as illustrative of how workers and their families might fare under a range of circumstances. We assumed that individuals work continuously at one job for their entire working lives. We simulated 35-year and 45-year working lives and assumed that people retire at the normal Social Security retirement age. In reality, many individuals have very discontinuous work histories, work at many different places, and retire before the normal retirement age. Many people elect to take Social Security benefits when they first become eligible at age 62. We also assumed that Alternate Plan beneficiaries annuitized their lump sums, although currently very few elect life annuities. We made this assumption in order to put the two systems on an equal footing for benefit comparability. Appendix II presents the economic assumptions used in the simulations, including average annual labor force participation rates; the real gross domestic product (GDP) figures there represent the value of total output of goods and services expressed in 1992 dollars. Major contributors to this report, listed in appendix III, were Francis P. Mulvey, Assistant Director, (202) 512-3592; Hans Bredfeldt; James Lawson; Christy Bonstelle Muldoon; Barbara Smith; Ken Stockbridge; and Bill Williams.
| Pursuant to a congressional request, GAO provided information on three Texas counties' employee retirement plans, known as Alternate Plans, focusing on: (1) comparing the principal features and benefits of these plans with those of social security; and (2) simulating the retirement, survivor, and disability benefits that individuals in varying circumstances might receive under the Alternate Plans and under social security. GAO noted that: (1) while social security and the Alternate Plans offer retirement, disability, and survivor benefits to qualified workers, there are fundamental differences in the purpose and structure of the two approaches; (2) Social Security is a social insurance program designed to provide a basic level of retirement income to help retired workers, disabled workers, and their dependents and survivors stay out of poverty; (3) Social Security benefits are tilted to provide relatively higher benefits to low-wage earners, and the benefits are fully indexed to protect against inflation; (4) social security is a pay-as-you-go system that is projected to produce a negative cash flow in 2013 and become insolvent by 2032; (5) the Alternate Plans are advance funded plans; the contributions made by workers and their employers, which total 13.915 percent of workers' pay, and the earnings made on those invested contributions are used to fund retirement benefits; (6) the Alternate Plans' benefits are directly linked to contributions, so that retirement income is based on accumulated contributions and the earnings thereon; (7) at retirement, the worker can take the money in the account as a lump sum or select from a number of monthly payout options, including the purchase of a lifetime annuity; (8) GAO found that certain features of social security, such as the progressive benefit formula and the allowance for spousal benefits, are important factors in providing larger benefits than the Alternate Plans for low-wage earners, single-earner couples, and individuals with dependents; (9) many median-wage earners, while initially receiving higher benefits under the Alternate Plans, would also have received larger benefits under social security 4 and 12 years after retirement, because social security benefits are indexed for inflation; (10) the Alternate Plans provide larger benefits for higher-wage workers than social security would, but in some cases, such as when spousal benefits are involved, social security benefits could also exceed those of the Alternate Plans; (11) survivor benefits often would be greater under social security than under the Alternate Plans, especially when a worker died at a relatively young age and had dependent children; (12) with regard to disability benefits, all workers in GAO's simulations would receive higher initial benefits under the Alternate Plans; and (13) it is important to note that the Alternate Plans' performance is not necessarily indicative of how well a proposed system of individual accounts with social security might perform.
You are an expert at summarizing long articles. Proceed to summarize the following text:
While the overall goal of Title II under both HEA and NCLBA is to improve student achievement by improving the teacher workforce, some of the specific approaches differ. For example, a major focus of HEA provisions is on the training of prospective teachers (preservice training), while NCLBA provisions focus more on improving teacher quality in the classroom (in-service training) and hiring highly qualified teachers. Also, both laws use reporting mechanisms to increase accountability. However, HEA focuses more on institutions of higher education while NCLBA focuses on schools and school districts. Additionally, HEA focuses on expanding the teacher workforce by supporting recruitment from other professions. In addition, HEA and NCLBA Title II funds are distributed differently. HEA teacher quality funds are disbursed through three distinct types of grants: state, partnership, and recruitment grants. State grants are available for states to implement activities to improve teacher quality by enhancing teacher training efforts, while partnership grants support the collaborative efforts of teacher training programs and other eligible partners. Recruitment grants are available to states or partnerships for teacher recruitment activities. All three types of grants require a match from non-federal sources. For example, states receiving state grants must provide a matching amount in cash or in-kind support from non-federal sources equal to 50 percent of the amount of the federal grant. All three grants are one-time competitive grants; however, state and recruitment grants are for 3 years while partnership grants are for 5 years. HEA amendments in 1998 required that 45 percent of funds be distributed to state grants, 45 percent to partnership grants, and 10 percent to recruitment grants. As of April 2007, 52 of the 59 eligible entities (states, the District of Columbia, and 8 territories) had received state grants. Because the authorizing legislation specifically required that entities could only receive a state grant once, only seven would be eligible to receive future state grants. In our 2002 report, we suggested that if Congress decides to continue funding teacher quality grants in the upcoming reauthorization of HEA, it might want to clarify whether all 59 entities would be eligible for state grant funding under the reauthorization, or whether eligibility would be limited to only those states that have not previously received a state grant. We also suggested that if Congress decides to limit eligibility to entities that have not previously received a state grant, it may want to consider changing the 45 percent funding allocation for state grants. In a 2005 appropriations act, Congress waived the allocation requirement. In 2006, about 9 percent of funds were awarded for state grants, 59 percent for partnership grants, and 33 percent for recruitment. When Congress reauthorizes HEA, it may want to further clarify eligibility and allocation requirements for this program. NCLBA, funded at a much higher level than HEA, provides funds to states through annual formula grants. In 2006, Congress appropriated $2.89 billion under NCLBA and $59.9 million under HEA for teacher quality efforts. While federal funding for teacher initiatives was provided through two other programs prior to NCLBA, the act increased the level of funding to help states and districts implement the teacher qualification requirements.
States and districts generally receive NCLBA Title II funds based on the amount they received in 2001, the percentage of children residing in the state or district, and the number of those children in low-income families. After reserving up to 1 percent of the funds for administrative purposes, states pass 95 percent of the remaining funds to the districts and retain the rest to support state-level teacher initiatives and to support NCLBA partnerships between higher education institutions and high-need districts that work to provide professional development to teachers. While there is no formula in NCLBA for how districts are to allocate funds to specific schools, the act requires states to ensure that districts target funds to those schools with the highest number of teachers who are not highly qualified, schools with the largest class sizes, or schools that have not met academic performance requirements for 2 or more consecutive years. In addition, districts applying for Title II funds from their states are required to conduct a districtwide needs assessment to identify their teacher quality needs. NCLBA also allows districts to transfer these funds to most other major NCLBA programs, such as those under Title I, to meet their educational priorities. HEA provides grantees and NCLBA provides states and districts with the flexibility to use funds for a broad range of activities to improve teacher quality, including many activities that are similar under both acts. HEA funds can be used for, among other activities, teacher certification reform, professional development, and recruitment efforts. In addition, HEA partnership grantees must use their funds to implement reforms to hold teacher preparation programs accountable for the quality of teachers leaving the program. Similarly, acceptable uses of NCLBA funds include teacher certification activities, professional development in a variety of core academic subjects, recruitment, and retention initiatives. In addition, activities carried out under NCLBA partnership grants are required to coordinate with any activities funded by HEA. Table 1 compares activities under HEA and NCLBA. With the broad range of activities allowed under HEA and NCLBA, we found both similarities and differences in the activities undertaken. For example, districts chose to spend about one-half of their NCLBA Title II funds ($1.2 billion) in 2004-2005 on class-size reduction efforts, which is not an activity specified by HEA. We found that some districts focused their class-size reduction efforts on specific grades, depending on their needs. One district we visited focused its NCLBA-funded class-size reduction efforts on the eighth grade because the state already provided funding for reducing class size in other grades. However, while class-size reduction may contribute to teacher retention, it also increases the number of classrooms that need to be staffed, and we found that some districts had shifted funds away from class-size reduction to initiatives to improve teachers' subject matter knowledge and instructional skills. Similarly, Education's data showed that the percent of NCLBA district funds spent on class-size reduction had decreased since 2002-2003, when 57 percent of funds were used for this purpose. HEA and NCLBA both funded professional development and recruitment efforts, although the specific activities varied somewhat. For example, mentoring was the most common professional development activity among the HEA grantees we visited.
Of the 33 HEA grant sites we visited, 23 were providing mentoring activities for teachers. In addition, some grantees used their funds to establish a mentor training program to ensure that mentors had consistent guidance. One state used the grant to develop mentoring standards and to build the capacity of trainers to train teacher mentors within each district. Some districts used NCLBA Title II funds for mentoring activities as well. We also found that states and districts used NCLBA Title II funds to support other types of professional development activities. For example, two districts we visited spent their funds on math coaches who performed tasks such as working with teachers to develop lessons that reflect state academic standards and assisting them in using students' test data to identify and address students' academic needs. Additionally, states used a portion of NCLBA Title II funds they retained to support professional development for teachers in core academic subjects. In two states that we visited, officials reported that state initiatives specifically targeted teachers who had not met the subject matter competency requirements of NCLBA. These initiatives either offered teachers professional development in core academic subjects or reimbursed them for taking college courses in the subjects taught. Both HEA and NCLBA funds supported efforts to recruit teachers. Many HEA grantees we interviewed used their funds to fill teacher shortages in urban schools or to recruit new teachers from nontraditional sources—mid-career professionals, community college students, and middle- and high-school students. For example, one university recruited teacher candidates with undergraduate degrees to teach in a local school district with a critical need for teachers while they earned their master's degrees in education. The program offered tuition assistance, and in some cases, the district paid a full teacher salary, with the stipulation that teachers continue teaching in the local school district for 3 years after completing the program. HEA initiatives also included efforts to recruit mid-career professionals by offering an accelerated teacher training program for prospective teachers already in the workforce. Some grantees also used their funds to recruit teacher candidates at community colleges. For example, one of the largest teacher training institutions in one state has partnered with six community colleges around the state to offer training that was not previously available. Finally, other grantees targeted middle and high school students. For example, one district used its grant to recruit interns from 14 high-school career academies that focused on training their students for careers as teachers. Districts we visited used NCLBA Title II funds to provide bonuses to attract successful administrators, advertise open teaching positions, and attend recruitment events to identify qualified candidates. In addition, one district also used funds to expand alternative certification programs, which allowed qualified candidates to teach while they worked to meet requirements for certification. Finally, some states used HEA funds to reform certification requirements for teachers. Reforming certification or licensing requirements was included as an allowable activity under both HEA and NCLBA to ensure that teachers have the necessary teaching skills and academic content knowledge in the subject areas.
HEA grantees also reported using their funds to allow teacher training programs and colleges to collaborate with local school districts to reform the requirements for teacher candidates. For example, one grantee partnered with institutions of higher education and a partner school district to expose teacher candidates to urban schools by providing teacher preparation courses in public schools. Under both HEA and NCLBA, Education has provided assistance and guidance to recipients of these funds and is responsible for holding recipients accountable for the quality of their activities. In 1998, Education created a new office to administer HEA grants and provide assistance to grantees. While grantees told us that the technical assistance the office provided on application procedures was helpful, our previous work noted several areas in which Education could improve its assistance to HEA grantees, in part through better guidance. For example, we recommended that in order to effectively manage the grant program, Education further develop and maintain its system for regularly communicating program information, such as information on successful and unsuccessful practices. We noted that without knowledge of successful ways of enhancing the quality of teaching in the classroom, grantees might be wasting valuable resources by duplicating unsuccessful efforts. Since 2002, Education has made changes to improve communication with grantees and potential applicants. For example, the department presented workshops to potential applicants and updated and expanded its program Web site with information about program activities, grant abstracts, and other teacher quality resources. In addition, in its 2005 annual report on teacher quality, Education provided examples of projects undertaken to improve teacher quality and of how some of these efforts indicate improvement. Education also has provided assistance to states, districts, and schools using NCLBA Title II funds. The department offers professional development workshops and related materials that teachers can access online through Education's Web site. In addition, Education assisted states and districts by providing updated guidance. As we reported in 2005, officials from most states and districts we visited who used Education's Web site to access information on teacher programs or requirements told us that they were unaware of some of Education's teacher resources or had difficulty accessing those resources. We recommended that Education explore ways to make the Web-based information on teacher qualification requirements more accessible to users of its Web site. Education immediately took steps in response to the recommendation and reorganized information on its Web site related to the teacher qualification requirements. In addition to providing assistance and guidance, Education is responsible for evaluating the efforts of HEA and NCLBA recipients and for overseeing program implementation. Under HEA, Education is required to annually report on the quality of teacher training programs and the qualifications of current teachers. In 2002, we found that the information collected for this requirement did not allow Education to accurately report on the quality of HEA's teacher training programs and the qualifications of current teachers in each state.
In order to improve the data that states are collecting from institutions that receive HEA teacher quality grants, and all those that enroll students who receive federal student financial assistance and train teachers, we recommended that Education more clearly define key data terms so that states provide uniform information. Further, in 2004, the Office of Management and Budget (OMB) completed a Program Assessment Rating Tool (PART) assessment of this program and gave it a rating of "results not demonstrated," due to a lack of performance information and program management deficiencies. Education officials told us that they had aligned HEA's data collection system with NCLBA definitions of terms such as "highly qualified teacher." However, based on the PART assessment, the Administration proposed eliminating funding for HEA teacher quality grants in its proposed budgets for fiscal years 2006-2008, and redirecting the funds to other programs. Congress has continued to fund this program in fiscal years 2006 and 2007. Education has responded to our recommendations and issues raised in the PART assessment related to evaluating grantee activities and providing more guidance to grantees on the types of information needed to determine effectiveness. When the Congress amended HEA in 1998 to provide grants to states and partnerships, it required that Education evaluate the activities funded by the grants. In 2005, Education established performance measures for two of the teacher quality enhancement programs—state grants and partnership grants—and required grantees to provide these data in their annual performance plans submitted to Education. The performance measure for state grants is the percentage of prospective teachers who pass subject matter tests, while the measure for partnership grants is the percentage of participants who complete the program and meet the definition of being "highly qualified." In addition, in 2006, Education included information in letters to grantees on the types of information that it requires to assess the effectiveness of its teacher quality programs. For example, in its letters to state grantees, Education noted that when reporting on quantitative performance measures, grantees must show how their actual performance compared to the targets (e.g., benchmarks or goals) that were established in the approved grant application for each budget period. In addition, in May 2006, Education issued its final report on HEA's partnership grants, focusing on the 25 grantees of the 1999 cohort. The goal of the study was to learn about the collaborative activities taking place in partnerships. It was designed to examine approaches for preparing new and veteran teachers and to assess the sustainability of project activities after the grant ends. Among its findings, Education reported that partnerships encouraged and supported collaboration between institutions of higher education and schools to address teacher preparation needs. Under NCLBA, Education holds districts and schools accountable for improvements in student academic achievement, and holds states accountable for reporting on the qualifications of teachers. NCLBA set the end of the 2005-2006 school year as the deadline for teachers of core academic subjects, such as math and science, to be highly qualified. To meet these requirements, teachers must (1) have at least a bachelor's degree, (2) be certified to teach by their state, and (3) demonstrate subject matter competency in each core academic subject they teach.
Education collects state data on the percent of classes taught by highly qualified teachers and conducts site visits in part to determine whether states appropriately implemented highly qualified teacher provisions. In state reviews conducted as part of its oversight of NCLBA, Education identified several areas of concern related to states' implementation of teacher qualification requirements and provided states feedback. For example, some states did not include the percentage of core academic classes taught by teachers who are not highly qualified in their annual state report cards, as required. In addition, because some states inappropriately defined teachers as highly qualified, the data that these states reported to Education were inaccurate, according to a department official. In many states, the requirements for teachers were not sufficient to demonstrate subject matter competency. Since subject matter competency is a key part of the definition of a highly qualified teacher, such states' data on the extent to which teachers have met these requirements could be misleading. Education also found that a number of states were incorrectly defining districts as high-need, in order to make more districts eligible for partnerships with higher education institutions. According to Education, each of these states corrected its data, and the department will continue to monitor states to ensure they are using the appropriate data. In addition to Education's oversight efforts, OMB completed a PART assessment of NCLBA Title II in 2005 and rated the program as "moderately effective." While OMB noted that the program is well-managed, it also noted that the program has not demonstrated cost-effectiveness and that an independent evaluation has not been completed to assess program effectiveness. In response to OMB's assessment, Education took steps to more efficiently monitor states and conducted two program studies related to teacher quality. An Education official told us that the program studies had been conducted, but the department has not yet released the findings. In conclusion, the nation's public school teachers play a key role in educating 48 million students, the majority of our future workforce. Recognizing the importance of teachers in improving student performance, the federal government, through HEA and NCLBA, has committed significant resources and put in place a series of reforms aimed at improving the quality of teachers in the nation's classrooms. With both acts up for reauthorization, an opportunity exists for the Congress to explore potential interrelationships in the goals and initiatives under each act. While HEA and NCLBA share the goal of improving teacher quality, it is not clear to what extent they complement each other. Our separate studies of teacher quality programs under each of the laws have found common areas for improvement, such as data quality and assistance from Education. We have also found that states, districts, schools, and grantees under both laws engage in similar activities. However, not much is known about how well, if at all, these two laws are aligned. Thus, there may be opportunities to better understand how the two laws are working together at the federal, state, and local level. For example, exploring links between efforts aimed at improving teacher preparation at institutions of higher education and efforts to improve teacher quality at the school or district level could identify approaches to teacher preparation that help schools the most. Mr.
Chairman, this concludes my prepared statement. I welcome any questions you or other Members of this Subcommittee may have at this time. For further information regarding this testimony, please contact me at 202-512-7215. Individuals making key contributions to this testimony include Harriet Ganson, Bryon Gordon, Elizabeth Morrison, Cara Jackson, Rachel Valliere, Christopher Morehouse, and Jessica Botsford. | Teachers are the single largest resource in our nation's elementary and secondary education system. However, according to recent research, many teachers lack competency in the subjects they teach. In addition, research shows that most teacher training programs leave new teachers feeling unprepared for the classroom. While the hiring and training of teachers is primarily the responsibility of state and local governments and institutions of higher education, the federal investment in enhancing teacher quality is substantial and growing. In 1998, the Congress amended the Higher Education Act (HEA) to enhance the quality of teaching in the classroom and in 2001 the Congress passed the No Child Left Behind Act (NCLBA), which established federal requirements that all teachers of core academic subjects be highly qualified. This testimony focuses on (1) approaches used in teacher quality programs under HEA and NCLBA, (2) the allowable activities under these acts and how recipients are using the funds, and (3) how Education supports and evaluates these activities. This testimony is based on prior GAO reports. We updated information where appropriate. While the overall goal of Title II in both HEA and NCLBA is to improve teacher quality, some of their specific approaches differ. For example, a major focus of HEA provisions is on the training of prospective teachers while NCLBA provisions focus more on improving teacher quality in the classroom and hiring highly qualified teachers. Both laws use reporting mechanisms to increase accountability; however, HEA focuses more on institutions of higher education while NCLBA focuses on schools and districts. In addition, HEA and NCLBA grants are funded differently, with HEA funds distributed through one-time competitive grants, while Title II under NCLBA provides funds annually to all states through a formula. Both acts provide states, districts, or grantees with the flexibility to use funds for a broad range of activities to improve teacher quality, including many activities that are similar, such as professional development and recruitment. A difference is that NCLBA's Title II specifies that teachers can be hired to reduce class size while HEA does not specifically mention class-size reduction. Districts chose to spend about one-half of their NCLBA Title II funds on class-size reduction in 2004-2005. On the other hand, professional development and recruitment efforts were the two broad areas where recipients used funds for similar activities, although the specific activities varied somewhat. Many HEA grantees we visited used their funds to fill teacher shortages in urban schools or recruit teachers from nontraditional sources, such as mid-career professionals.
Districts we visited used NCLBA funds to provide bonuses, advertise open teaching positions, and attend recruitment events, among other activities. Under both HEA and NCLBA, Education has provided assistance and guidance to recipients of these funds and is responsible for holding recipients accountable for the quality of their activities. GAO's previous work identified areas where Education could improve its assistance on teacher quality efforts and more effectively measure the results of these activities. Education has made progress in addressing GAO's concerns by disseminating more information to recipients, particularly on teacher quality requirements, and improving how the department measures the results of teacher quality activities by establishing definitions and performance targets under HEA. While HEA and NCLBA share the goal of improving teacher quality, it is not clear to what extent they complement each other. States, districts, schools, and grantees under both laws engage in similar activities. However, not much is known about how well, if at all, these two laws are aligned. Thus, there may be opportunities to better understand how the two laws are working together at the federal, state, and local level. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Located just north of the equator in the Pacific Ocean are the two island nations of the Federated States of Micronesia (FSM) and the Republic of the Marshall Islands (RMI) (see fig. 1). The FSM, which comprises the states of Pohnpei, Chuuk, Yap, and Kosrae, had a population of about 107,000 in 2000, while the RMI had a population of 50,840 in 1999, based on the most recent census data. In 1947, the United Nations created the Trust Territory of the Pacific Islands (UN TTPI). The United States entered into a trusteeship with the United Nations Security Council and became the administering authority of the four current states of the FSM, as well as the Marshall Islands, Palau, and the Northern Mariana Islands. The trusteeship made the United States financially and administratively responsible for the region. The four states of the FSM voted in a 1978 referendum to become an independent nation, while the Marshall Islands established its constitutional government and declared itself a republic in 1979. Both locations remained subject to the authority of the United States under the trusteeship agreement until 1986. Late that year, an international agreement called the Compact of Free Association went into effect between the United States and these two new nations and provided for substantial U.S. direct economic assistance for 15 years in order to help both countries move toward a goal of economic self-sufficiency. The Department of the Interior's Office of Insular Affairs (OIA) has been responsible for disbursing and monitoring this direct economic assistance, which totaled almost $1.6 billion from 1987 through 1998. In 2000, we reported that both nations have made some progress in achieving economic self-sufficiency but remain heavily financially dependent upon the United States. In addition to economic assistance, under the Compact the United States provided access to federal services and programs, an obligation to defend the two Pacific Island nations, and migration rights. For its part, the United States received defense rights in these two countries under the Compact. The Compact exempts FSM and RMI citizens migrating to the United States from meeting U.S. passport, visa, and labor certification requirements of the 1952 Immigration and Nationality Act, as amended (P.L. 82-414). The same migration provisions are included in the 1994 Compact with the Republic of Palau. The migration provisions of the two Compacts also allow FSM, RMI, and Palau (or, collectively, Freely Associated States—FAS) migrants to enter into, lawfully engage in occupations, and establish residence in the United States (including all U.S. states, territories, and possessions) without limitations on their length of stay. U.S. Immigration and Naturalization Service (INS) officials have stated that these rights granted to FAS migrants are unique; there are no other nations whose citizens enjoy this degree of access to the United States. At the time of the original negotiations, the U.S. Compact negotiator stated that the Compact's migration rights were meant to strengthen ties between the United States and the Freely Associated States. All Compact migrants in the United States are legally classified as "nonimmigrants," a status that typically signifies nonpermanent visitors such as tourists or students.
However, while not legally classified as such, Compact migrants can behave similarly to “immigrants,” in that they can stay in the United States as long as they choose with few restrictions. Compact migrants can become U.S. citizens by applying for legal permanent resident status under standard immigration procedures. The Congress authorized compensation in the Compacts’ enabling legislation for U.S. island areas that might experience increased demands on their educational and social services by Compact migrants from these Pacific Island nations. Further, the legislation required the President to report and make recommendations annually to the Congress regarding adverse consequences resulting from the Compact. The Department of the Interior’s OIA has been responsible for collecting information regarding Compact impact on U.S. island areas. Interior stated in 1989 correspondence with the government of Guam that “social services” eligible for impact compensation include public health and public safety services, and that the costs of services provided by private or nongovernment agencies are not eligible for reimbursement. In addition to authorized financial compensation, the Compact provided another option for addressing the impact of migrants: certain nondiscriminatory limitations may be placed on the rights of these migrants to remain in U.S. territories and possessions. While Compact migrants can travel to any U.S. area (including the U.S. mainland), U.S. areas that have drawn migrants due to their close proximity are Guam, the Commonwealth of the Northern Mariana Islands (referred to as the CNMI), and the State of Hawaii. Hawaii had the largest year 2000 total population of the three U.S. island destinations, at 1,211,537, while the populations of Guam and the CNMI were 154,805 and 69,221, respectively. All three locations have opportunities in the areas of employment, education, health care, and social services that have attracted FAS migrants. Since the two Compacts were enacted, thousands of FAS citizens have migrated to U.S. island areas in the Pacific. According to the 1997 and 1998 OIA surveys, Guam had the highest number of Compact migrants at 6,550, followed by Hawaii and the CNMI. For those surveyed, the destination for migrants shifted from the CNMI prior to 1985 to Guam over the next decade and then to Hawaii in the mid-1990s (and now, reportedly, to the U.S. mainland). It is primarily for employment opportunities that migrants have been moving to U.S. areas, with more dependent family members of employed workers migrating since implementation of the Compacts. Educational opportunities have also served as a motivation to migrate. The majority of migrants were living in poverty in all three U.S. areas, with the CNMI having the lowest migrant poverty rates. Of note, the CNMI had the highest percentage of working-age FAS migrants participating in the labor force, at over 65 percent. In the three U.S. areas, many Compact migrants were working in jobs that required few skills and paid low wages, such as cleaning or food services. In addition, Compact migrants surveyed were not highly educated, with few having college degrees and just over 50 percent having graduated from high school. Thousands of FAS citizens have moved to U.S. areas in the Pacific under the Compacts, with the highest number of Compact migrants living in Guam.
According to OIA surveys, about 6,550 and 5,500 Compact migrants were living in Guam and Hawaii, respectively, in 1997, while 1,755 were living in the CNMI in 1998 (see table 1). This sums to more than 13,800 persons--far more than the 2,500 migrants living in these three U.S. areas who arrived prior to the Compacts. FAS migrants, who include those who arrived both before and after implementation of the Compacts, accounted for about 5 percent of Guam’s total population and around 4 percent of the CNMI’s total population. In contrast, they accounted for only 0.5 percent of Hawaii’s total population. There were substantially fewer Palauans living in the CNMI in 1998 who came during the Palau Compact period (1995-1998) than there were those who arrived prior to the Compact. In addition, while there are very few pre-Compact or Compact Marshallese migrants living in either Guam or the CNMI, more than 35 percent of all Compact migrants living in Hawaii were Marshallese in 1997. Forty percent of all FAS migrants in U.S. island areas at the time of the surveys were born in the FSM state of Chuuk, the poorest state in the FSM. These data may be an undercount of FAS migrants due to the methodology used to collect the information (see app. II). OIA 1997 and 1998 data show that FAS migrants surveyed migrated to Guam, Hawaii, and the CNMI at different points in time. As shown in table 1, the CNMI had the highest number of combined pre-Compact migrants from the FSM, the RMI, and Palau present in 1998 (1,192, compared to 730 for Guam and 610 for Hawaii). At the time of the OIA surveys in 1997 and 1998, the CNMI had received and retained slightly more migrants from the FSM, the RMI, and Palau (combined) for several years prior to 1985. The destinations of migrants from the FSM, the RMI, and Palau then shifted, with substantially more migrating to Guam over the next decade and then to Hawaii in the mid-1990s (see fig. 2). Of note, migration flows under the Compacts appear to have followed traditional migration patterns, with young males migrating first for employment, followed by migration of family members. Employment was the key reason cited by FAS migrants in the 1997 and 1998 OIA surveys for coming to Guam and the CNMI, at about 40 percent for each (more than 3,500 migrants in all). Migrants from the FSM and the RMI explained to us that they moved to the U.S. areas to find a job, given the lack of employment opportunities at home. In addition, over 20 percent of FAS citizens moved to Guam and the CNMI as dependents. In Hawaii, FAS migrants also chose employment and being dependents of an employed worker as key reasons for migrating (at 15 percent and 11 percent, respectively), as well as medical care (at 6 percent), in the OIA survey. However, a greater proportion of RMI migrants in Hawaii came for medical reasons (10 percent) than for employment (7 percent). Educational opportunities at both the college and high school levels have also served as motivations for migration, according to interviews with migrant communities and FAS officials in the three U.S. areas. The University of Hawaii has provided data to us showing that FAS student enrollment has risen since the Compacts were implemented. Specifically, FAS student enrollment in the University of Hawaii increased from 54 students in 1986 to 292 students in 1999. There were considerable differences in poverty rates and employment levels between the three U.S. island areas and among the migrant groups.
For example, the CNMI had the lowest rate of FAS migrants living in poverty, at about 51 percent, compared to 67 percent in Guam (see table 2). Poverty rates were generally higher for Compact migrants than they were for migrants who arrived prior to the Compacts. Labor force participation (those able and willing to work) and employment of migrants (those actually employed) differed between the three U.S. areas. Labor force participation and employment levels were the lowest in Hawaii, with 46 percent of working-age FAS migrants in the labor force and 39 percent employed. In contrast, in the CNMI, nearly 66 percent of working-age FAS migrants reported that they were in the labor force, and nearly 60 percent were employed. Guam was in the middle, with 58 percent in the labor force and 52 percent employed. Compact migrants who found employment in U.S. areas had primarily private sector jobs requiring few skills and paying low wages. U.S. island government officials and migrant community members told us that Compact migrants often accept jobs that local workers refuse to take. An official representing the garment manufacturing industry in the CNMI noted that FAS employees are good workers and are rarely absent from work. In Guam (1997), Compact migrants largely worked in retail (drinking and eating establishments), hotels and motels, and construction, according to the OIA surveys. In Hawaii (1997), Compact migrants also largely worked in retail, followed by agriculture and business services (such as cleaning). In contrast, Compact migrants in the CNMI (1998) largely worked in apparel manufacturing, followed by retail, hotels and motels, and transportation and communications. Compact migrants have obtained limited education, according to the 1997 and 1998 OIA surveys. Just over half of all Compact migrants age 25 and older had received their high school diplomas, less than 2 percent had earned 4-year college degrees, and less than 4 percent had earned 2-year community college degrees. The 1997 and 1998 OIA surveys show that Hawaii had the largest portion of Compact migrants with high school degrees, at 55 percent, while about 50 percent of Compact migrants in Guam and 44 percent in the CNMI had high school degrees. Moreover, the percentage of FAS migrants from the FSM in Guam with high school degrees decreased during the 1990s, while rising and falling over this time period in the CNMI. According to OIA survey data, a larger portion of FAS migrants from Palau had high school degrees than other FAS migrants. Guam, Hawaii, and the CNMI governments have identified significant Compact migration impact. The three U.S. island areas have estimated costs to local governments of at least $371 million for 1986 through 2000 that are associated with services provided to migrants from the FSM, the RMI, and Palau. All three U.S. island areas have reported that costs have been concentrated in the areas of health and education, though other costs have also been identified. Finally, concerns have been raised by all three U.S. areas, though primarily Hawaii, about public health problems associated with Compact migrants. Of note, U.S. island area impact estimates do not include the positive impact of FAS migrants. While all three U.S.
island area governments have acknowledged that FAS migrants have had positive impacts, such as contributing to the tax base and filling employment needs, the Compact’s enabling legislation specifically requires reports on adverse impact and does not request information regarding positive impact. Regarding the impact of migration on the FSM and the RMI, the populations of both nations have shown reduced growth in recent years despite continued high birth rates, and government officials in both countries view the Compact’s migration provisions as critical to providing migrants with economic opportunities that are not available in these small countries. The governments of Guam, Hawaii, and the CNMI, which have estimated Compact migrant impact that collectively totals between $371 million and $399 million, have determined that the cost of FAS migrants to the local governments has been significant. Guam’s total estimate for the entire Compact period (1986-2000) accounts for about half ($180 million) of the total impact estimate range for all three areas (see table 3). The CNMI also has impact estimates for the entire Compact period and has a total impact estimate range of $105 million to $133 million. Hawaii has prepared estimated impact costs only for 1996 through 2000, though these reports identify some costs for earlier years. Thus, for the most part, Hawaii does not have estimates for 10 years that are covered by the other two areas (1986-1995). Hawaii has identified about $86 million in total impact costs. Costs for the three areas have been focused in the areas of health care and education, though public safety and welfare costs have also been identified. While the reported impact costs of Guam and Hawaii have been increasing over time, the CNMI’s impact estimates decreased by almost 40 percent from fiscal year 1998 to fiscal year 2000. This reduction is reportedly due to a decreasing presence of FAS migrants in the CNMI. The 2000 impact estimates prepared by the three areas showed that impact amounts represented about 7 percent, 0.5 percent, and 4 percent of the budget revenues of Guam, Hawaii, and the CNMI, respectively, for that year. The health care systems of the FSM and the RMI are viewed by U.S. and U.S. island area governments as inadequate to meet the needs of the population, providing an incentive to travel or move to the United States in order to receive appropriate health care. Health costs were the greatest area of impact for the CNMI in 2000. In that year, 43 percent ($4 million) of all identified CNMI impact costs were related to health care. Emergency, general, dental, and pediatric care provided by the CNMI Department of Public Health (the government agency responsible for providing health services and administering the Community Health Center) were identified as high-cost migrant services. According to a CNMI Department of Public Health Services official, neonatal intensive care is a key issue for FAS migrants. This official reported that expectant mothers often have no insurance and have no prenatal care at all until they arrive at the Community Health Center, ready to deliver. Guam’s largest single area of impact in health in its 2000 impact assessment was identified as unpaid services by Guam Memorial Hospital (which receives government funding) to FAS patients, totaling over $5.4 million in 2000.
Officials from Guam Memorial Hospital expressed frustration with FAS patients and noted that these patients often rely on the hospital’s emergency room for primary health care and that many conditions treated are not urgent. The emergency room treats about 3,000 patients per month; about 350 of those patients (12 percent) are FAS patients (compared with FAS representation of 5 percent of Guam’s population). As in the CNMI, problems with expectant FAS mothers arriving at the hospital close to delivery and with no prior prenatal care were mentioned. The Governor of Guam told us that in his view the U.S. naval hospital on Guam is underutilized and could provide care for FAS migrants. Hawaii’s government health-related cost of $3.7 million in 2000 went to support FAS migrants who, as of April 2000, no longer receive federal health benefits due to welfare reform legislation. These health benefits for FAS migrants are now funded solely by the state. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996, as amended (P.L. 104-193), referred to as the Welfare Reform Act, cut certain federal public benefits to some legal aliens, including migrants who enter under the Compact. Medicaid, a program that provides funding for health care to low-income individuals and whose costs are shared between the federal and state governments, is one of the federal programs no longer available to FAS migrants. This loss of eligibility has been cited as a reason for expected increases in impact costs for Hawaii, as the state has decided to provide state funding in place of lost federal funds. A Hawaii Department of Health official noted that it is illogical for the United States to make migration to the United States easily accessible for poor FAS citizens but then make health care difficult to obtain. Inadequate school systems in the FSM and the RMI are viewed by FAS and U.S. island governments as another reason for migration. For example, an RMI government official said that RMI schools are very bad, with insufficient supplies and unqualified teachers, and Marshallese citizens migrate in search of better educational opportunities. According to education officials in Guam and the CNMI, there is an incentive for FSM students to come to those two U.S. locations for public education, as teachers in the FSM do not have 4-year university degrees and the education infrastructure is inadequate. In parts of the FSM only a portion of students are selected to attend high school. Some portion of the students who are not selected, as well as those who live in areas with insufficient school facilities, then reportedly move to U.S. areas to attend high school. FSM and RMI migrants told us that they moved to U.S. areas to attend school themselves or to enroll their children in school. Guam’s and Hawaii’s costs in 2000 were primarily in public education, at $17 million and $10.6 million, respectively (54 percent and 58 percent of total estimated impact). The CNMI’s education costs were $2.8 million (31 percent of total impact costs) in 2000. In their most recent impact reports, FAS students accounted for about 11 percent, 1 percent, and 9 percent of the total student population in Guam, Hawaii, and the CNMI, respectively. Officials from the Departments of Education in Guam and Hawaii noted that FAS students have a tendency to be rather transient, entering and leaving school a few times each year.
Moreover, education officials in Guam and the CNMI said that some FAS students have never been in a school classroom prior to moving to a U.S. area. This makes their integration into the school system difficult. Calculations prepared by the U.S. island governments may have underestimated certain education costs, as not all students and not all costs were captured. For example, officials from the Hawaii Department of Education told us that, rather than calculating costs for all FAS students, they only estimated costs associated with FAS students who participated in the state’s English as a Second Language program. Further, education officials in all three U.S. locations told us that, while education costs were calculated based on an average cost per student for the entire student population, FAS students have higher costs than other students due to poor language and other skills. None of the areas quantified the costs associated with additional efforts required to assist FAS students. The three island areas have also identified other Compact impact costs, though all were small in comparison to those related to health and education and accounted for about 25 percent or less of total impact costs in the most recent impact estimates. For fiscal year 2000, Guam identified $4.6 million in costs related to public assistance programs and the Department of Corrections. For 2000, Hawaii estimated $3.2 million in welfare assistance provided to FAS migrants. Finally, for fiscal year 2000, the CNMI calculated an additional $2.4 million, which is almost entirely attributable to its Department of Public Safety (which includes police, fire, and corrections services). In addition to financial costs, public health concerns have been raised as migrant impacts, particularly by Hawaii, due to the number of Compact migrants with communicable diseases entering U.S. island areas. For example, in its 1997 impact assessment, Hawaii stated that public health was the state’s most pressing concern and noted a recent outbreak of Hansen’s Disease (leprosy) on the island of Hawaii. A CNMI Department of Public Health Services official also told us that the number of cases of tuberculosis and Hansen’s Disease diagnosed for FAS citizens is increasing, and a Guam Department of Public Health and Social Services official reported that concerns exist regarding communicable diseases, low immunization rates, and noncompliance with treatment regimens for FAS migrants. Hawaii Department of Health officials told us that controlling communicable disease problems within FAS communities can be difficult; migrants do not seek regular medical attention and so require extensive outreach from the Department in order to identify and effectively treat communicable diseases. Further, Department officials noted that health screenings of Compact migrants are not required for entry into the United States, preventing the identification and treatment of communicable diseases prior to arrival in a U.S. area. INS officials confirmed that health screenings are not required of Compact migrants, nor are they enforced for any nonimmigrant group. INS officials told us that the agency “is the first line of defense” for identifying travelers to the United States who may have communicable health problems. They acknowledged, however, that INS officers are not trained or qualified in this area, and it is difficult for them to identify travelers with health problems.
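Several of the population and cost figures above lend themselves to a quick arithmetic cross-check. The short Python sketch below redoes the sums using only numbers quoted in this report; as in the report itself, the 1997 and 1998 survey counts are set against year 2000 census populations, so the shares are approximate.

```python
# Cross-check of the migrant counts and population shares quoted above.
# All inputs are figures cited in this report; nothing here is new data.

compact_migrants = {"Guam": 6_550, "Hawaii": 5_500, "CNMI": 1_755}   # 1997/1998 OIA surveys
pre_compact      = {"Guam": 730,   "Hawaii": 610,   "CNMI": 1_192}   # arrived before the Compacts
population_2000  = {"Guam": 154_805, "Hawaii": 1_211_537, "CNMI": 69_221}

total = sum(compact_migrants.values())
print(f"Compact migrants in the three areas combined: {total:,}")
# -> 13,805, the "more than 13,800 persons" cited above

for area in compact_migrants:
    fas_total = compact_migrants[area] + pre_compact[area]
    share = fas_total / population_2000[area]
    print(f"{area}: {fas_total:,} FAS migrants = {share:.1%} of total population")
# -> Guam ~4.7% ("about 5 percent"), Hawaii ~0.5%, CNMI ~4.3% ("around 4 percent")
```

The emergency room figure quoted earlier checks the same way: 350 of roughly 3,000 monthly patients is about 12 percent, against a 5 percent FAS share of Guam's population.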
The populations of both nations have been affected by the out-migration provided for under the Compact. From 1989 to 1994, the FSM population grew 1.9 percent annually, from 95,740 to 105,506. During the next 5-year period, the FSM population grew by only about 1,500 people (0.2 percent annually) to reach about 107,000. This very small increase demonstrates that the FSM population has almost stopped growing in recent years, reportedly due to out-migration. Birth rates remain high in both countries. FSM government officials expressed the view that migration rates are increasing and cited a reduction in government jobs following cuts in U.S. funding as a key reason why FSM citizens have migrated in recent years, in addition to employment and education opportunities and access to health care. Because of out-migration, population growth in the RMI since 1988, when the population was 43,380, has slowed considerably to 1.5 percent annually. This is the lowest rate of growth since 1958. The 1999 RMI census reported 50,840 persons in the RMI, which was about 10,000 fewer people than the RMI government had projected. Emigration is reported as the primary reason for the lower population growth. RMI government officials told us that the rate of migration out of the RMI has increased over the past 5 years and cited the recent public sector reform program, which eliminated government jobs, as a key reason for this increase. Government officials from the FSM and the RMI view the Compact’s migration rights as key to easing problems associated with limited economic opportunities and population growth on these small island nations. While the RMI government does not have an official policy regarding migration, it has published a document encouraging overseas employment, and a recent draft planning document suggested that the government “encourage the emigration of whole families, equivalent to the annual population increase (1,500-2,000 persons) to permanent residence overseas.” The FSM government does not have an official policy on emigration. The Compact and its enabling legislation include two options to address the impact of migrants. The law, which states that the Congress will act “sympathetically and expeditiously” to redress adverse consequences of Compact implementation, provided authorization for appropriation of funds to cover the costs incurred, if any, by Hawaii, Guam, and the CNMI resulting from any increased demands placed on educational and social services by migrants from the FSM and the RMI. Guam has received about $41 million in compensation and the CNMI has received almost $3.8 million. Hawaii has received no compensation. Further, the Compact states that nondiscriminatory limitations may be placed on the rights of Compact migrants to establish “habitual residence” (continuing residence) in a territory or possession of the United States. Such limitations went into effect in September 2000 and are viewed by INS officials we interviewed as difficult to enforce and, therefore, unlikely to have much impact. The extent to which these two options have been used has not met with the satisfaction of any of the three U.S. island area governments, who believe, among other things, that additional funding for impact costs is necessary. Compact enabling legislation required the President (who designated the Department of the Interior as the responsible agency) to prepare annual Compact impact reports and submit them to the Congress.
While these reports do not require any action, they can serve as a tool to assist the U.S. government in determining whether and how to address Compact impact. However, only seven of these reports have been prepared during the 15-year Compact period. Further, Interior has taken limited action to ensure that U.S. island areas estimate impact costs consistently, resulting in reports that contain varying information for each U.S. island area and do not easily allow comparisons to determine relative impact across locations. Two specific options are available in the Compact and its enabling legislation to address Compact impact: financial compensation and limitations on the rights of FAS migrants to establish continuing residence in a U.S. territory or possession. U.S. government use of these options has not satisfied the Guam, Hawaii, or CNMI governments. Financial compensation provided to Guam, Hawaii, and the CNMI to address migration impact has been far less than the impact estimated by the three area governments and submitted to Interior. From the enactment of the Compact with the FSM and the RMI through 2001, the U.S. government has provided approximately $41 million in impact compensation to Guam, compared with the $180 million in increased costs the territory has estimated it has incurred from 1986 through 2000 (i.e., about 23 percent of total estimated impact costs). The Commonwealth of the Northern Mariana Islands has received $3.8 million in compensation from 1986-2001, compared with $105 million to $133 million in estimated costs from 1986-2000 (less than 4 percent of total costs). While Hawaii has estimated $86 million in Compact impact costs, the state has received no compensation. The Compact’s enabling legislation does not require compensation. Rather, it authorizes appropriations to cover certain impact costs and notes that the Congress will act “sympathetically and expeditiously” to redress adverse consequences. An OIA official noted that the reality of budget constraints has prevented compensation to the extent that impact has been incurred. However, government officials from Guam, Hawaii, and the CNMI have expressed frustration that these island areas are bearing the costs of a federal decision to allow unrestricted migration through the Compact and believe that compensation levels have been inadequate. Compensation funding received by Guam has not, for the most part, been used in the areas of health and education—the areas that have experienced the greatest migrant impact. As a result of U.S. legislative requirements, the government of Guam was directed to use a majority of its impact compensation funding for capital improvement projects. For fiscal years 1996 through 2001 (when more than $35 million was specifically provided to Guam through legislative action rather than through OIA’s technical assistance account), we determined that Guam has spent or plans to spend almost $10 million for road paving, almost $8 million for water projects, more than $4 million for equipment for Guam Memorial Hospital, and more than $4 million for gyms. Prior to 1996, Guam received about $850,000 from OIA’s technical assistance account for a Compact Impact Information and Education Program, established by the government of Guam to “develop and implement information, educational, and organizational activities to assist FSM and RMI citizens in receiving the support and assistance [...] cultural integrity, integration, equity, and productivity.” This program is no longer operating.
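Two other sets of figures above can be reproduced with simple arithmetic: the implied annual growth rates of the FSM and RMI populations, and the share of estimated impact costs that compensation has covered. A minimal Python sketch, using only numbers quoted in this report (taking the midpoint of the CNMI's $105 million to $133 million cost range is an illustrative choice, not the report's method):

```python
# Implied compound annual growth rates and compensation coverage,
# recomputed from figures quoted in this report.

def annual_growth(start: int, end: int, years: int) -> float:
    """Compound annual growth rate implied by two census counts."""
    return (end / start) ** (1 / years) - 1

print(f"FSM 1989-1994: {annual_growth(95_740, 105_506, 5):.2%} per year")
# -> ~1.96%, consistent with the "1.9 percent annually" reported above
print(f"RMI 1988-1999: {annual_growth(43_380, 50_840, 11):.2%} per year")
# -> ~1.45%, consistent with the "1.5 percent annually" reported above

# Compensation received versus estimated impact costs, in $ millions.
estimated_costs = {"Guam": 180.0, "CNMI": (105.0 + 133.0) / 2, "Hawaii": 86.0}
compensation    = {"Guam": 41.0,  "CNMI": 3.8,                 "Hawaii": 0.0}

for area, cost in estimated_costs.items():
    print(f"{area}: {compensation[area] / cost:.0%} of estimated costs compensated")
# -> Guam ~23%, CNMI ~3% ("less than 4 percent"), Hawaii 0%
```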
In addition to funding for Guam Memorial Hospital listed above, Guam has also used earlier impact funding in ways that directly addressed Compact migrant impact costs. For example, Guam used $600,000 received from OIA’s technical assistance account in 1994 to partially reimburse expenditures made by the Department of Public Health for assistance provided to FSM and RMI citizens. According to a CNMI government official, the CNMI has used, or is planning to use, most of its compensation funding for agencies affected by Compact migration impact: the Department of Public Health, the Department of Public Safety, and the Public School System. The governments of Guam and the CNMI believe that restricting the use of compensation funding to capital improvement projects does not target the money where it could be best used. A second option available to address Compact impact—limiting the length of stay in some U.S. areas of certain Compact migrants—was implemented 14 years after the Compact went into effect, and its enforcement and impact may be limited, according to the INS. The Compact states that nondiscriminatory limitations can be placed on the rights of Compact migrants to establish “habitual residence” in U.S. territories or possessions. However, because the CNMI already controls its own immigration and Hawaii is not a territory or possession, Guam is the only potential beneficiary of such limitations for all practical purposes. Habitual residence limitations for Guam, as well as certain other limitations on all aliens living in the United States, are the only means of regulating the ability of Compact migrants to stay in U.S. areas indefinitely. In its annual impact reports, OIA’s one consistent recommendation for reducing impact has been to implement habitual residence restrictions. Immigration legislation passed in 1996 states that not later than 6 months after the date of enactment of the act, the Commissioner of Immigration and Naturalization shall issue regulations governing rights of “habitual residence” in the United States under the terms of the Compacts. These regulations were not implemented until 4 years later, in September 2000, and define habitual residents, in part, as those FAS migrants who have been in a U.S. territory (i.e., Guam) for a total of 365 cumulative (i.e., not consecutive) days. The regulations provide that, in part, habitual residents are subject to removal if they are not, and have not been, self-supporting for a period exceeding 60 consecutive days for reasons other than a lawful strike or other labor dispute involving work stoppage, or have received unauthorized public benefits by fraud or willful misrepresentation. “Self-supporting” is defined, in part, as having a lawful occupation of a current and continuing nature that provides 40 hours of gainful employment each week, or (if unable to meet the 40-hour employment requirement) having lawfully derived funds that meet or exceed 100 percent of the official poverty guidelines for Hawaii. Officials from INS believe that these regulations will be difficult to enforce and so will have little impact in Guam. They have stated that this is primarily due to the fact that Compact migrants, like all other nonimmigrants, are not tracked once they arrive in a U.S. area because the INS cannot devote the resources necessary to do so. There is no way of knowing where a Compact migrant is living unless, for example, the migrant is arrested for a crime and reported to the INS.
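The removal test in the September 2000 regulations can be stated compactly. The sketch below encodes the rule as described above; the field names, data structure, and poverty-guideline dollar figure are illustrative assumptions for this sketch, not the actual regulatory text or any INS system.

```python
# Illustrative encoding of the habitual-residence removal test described
# above. Field names and the poverty threshold are assumptions; they are
# not taken from the regulations themselves.

from dataclasses import dataclass

HAWAII_POVERTY_GUIDELINE = 11_610  # hypothetical annual dollar figure

@dataclass
class Migrant:
    cumulative_days_in_territory: int  # days in Guam; need not be consecutive
    weekly_work_hours: float           # lawful, continuing employment
    lawful_annual_income: float        # lawfully derived funds
    days_not_self_supporting: int      # longest consecutive gap, in days
    gap_due_to_lawful_strike: bool     # lawful strike or other labor dispute
    benefits_by_fraud: bool            # unauthorized public benefits by fraud

def is_habitual_resident(m: Migrant) -> bool:
    # 365 cumulative (not necessarily consecutive) days in the territory
    return m.cumulative_days_in_territory >= 365

def is_self_supporting(m: Migrant) -> bool:
    # 40 hours of gainful employment each week, or lawfully derived funds
    # at or above 100 percent of the official poverty guidelines for Hawaii
    return (m.weekly_work_hours >= 40
            or m.lawful_annual_income >= HAWAII_POVERTY_GUIDELINE)

def subject_to_removal(m: Migrant) -> bool:
    if not is_habitual_resident(m):
        return False
    if m.benefits_by_fraud:
        return True
    return (not is_self_supporting(m)
            and m.days_not_self_supporting > 60
            and not m.gap_due_to_lawful_strike)
```

The rule is simple to state, but every input (days present, hours worked, income) is information the INS says it does not collect for nonimmigrants once they arrive, which is the enforcement problem described next.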
An INS official in Guam reported that the INS has taken no specific action there to enforce the habitual residence regulations. A Guam government official said that while the issuance of the regulations puts the INS more “on track,” there is still the problem of tracking migrants, and the INS will not be able to deal with tracking habitual residents in Guam. This official also expressed the view that the regulatory language that the INS will share migrant entry data with Guam, which can assist in collaborative tracking and enforcement efforts, on an “as-needed basis” is too limited. Finally, this official noted that even if a habitual resident is deported from Guam, this person can reenter under a different name (and will thus avoid detection by the INS as a migrant that should be denied entry). The CNMI controls its own immigration and so has had the option to unilaterally impose nondiscriminatory limitations on habitual residence since the Compacts with the FAS countries were implemented. The Governor of the CNMI told us that it would be very hard for the CNMI to take such action, for social and cultural reasons. Nonetheless, the CNMI is now studying the issue and considering whether to establish limitations on habitual residents. A Department of the Interior official told us that OIA told the CNMI to wait to issue habitual residence limitations until the INS had issued its regulations. This way, the CNMI could draft its limitations in conformity with those of the INS, resulting in a single policy regarding habitual residence. One tool available to the U.S. government in determining whether and how to address the effect of the Compacts is the impact reports required in Compact enabling legislation. These reports are to discuss, among other things, the adverse consequences of migration under the Compacts. However, OIA, despite the legal requirement for Interior to report annually to the Congress on impact and provide recommendations to address this impact, has only prepared reports in 1989 and 1996-2001. OIA officials told us that no reports were prepared between 1989 and 1996 because the Congress was not interested in this issue. OIA also noted that while impact reports were not submitted to the Congress, mention was made of Compact impact as part of OIA’s annual appropriations hearing. OIA has based its assessments of impact on the estimates prepared by the governments of Guam, Hawaii, and the CNMI. It has also based its reports on the previously discussed FAS migrant population surveys it has funded using technical assistance from the U.S. Bureau of the Census. These reports have found that Compact impact has been substantial and have included general recommendations for addressing impact, such as proposing the establishment of nondiscriminatory limitations on the long-term residency of Compact migrants. In November 2000, amendments to the Organic Act of Guam (P.L. 106-504) changed the party responsible for reporting on impact from Interior to the governors of Hawaii and U.S. territories or commonwealths. OIA has also not ensured that the three U.S. island areas use, to the extent possible, uniform approaches to calculate impact. Interior issued guidelines in 1994 on how to calculate impact costs and which costs to include. These guidelines were issued in response to a 1993 report by the Department of the Interior’s Office of the Inspector General that recommended that OIA develop and disseminate guidelines and procedures for use in determining Guam’s Compact impact costs.
These guidelines, which were drafted in the context of reviewing a Guam impact assessment for 1993, addressed key areas such as health and education. The guidelines also stated that a “baseline” population of migrants who were on Guam prior to the Compact (and whose impact thus, in Interior’s view, cannot be easily attributed to the Compact) should be subtracted from Guam’s impact estimate. Guam subsequently took action to implement OIA’s guidelines. However, while an OIA official reported that he submitted these guidelines to the CNMI, officials from the CNMI could not confirm ever receiving such guidelines and stated that their impact assessments have not been based on OIA guidelines. A Hawaii State official said that while the state did not receive written guidelines from OIA, verbal discussions were held regarding issues such as using per student costs for education estimates. Further, OIA has not provided consistent review of impact estimates submitted by the three U.S. island area governments. For example, while OIA has instructed Guam and the CNMI in its annual reports and elsewhere to remove pre-Compact migrants from impact estimates (i.e., subtract a “baseline”), no such guidance has ever been provided to Hawaii in OIA’s annual reports, despite the fact that Hawaii has estimated costs for all FAS migrants living in the state and had a pre-Compact migrant population that was roughly comparable to that of Guam. In addition, OIA’s 1999 impact report mentioned Hawaii’s lost revenues resulting from the fact that FAS college and university students pay resident tuition, after having noted in its 1994 guidelines that higher education costs could not be justified for Guam. OIA has also not addressed the fact that many of Guam’s impact estimates do not include Palauans (while estimates for the other two areas do include this group of migrants) and that Hawaii no longer includes public safety costs in its assessments. As a result, all three areas have included different areas of impact and have defined the impact population differently; their impact estimates, though providing valuable data, therefore do not easily allow comparisons across U.S. island areas to determine actual relative impact. OIA itself noted in its impact report for 2000 that determining the costs of providing services to migrants has become increasingly difficult, in part, because “[t]here is no consistent methodology among U.S. areas for measuring the cost of providing services to migrants” and “[t]he type of impact and the concerns vary among the reporting areas.” Changes in the level of future U.S. economic assistance may alter the rate of migration. For example, significant reductions in aid to the FSM and the RMI that reduce government employment would be expected to spur migration. On the other hand, targeting future U.S. assistance to the FSM and the RMI for education and health purposes could reduce some of the motivation to migrate (although migration will continue as long as employment opportunities in both countries remain limited). Further, improvements in migrant health and education status would be expected to reduce migrant impact in U.S. destinations. Thus, changes in future U.S. assistance could have repercussions for the FSM and the RMI, as well as any U.S. location receiving migrants. Additionally, changes in Compact provisions, such as requiring health screening, could reduce the impact of migrants on U.S.
areas, though government officials from the two Pacific Island nations do not view migration provisions as subject to renegotiation. To date, no formal demographic or economic analysis of changes in economic assistance has been completed. However, officials in the United States, the FSM, and the RMI draw on their past experience with Compact migration to project how proposed changes in Compact assistance could affect migration levels. They also have views on how changes in health and education may affect the impact of migrants on U.S. destinations. Officials in U.S. island areas seeking to reduce adverse impact advocate certain changes in Compact assistance and in migration provisions. Past reductions in U.S. assistance appeared to promote migration, and future reductions could be expected to have a similar impact. Reductions in U.S. assistance to the FSM and the RMI occurred twice during the Compact. The second reduction occurred in October 1996 and lowered U.S. Compact funds for government operations by 17 percent in the FSM and 9 percent in the RMI. Both countries reduced their public sectors: FSM government employment fell by 24 percent between 1995 and 1997, while the RMI reduced government employment by 36 percent between 1995 and 2000. Reduced Compact funding increased migration, according to FSM and RMI government officials and migrants we met with. Regarding the current negotiations, FSM and RMI officials project increased migration if the United States reduces its assistance to their nations. For example, the FSM analysis of proposed lower U.S. assistance levels concludes: “The economy would be caught in a vicious circle of low growth, compounded by a series of shocks requiring downward adjustment, loss of real incomes, unemployment, and outward migration.” An OIA official noted that a reduction in Compact funding may lead to greater migration, but only very marginally. Officials from the FSM and the RMI believe that migration will tend to favor the U.S. mainland, bypassing Guam, Hawaii, and the CNMI. Efforts to target assistance to the health and education sectors in the FSM and the RMI might reduce migration levels and the impact of migration on the U.S. areas. The U.S. Compact negotiator testified before the Congress in June 2000 that the United States intends to provide future funds in targeted grants that would include the areas of health and education. An emphasis on health spending in both countries, where health services are inadequate, might reduce the number of citizens who go to the three U.S. island areas, where health officials report that some migrants come specifically for medical treatment. For example, after the FSM state of Pohnpei stopped providing hemodialysis (blood purification), FSM citizens showed up more frequently for treatment in the CNMI. Further, improvements in FSM and RMI health care systems that better the health of migrants and improve access to quality health care might also reduce migration impact on U.S. areas. The State Department Compact negotiator has said that such targeted spending would reduce incentives to migrate and would ensure that those who do migrate are in a better position to contribute to their new communities.
According to Hawaii health officials, Compact spending should go to ensure that the FSM and the RMI can offer competent basic primary health care, specifically for immunizations and prenatal care, and to address tuberculosis, Hansen’s Disease, hepatitis, and diabetes in an effort to reduce the incidence of these health problems in Hawaii. Regarding future U.S. Compact health care assistance, Guam health officials said health care funds should be spent on prenatal care and on communicable and vaccine-preventable diseases, with the stipulation that the funding be provided under strict guidelines for its use. FSM officials report that migration might slow if FSM health care equaled that of the U.S. mainland. RMI officials believe that any increases in health spending would discourage migration, as they noted that the better health care available in the United States is a motivation for migration. Increased Compact spending on education might reduce migrant impact costs. Hawaii education officials noted that FAS teachers often have limited credentials and that it takes migrant children 5-7 years to attain average literacy. Similarly, University of Hawaii faculty reported that FAS students required remedial course work. The implication is that better FAS education would enable FAS students to perform better in U.S. schools. Similarly, Guam believes that increased spending on education in the FAS would likely reduce migration demands on Guam’s education system. FSM officials believe that increased spending on education could reduce the migration of whole families and could improve economic opportunities in the FSM. However, they also reported that increased education funding would increase the number of people migrating to attend U.S. colleges. Similarly, RMI officials believe that increased education spending would discourage migration. One RMI official doubted that the RMI education system could be quickly fixed, which leaves migration as the best option. The U.S. Compact negotiator testified before the Congress in June 2000 that the United States intends to seek changes in Compact migration provisions to reduce the adverse impact on the United States. Two possible changes have been mentioned by the U.S. negotiator: establishing a system of health screening, to ensure that contagious individuals receive treatment in order to protect public health; and requiring a passport, in order to better screen out criminals, determine admissibility for entry to the United States, and facilitate entry for FAS travelers. Health officials in Hawaii believe that migrants should be screened prior to leaving the FSM and the RMI and only allowed to enter the United States if they are noninfectious. In Hawaii, public health nurses reiterated that many of the problems that have occurred in Hawaii are associated with treatable, but communicable, diseases. The FSM government does not believe that health screening is its responsibility or that it is practical. However, FSM officials believe that criminal migrants hurt the standing of all migrants, and the FSM is considering requiring a passport before leaving the FSM. RMI officials believe that health screening would be helpful to both the RMI and the United States; therefore, they support “minimal” screening for health problems. Regarding a passport requirement, INS officials support this option, although they have pointed out that a key issue would be to ensure that passports are secure.
The possibility that the United States might seek restrictions on migration is of concern to both countries. The FSM has responded to the United States that FSM negotiators do not have the authority to discuss migration. Further, the FSM said that any changes made should be to “facilitate migration.” RMI government officials told us that the migration benefits are not up for renegotiation and that they are very important for the country, providing a “critical safety valve.” FAS migration has clearly had a significant impact on Guam, Hawaii, and the CNMI and has required government services in key areas. Compact migrants have required local expenditures in areas such as health and education and, further, have particularly affected the budgetary resources of Guam and the CNMI—U.S. island locations that have relatively small populations and budgets, and economies that have recently suffered economic setbacks. The budgetary impact on Hawaii is relatively smaller but can be expected to grow as Hawaii begins to absorb health care costs that were once covered by the U.S. government. Public health problems are also an important concern for all three U.S. island areas. Because the Compact allows FAS migrants who have limited financial means and ability to pay for health care to enter the United States with few restrictions, U.S. island areas are absorbing much of the health care costs of this poor population. Further, Guam, Hawaii, and the CNMI can be expected to continue to experience Compact impact as long as current poor economic conditions persist in the FSM and the RMI. Targeting future U.S. assistance to the FSM and the RMI for education and health purposes could reduce some of the motivation to migrate, and improvements in migrant health and education status might be expected to reduce migrant impact in U.S. destinations. We recommend that the Secretary of State direct the U.S. Compact Negotiator to consider how to target future health and education funds provided to the FSM and the RMI in ways that also effectively address adverse migration impact problems identified by Guam, Hawaii, and the CNMI. For example, the U.S. Negotiator could consider whether a specified portion of the health sector assistance should be targeted at treating and preventing the communicable diseases in the FSM and the RMI that are a public health concern in Guam, Hawaii, and the CNMI. We received comments from the Department of the Interior, the Department of State, and the Immigration and Naturalization Service, as well as from the governments of the FSM, the RMI, Guam, Hawaii, and the CNMI. These agencies and governments generally agreed with our findings, but each had concerns regarding the scope and content of various issues addressed in the report. Of those who addressed our recommendation, State agreed with us, Guam and the CNMI stated that the recommendation should address the lack of employment in the Pacific Island nations, Hawaii proposed that health and education funding be provided only under strict grant conditions, and the FSM felt that the recommendation was unnecessary. Their comments and our responses can be found in appendixes III through IX.
We are sending copies of this report to the Secretary of the Interior, the Secretary of State, the Commissioner of the Immigration and Naturalization Service, the President of the Federated States of Micronesia, the President of the Republic of the Marshall Islands, the President of the Republic of Palau, the Governor of Guam, the Governor of the state of Hawaii, the Governor of the Commonwealth of the Northern Mariana Islands, and to interested congressional committees. We will also make copies available to other interested parties upon request. If you or your staff have any questions regarding this report, please call me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix X. Based on a request from the Chairman of the House Committee on Resources; the Ranking Minority Member of the House Committee on International Relations; the Chairman of the House Committee on International Relations, Subcommittee on East Asia and the Pacific; and Congressman Doug Bereuter, we (1) identified migration under the Compact (migrant destinations, population size, and characteristics); (2) assessed the impact of this migration on U.S. island areas and the sending nations; (3) determined the use of available options to address impact on U.S. island areas; and (4) explored ways in which future changes in Compact provisions and assistance levels might affect migration levels and impact. For our first objective, we reviewed data contained in surveys funded by the Department of the Interior’s Office of Insular Affairs (OIA) using assistance from a U.S. Bureau of the Census official. These surveys captured the number and characteristics of migrants from the Federated States of Micronesia (FSM), the Republic of the Marshall Islands (RMI), and the Republic of Palau in Guam (surveys for 1992 and 1997), Hawaii (survey for 1997), and the Commonwealth of the Northern Mariana Islands (CNMI) (surveys for 1993 and 1998). (Our review includes data for the Republic of Palau for our first two objectives because Palauan information regarding impact cannot be disaggregated from the other two Pacific Island nations. Further, the Compact of Free Association with Palau also allows for compensation for costs incurred by U.S. areas as a result of Palauan migrants.) We reviewed these surveys to identify the number of migrants who were living in these three U.S. island areas at the time the surveys were conducted, including migrants who moved before and after implementation of the Compacts of Free Association with the FSM and the RMI in 1986 and the Compact of Free Association with Palau in 1994. We also reviewed these data to identify key characteristics of these migrants such as their reasons for migrating, age and education levels, employment situation, and poverty status. We focused our assessment on the most recent survey data. We also reviewed additional documents, such as the 1995 CNMI census and a Guam assessment of Micronesians on the island in 1997, that contain data on migrants. The OIA surveys are the most recent and comprehensive data available that identify and describe FSM, RMI, and Palauan migrants in Guam, Hawaii, and the CNMI. We did not attempt to verify the accuracy of any of these data. The survey data appear to be an undercount of migrant population totals (see app. II). We discussed the strengths and weaknesses of the methodology used to collect the survey data with the U.S. Bureau of the Census official who was involved with the surveys, as well as with U.S.
and Guam government and academic officials familiar with the methodology; all agreed that the survey methodology is subject to undercounting. Further, we found several specific instances during our review that indicated that the data may indeed be an undercount. In addition, the survey data we are using are primarily from 1997 and 1998, and the level of migration to these three U.S. island areas since that time is unknown. More current data from the U.S. 2000 census conducted by the U.S. Bureau of the Census that will identify the number and some characteristics of FSM, RMI, and Palauan migrants living in the three U.S. island locations were unavailable; these data, along with data on Pacific islander migrants on the U.S. mainland, are due to be released in late 2001. We began our work on this objective by reviewing the language contained in the Compacts’ enabling legislation addressing migrant impact. To then determine the amount of total Compact impact estimated by the governments of Guam, Hawaii, and the CNMI, we collected and reviewed the impact estimates prepared by each of these three locations. In most cases, estimates were either prepared specifically for a certain year and provided detailed information for particular areas, such as the health and education sectors, or were prorated for a certain year based on other years’ estimates when detailed calculations for that year were not prepared. In a few cases, estimates were prepared that covered multiple years and were not tied to a specific year. Because of this, we were unable to convert the impact estimate totals into constant dollars. We identified impact estimate figures and all available supporting data for 1986-2000 for Guam, Hawaii, and the CNMI. We reviewed the estimates and the methodologies used to derive them with government officials who work in the areas of health, education, and public safety in all three locations, as well as U.S. government officials from OIA and the Department of State. We discussed impact with additional parties, including the governors of Guam and the CNMI and their staff, and staff from the governor’s office in Hawaii; FSM and RMI migrant community representatives; and private sector officials. We reviewed fiscal year 2000 budget figures for Guam, Hawaii, and the CNMI to identify the proportion of those figures represented by estimated impact amounts for that year. We also reviewed OIA’s annual impact reports for 1989 and 1996-2001 and the assessments of the three locations’ impact estimates contained therein. To review the impact of migration to U.S. areas on the FSM and the RMI, we held discussions with senior FSM and RMI government officials and reviewed FSM and RMI census documents. To review what actions have been taken that address impact, we reviewed the Compacts’ enabling legislation language regarding the authorization of appropriations to cover impact costs, as well as the Compacts’ language on the ability to place nondiscriminatory limitations on the length of stay in U.S. territories of certain migrants from the Freely Associated States (the FSM, the RMI, and Palau). We then identified all OIA technical assistance funding and specific legislative appropriations provided to Guam, Hawaii, and the CNMI to cover estimated impact costs. We discussed the amounts and how this funding has been spent with OIA, Guam, and CNMI government officials. We visited select capital improvement projects (gyms, roads, water projects, etc.) in Guam that have been supported with impact compensation funds.
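Where the methodology above notes that some impact estimates were prorated for years without detailed calculations, a simple interpolation illustrates the idea. The interpolation rule and the dollar figures below are illustrative assumptions; the report does not specify the exact formula each government used.

```python
# A minimal sketch of prorating an impact estimate for a year that lacks
# a detailed calculation, by interpolating between surrounding years.

def prorate(known: dict[int, float], year: int) -> float:
    """Linearly interpolate an estimate for a year without one."""
    if year in known:
        return known[year]
    earlier = max(y for y in known if y < year)
    later = min(y for y in known if y > year)
    weight = (year - earlier) / (later - earlier)
    return known[earlier] + weight * (known[later] - known[earlier])

# Hypothetical example: detailed estimates (in $ millions) exist only for
# 1996 and 2000; the 1998 figure is prorated from them.
estimates = {1996: 20.0, 2000: 31.4}
print(f"Prorated 1998 estimate: ${prorate(estimates, 1998):.1f} million")  # -> $25.7 million
```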
To review the implementation of nondiscriminatory limitations, we reviewed the September 2000 regulations on this issue put forth by the U.S. Immigration and Naturalization Service (INS). We then discussed the terms, enforceability, and potential impact of these regulations with INS officials in Washington, D.C., and Guam, as well as with Guam government officials. We also discussed the possibility of such limitations with officials from the CNMI government. To assess what actions have been taken by OIA to identify impact and communicate this impact to the Congress, we first reviewed the Compacts’ impact reporting requirement contained in the Compacts’ enabling legislation. We then collected and reviewed all available OIA impact reports, which included reports for 1989 and 1996-2001, as well as OIA’s 1994 guidelines for preparing impact estimates. We discussed the process for preparing, as well as the substance of, the reports and guidelines with the responsible OIA official. Such discussions covered issues such as the reasons for not issuing annual reports for each year following Compact implementation and the need to subtract a “baseline” population from impact estimates. We also discussed OIA’s reports with government officials from Guam, Hawaii, and the CNMI. To review how changes in the Compact’s economic assistance might affect migration, we reviewed development planning documents prepared by the FSM and the RMI in 2000 regarding population and migration policies. Further, we solicited views regarding possible changes in the Compact from senior officials from the FSM, the RMI, Guam, Hawaii, the CNMI, OIA, INS, and State, as well as educators, health professionals, business representatives, and the migrant communities. We conducted our work from October 2000 through June 2001 in accordance with generally accepted government auditing standards. The methodology used in the Department of the Interior’s Office of Insular Affairs (OIA) surveys to count the number and characteristics of Micronesians living in U.S. island areas, the “snowball” approach, results in an undercount of the actual migrant population from the Freely Associated States (FAS). While the OIA surveys captured many of the FAS migrants in Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands (CNMI), our analysis and review of the data found several illustrations of how the surveys likely undercounted the actual number of FAS migrants in U.S. island destinations. In order to count and characterize Micronesian migrants in Guam, Hawaii, and the CNMI between the 1990 and 2000 U.S. population censuses, OIA utilized the services of U.S. Census Bureau staff to survey U.S. island area Micronesian migrant populations in 1992, 1993, 1997, and 1998. The Census official leading the survey used a survey tool referred to as the “snowball” method of surveying special populations. The OIA survey administrator selected and trained FAS migrants, who had received at least high school diplomas and had passed special tests, to serve as “enumerators” to collect data on other migrants in each U.S. area. In Guam, Hawaii, and the CNMI, enumerators from each of the FAS countries identified and interviewed all migrants they knew of from their own countries, then asked these interviewees to identify all migrants from their home country they knew of living in the area, continuing on in this manner until no “new” migrants were identified. For example, Marshallese enumerators only interviewed Marshallese migrants.
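The enumeration procedure just described is, in effect, a breadth-first traversal of a referral network. A minimal sketch follows; the referral graph is hypothetical, since the real surveys worked from interviews rather than data files.

```python
# Illustrative "snowball" enumeration: interview known migrants, add the
# migrants they name, and repeat until no new names surface.

from collections import deque

def snowball(seeds: set[str], referrals: dict[str, set[str]]) -> set[str]:
    """Breadth-first enumeration over a referral network."""
    counted: set[str] = set()
    queue = deque(seeds)
    while queue:
        person = queue.popleft()
        if person in counted:
            continue
        counted.add(person)  # "interview" this migrant
        queue.extend(referrals.get(person, set()) - counted)  # names they provide
    return counted

# Hypothetical network: E names others but is named by no one reachable
# from the seed enumerator, so E is never counted.
referrals = {"A": {"B", "C"}, "B": {"C", "D"}, "C": set(), "D": set(), "E": {"A"}}
print(sorted(snowball({"A"}, referrals)))  # -> ['A', 'B', 'C', 'D']
```

Anyone not named by someone already reached is never counted, which is exactly the undercounting mechanism discussed below.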
While the goal of the OIA surveys was to identify 100 percent of the migrants from each FAS nation living in the three U.S. areas, the Census Bureau official involved with the surveys acknowledged that a snowball count inevitably yields less than 100 percent of the actual population. Of note, the snowball data represent a “snapshot” of the FAS migrant communities living in Guam, Hawaii, and the CNMI at the time of the surveys. The data do not represent all FAS migrants who ever lived in a U.S. island area, as some of these migrants may have moved elsewhere by the time of the survey and may have different characteristics from migrants who remained in U.S. areas. Experts whom we interviewed agreed that this snowball methodology is the most appropriate strategy to enumerate FAS migrants living in U.S. areas in the Pacific. The snowball methodology generally yields higher-quality information than a traditional census and is reportedly less expensive. The advantages of the snowball methodology include: (1) distinguishing FAS subgroups from the larger population (as well as from one another); (2) providing the ability to shape the survey instrument to obtain desired information; and (3) minimizing the extent to which ethnic/racial bias and language barriers undermine the quality of the survey, since the migrant enumerators were of the same ethnicity as the migrants they interviewed. However, the snowball method misses some individuals who are not connected or networked into the mainstream migrant communities. Thus, it may miss some of those living in remote locations or islands, or in areas with political, economic, or racial tensions. Fears on Guam and Hawaii about being deported may have led some migrants either not to participate at all or not to fully disclose their personal information, according to interviews with FAS migrants in Guam and Hawaii. The snowball approach may also miss transient migrants as well as some who have assimilated into the U.S. areas. Based on our review of island government administrative data and interviews with migrants, we found the following examples suggesting that OIA’s survey data undercount the actual population of FAS migrants in U.S. areas: While OIA’s 1998 census identified 92 Marshallese and their children living in the CNMI, a Marshallese migrant we met with in the CNMI in 2000 showed us a list of Marshallese families, which she explained totaled 260 persons. While there is a 2-year difference in the data, the population of RMI migrants is very stable, with few people coming or going in recent years, according to this Marshallese community representative. One of the Marshallese migrants living in Hawaii who assisted with OIA’s 1997 survey of FAS migrants estimated that the survey missed around 15 percent of the Marshallese population living on the Hawaiian island of Oahu. The OIA surveys report a smaller number of FAS students enrolled in public schools (kindergarten through grade 12) in each of the three U.S. areas than do the administrative data provided to us by each school district.
An analysis of the data shows the following: (1) in 1997, OIA counted only 1,205 ethnic FAS students in Guam, whereas Guam counted 3,009 ethnic FAS students; (2) in 1998, OIA counted only 422 FAS-born students in the CNMI, whereas the CNMI counted 575 FAS-born students; and (3) in 1997, OIA counted only 1,054 FAS-born students in Hawaii, whereas Hawaii’s Department of Education counted 1,283 FAS-born students in its English as a Second Language program alone. The possibility of an undercount on Guam is also illustrated by the discrepancies in the numbers of Palauans living on that island in 1997. An Ernst & Young report on the impact of Micronesian migration to Guam, which compared the OIA 1997 survey with the 1995 census of Palauans on Guam, found 1,716 fewer Palauans in Guam in 1997 than in 1995 (a drop from 2,276 to 560 persons). Moreover, the report noted that confusion in the Palauan community as to why a survey was being conducted in 1997 (since a Palauan-only census was administered just 2 years prior in the form of the 1995 Census of Palauans on Guam) may have led to an undercount of Palauans on Guam in the OIA 1997 survey. The following are GAO’s comments on the Department of the Interior’s letter dated September 7, 2001. 1. The report number has been changed from GAO-01-1028 to GAO-02-40. 2. We do not believe that it has become more difficult each year to measure the impact of Compact migration; in fact, the task may have become easier. For example, a Guam official told us that when the Compact was implemented, the territory could not quantify impact from available data; since that time, Guam agencies have collected the necessary data on FAS migrants. Further, the CNMI government has reported that it is now taking action to better review data in order to provide more specific information regarding FAS migrants and their use of public services. However, it is worth noting that over time the Department of the Interior will need to define the eligible Compact impact population as U.S.-born children of FAS migrants begin to have children of their own. The following are GAO’s comments on the Department of State’s letter dated August 22, 2001. 1. The report recognizes that machine-readable passports would facilitate entry of travelers, determinations of admissibility, and links to criminal databases. It is worth pointing out that criminals have been identified upon entry into the United States using other means. For example, the U.S. embassy in the RMI is providing the INS with a list of convicted RMI felons for determining admissibility at the port of entry. 2. This report examined Hawaii, Guam, and the CNMI because these destinations received the initial influx of Compact migrants, comprehensive surveys of these migrants had been conducted in each location, and impact compensation is authorized for each of the three locations. As acknowledged by the Department of State in its letter, we intend to undertake a review of migration to the mainland following publication of this report. 3. We maintain that our comments on p. 8 of the report are accurate: While FAS migrants are classified as nonimmigrants, their behavior is often like that of immigrants in that they can stay indefinitely in the United States with few restrictions. Further, the habitual residence restrictions cited in the Department of State letter only apply to Guam. The following are GAO’s comments on the letter from the government of the Federated States of Micronesia dated August 31, 2001. 1.
We agree that the Compact’s migration provisions strengthen ties between the United States and the FSM and the RMI. As we discussed on p. 8 of this report, at the time of the Compact negotiations, the Compact negotiator stated that the migration rights were to strengthen ties between the United States and the Freely Associated States (FAS). The current Compact negotiator has reiterated this point, referring to the migration rights not only as an important "safety valve" for the FAS population, but also as the "glue" between the nations. The negotiator further stated that because the children of FAS migrants born in the United States are U.S. citizens, ties between the United States and the FAS are further deepened. 2. As we reported on p. 16 of this report, the governments of Guam, Hawaii, and the CNMI all acknowledge that FAS migrants have had positive impacts such as contributing to the tax base and filling employment needs. However, the Compact enabling legislation specifically requires reports only on the adverse impact of the Compact. As noted on p. 16 of this report, the CNMI's impact estimate for 1996 quantified $3.6 million in positive benefits from the taxes paid by FAS migrants, compared with their reported cost of $11 million that year. The Congress authorized compensation in the Compact's enabling legislation for U.S. island areas that may have experienced increased demands on their educational and social services by Compact migrants, but did not include compensation for other impact costs, such as infrastructure. Consequently, the U.S. island areas are not submitting claims for total costs in their impact estimates. In addition, available data on investment and savings reported by FAS migrants in Department of the Interior surveys show that few migrants invest and save. For example, of 2,053 Compact FSM households surveyed, only 15 reported any interest, dividend, or net rental income in 1997 in Guam and Hawaii or in 1998 in the CNMI. Reviewing the impact of Compact expenditures on U.S. companies is not within the scope of this review. 3. We believe our description of the number of FAS migrants and the income levels of these households is neither damning nor inaccurate. Surveys have identified about 14,000 Compact FAS migrants in Guam, Hawaii, and the CNMI. FAS migrants reported their income in surveys conducted in Guam, Hawaii, and the CNMI. In total, about 61 percent of FAS migrants lived in households with income levels below the poverty level, based on the U.S. poverty definition. 4. We do not suggest that FSM migrants “choose” to live in poverty, but report that their employment has been primarily in private sector jobs requiring few skills and paying low wages. 5. Our report does not contain data on the number of FAS migrants on the U.S. mainland. As reported on p. 11 of this report, according to OIA surveys, FAS migrants accounted for about 5 percent of Guam’s total population in 1997 and around 4 percent of the CNMI’s total population in 1998. These migrants accounted for 0.5 percent of Hawaii’s population in 1997. As noted on p. 11, we believe that these figures underestimated the number of FAS migrants in these three U.S. locations. 6. Issues regarding U.S. employer recruitment were not raised by island government officials or FAS migrants concerning Guam, Hawaii, or the CNMI. We recognize that FSM officials have raised this issue regarding U.S. mainland employers who are recruiting FSM citizens for work. 7.
The number of FAS citizens who return to the islands after schooling or a period of employment is not known. The FSM and RMI governments were not able to provide such data, although one FSM government official estimated that perhaps one-fourth of migrants return. 8. When reporting on OIA migrant survey information, we identified the FSM, the RMI, and Palau separately where notable data differences existed. For example, see table 1 on p. 11 and table 2 on p. 14. With respect to Compact impact estimates, we relied upon data provided to us by Guam, Hawaii, and the CNMI. These data often combined all FAS migrants, making it impossible for us to report on the impact of the three FAS nations separately. Guam, Hawaii, and the CNMI are eligible to receive Compact impact compensation for the impact of migrants from all three FAS nations. 9. The issue of FAS citizens who enlisted in the U.S. armed forces will be addressed in a separate GAO report on Compact defense and security issues. This report will be issued before the end of 2001. 10. As we reported on p. 13, the pursuit of educational opportunities was one of the motivations for migration, according to migrants and FAS government officials. OIA migrant surveys for Guam, Hawaii, and the CNMI did not ask FAS migrants whether education was a reason for migration, and thus contain no data on this issue. 11. The report’s recommendation was not drafted with the intent to end migration. The recommendation has two purposes. In addition to providing an option that could reduce some of the incentives to migrate, the recommendation also recognizes that improvements in the health and education systems in FAS nations could reduce the impact of migration on the receiving areas. 12. Consideration of alternatives available to the U.S. government to increase opportunities and improve conditions in FAS nations was beyond the scope of this report. 13. Our report does not contend that FAS migrants have introduced (i.e., provided the first case of) any communicable disease into the United States. However, Hawaii has repeatedly emphasized public health concerns regarding FAS migrants. According to a report prepared by Hawaii’s Department of Health and included in Hawaii’s January 31, 2001, Compact impact report, the FSM has the highest prevalence of Hansen’s Disease (leprosy) in the world, at 35 cases per 10,000 people. For 1992 through 1999, 151 cases of this disease were detected among Marshall Islanders and Micronesians in Hawaii. Hawaii also identified cases of tuberculosis, pertussis, and hepatitis A occurring within the FAS population communities in the state. In addition, Guam and CNMI health officials raised public health concerns regarding FAS migrants. 14. We believe this report is objective and fair. It reports on the migration experience under the Compact of Free Association, relying on the best available data. The following are GAO’s comments on the letter from the government of the Republic of the Marshall Islands dated September 4, 2001. 1. This GAO report is one in a series of reviews of U.S. relations with the FSM and the RMI under the Compact of Free Association. Previously, we have published an assessment of the use, effectiveness, and accountability of U.S. Compact economic assistance. In addition to this migration report, forthcoming reviews cover the use, effectiveness, and accountability of U.S. domestic programs extended to both nations, as well as defense and security relations.
Taken together, these reports will illustrate the larger context of the free association relationship between the three countries. 2. The determination of poverty levels is based on the U.S. nationwide standard as established by the U.S. Bureau of the Census. These levels are adjusted annually for family size. The measure of poverty is required for statistical purposes by the U.S. Office of Management and Budget in Statistical Policy Directive No. 14. We have added a footnote on p. 14, stating that poverty levels are based on the single U.S. standard discussed previously. Regarding education, our report does not state that FAS migrants are “uneducated.” Instead, we report that migrants have not been highly educated, based on OIA migrant survey data. According to U.S., Guam, and CNMI government reports, FAS migrants have lower educational levels than the overall population in Guam, Hawaii, and the CNMI. For example, 55 percent of Compact migrants over the age of 25 had high school degrees in Hawaii in 1997, while 84 percent of the total Hawaiian population over the age of 25 had a high school degree. 3. We have added text on p. 23 to state that the RMI government does not have an official policy regarding migration. 4. We have modified the title of the report in recognition that “foreign relations” is a more appropriate way to classify the migration relationship between the United States and FAS nations. We continue to believe that the Compact economic and program assistance is most accurately referred to as “foreign assistance.” 5. We made the suggested change. 6. We made the suggested footnote change. 7. We made the suggested footnote change. 8. We did not make this change. The report states that migration is viewed as a “safety valve” by government officials; it does not state that this view constitutes a matter of official government policy. 9. While we did not alter this particular sentence, on p. 14 of the report we added text to note that, in Hawaii, a higher percentage of Compact RMI migrants reported that they migrated to Hawaii for medical reasons (10 percent) than reported moving for employment (7 percent). However, we note that 43 percent of Compact Marshallese surveyed chose “other” as their reason for migrating. 10. We have made some of the suggested changes. The following are GAO’s comments on the letter from the government of Guam dated August 30, 2001. 1. The data we present on p. 17 of the report regarding the impact of FAS migrants on Guam, Hawaii, and the CNMI include public safety costs. We acknowledge that FAS migrants create impact on the criminal justice system in Guam. For example, in Guam, where FAS migrants make up about 5 percent of the population, 12 percent of the cost of the corrections system was attributed to FAS migrants in fiscal year 2000. FAS migrants represented 26 percent of all convictions in fiscal year 1999/2000 in Guam. We did not separately discuss this area of impact in our report because it is smaller than the impact reported on the health and education systems. For example, for fiscal year 2000, public safety costs estimated by Guam were 6 percent of the total impact amount, compared with 54 percent for education and 40 percent for health and welfare. Reviewing the impact of immigration to Guam from countries other than FAS nations was beyond the scope of this report. 2. The issues of economic development and employment opportunities in the FSM and the RMI have been addressed in a prior GAO report.
In this review, we reported that the considerable funds provided to the FSM and the RMI under the Compact had resulted in little economic development. See Foreign Assistance: U.S. Funds to Two Micronesian Nations Had Little Impact on Economic Development (GAO/NSIAD-00-216, Sept. 22, 2000). We have not assessed the extent to which long-term stability in FAS nations can be created in the future through economic development and employment opportunities. 3. We maintain that the report’s recommendation provides an option that could reduce some of the incentives to migrate. For example, targeted investments for dialysis treatments would allow some FAS citizens to remain at home instead of moving to U.S. locations. Further, the recommendation also recognizes that improvements in the health and education systems in FAS nations could reduce the impact of migration on the receiving areas. For example, improved immunization in FAS nations could reduce the public health concerns currently voiced by Guam, Hawaii, and the CNMI with regard to FAS migrants. 4. OIA migrant survey data have shown that employment has been a key reason for FAS migration to Guam, Hawaii, and the CNMI since the Compacts were implemented. However, we noted on p. 15 of our report that “…Guam government officials told us that as Guam’s unemployment rate has reached about 15 percent in recent years, the demand for FAS workers may have decreased.” This development does not necessarily mean, however, that Guam is now viewed in FAS nations as a location without employment opportunities. For example, an elected Guam official pointed out to us that as difficult as the employment situation may be in Guam, conditions are worse in the FSM state of Chuuk (the poorest state in the FSM and the birthplace of the largest group of FAS migrants in Guam). 5. As Guam rightly states in its comments, our report does not address how the two options available in the Compact and its enabling legislation should be used to address the impact caused by the migration from FAS. As our report points out, however, the Compact’s enabling legislation does not require compensation for impact costs. Rather, it says that the Congress will act “sympathetically and expeditiously” to redress adverse consequences. As such, it is at the Congress’ discretion to compensate for Compact impacts. Similarly, since the INS recently instituted regulations on habitual residents in the territories, we felt it was premature to recommend further regulations until the results of the new regulations can be assessed. 6. We added language on p. 25 of this report noting that the Guam government believes that restricting the use of compensation funding to capital improvement projects does not target the money where it could be best used. The following are GAO’s comments on the letter from the government of the State of Hawaii dated August 31, 2001. 1. We recognize in our report that FAS migrant eligibility for Medicaid is an important issue for the state of Hawaii. As Hawaii noted in its letter, the Congress recently reinstated FAS citizen eligibility for federal housing programs; a similar reinstatement of eligibility for federal Medicaid benefits would require a congressional policy decision. 2. We have not undertaken an analysis to determine whether there might be sufficient potential Compact health sector funds to pay FAS debts to U.S. health care facilities or what the impact of such payments would be on the FAS health care systems. 3. 
We have discussed the possibility of requiring health screenings with Department of State officials. They informed us that such screenings are not feasible, as the Department does not have sufficient resources to administer such a system in FAS countries. Further, INS officials noted that requiring such screenings would be unfair treatment of FAS migrants, as other nonimmigrants are not required to undergo health screenings. While we recognize that requiring health screenings would address a key concern for all three U.S. locations, we believe that the likelihood of the U.S. government implementing such a recommendation is low. 4. Financial compensation is not required under the Compact or its enabling legislation, but can be made at the discretion of the Congress. 5. While we agree that Hawaii has developed its own methodology for calculating impact, we note that Hawaii officials have told us that the state’s Department of Education includes pre-1986 migrants in its Compact impact estimates. 6. On p. 1 of our report, we have better highlighted the fact that Hawaii is a state with a different status from Guam and the CNMI. We then explain in a footnote that we chose the term “U.S. island areas” to refer collectively to a U.S. state, a U.S. territory, and a U.S. commonwealth. We view this term as a neutral, concise reference to the three locations. Further, we list “Guam, Hawaii, and the CNMI” in descending order based upon the number of FAS Compact migrants each location has received. 7. We have retained the footnote as is. We do not believe that Interior’s recalculation of Compact impact estimates for 1 year out of 15 merits inclusion in the body of the report. Further, we are not convinced that OIA’s approach to adjusting the data was valid. 8. We have modified the text to note that Hawaii has estimated $86 million in Compact impact costs, but has received no compensation to date. 9. We agree, and have recommended in a previous report that future Compact economic assistance include specific measures (including grant requirements) that will ensure the effectiveness of, and accountability over, future spending. See Foreign Assistance: U.S. Funds to Two Micronesian Nations Had Little Impact on Economic Development (GAO/NSIAD-00-216, Sept. 22, 2000). The following are GAO’s comments on the letter from the government of the Commonwealth of the Northern Mariana Islands dated August 29, 2001. 1. We added text in footnote 21 on p. 15, stating: “Further, CNMI government officials have reported that it is far more cost effective to hire a FAS citizen, given the immigration filing expenses and other costs associated with hiring other foreign workers.” 2. The report does not suggest that FAS migrant health costs are unimportant in the CNMI. In fact, on p. 18, we noted that health costs were the greatest impact for the CNMI in 2000. 3. We added language on p. 25 of the report, noting that the CNMI government believes that restricting the use of compensation funding to capital improvement projects does not target the money where it could be best used. 4. The issues of economic development and employment opportunities in the FSM and the RMI have been addressed in a prior GAO report. In this review, we reported that the considerable funds provided to the FSM and the RMI under the Compact had resulted in little economic development. See Foreign Assistance: U.S. Funds to Two Micronesian Nations Had Little Impact on Economic Development (GAO/NSIAD-00-216, Sept. 22, 2000).
We have not assessed the extent to which long-term stability in FAS nations can be created in the future through economic development and employment opportunities. In addition to those named above, Tama Weinberg, Ron Schwenn, Mary Moutsos, and Rona H. Mendelsohn made key contributions to this report. | Migration from the Federated States of Micronesia, the Republic of the Marshall Islands, and Palau has had a significant impact on Guam, Hawaii, and the Commonwealth of the Northern Mariana Islands (CNMI). The health and education needs of these migrants have particularly affected the budgetary resources of Guam and the CNMI. The budgetary impact on Hawaii is smaller but is expected to grow as Hawaii absorbs health care costs once covered by the U.S. government. Public health is an important concern for all three U.S. island areas. Migrants from the region with limited financial means are able to enter the United States with few restrictions, and U.S. island areas are absorbing much of the health care costs of this population. Furthermore, Guam, Hawaii, and the CNMI can be expected to continue to experience migration as long as weak economic conditions persist in Micronesia and the Marshall Islands. Targeting future U.S. assistance to Micronesia and the Marshall Islands for education and health purposes could reduce some of the motivation to migrate. Improvements in migrant health and education status might be expected to reduce immigration to U.S. destinations. |
Air ambulances are an integral part of U.S. emergency medical systems, primarily transporting patients between hospitals, but also providing transport from accident scenes or for organs, medical supplies, and specialty medical teams. Air ambulances may be helicopters or fixed-wing aircraft. Helicopter air ambulances provide on-scene responses and much of the shorter-distance hospital-to-hospital transport, while fixed-wing aircraft are used mainly for longer facility-to-facility transport. (See fig. 1.) Helicopter air ambulances make up about 74 percent of the air ambulance fleet and, unlike fixed-wing aircraft, do not always operate under the direction of air traffic controllers. They also often operate in challenging conditions, flying, for example, at night during inclement weather and using makeshift landing zones at remote sites. My testimony today focuses on the safety of helicopter air ambulance operations. Air ambulance operations can take many different forms but are generally one of two business models—hospital-based or stand-alone. In a hospital-based model, a hospital typically provides the medical services and staff and contracts with an aviation services provider for pilots, mechanics, and aircraft. The aviation services provider also holds the FAA operating certificate. The hospital pays the operator for services supplied. In a stand-alone (independent or community-based) model, an independent operator sets up a base in a community and serves various facilities and localities. Typically, the operator holds the FAA operating certificate and either employs both the medical and flight crews or contracts with an aviation services provider for them. This stand-alone model carries more financial risk for the operator because revenues depend solely on payments for transporting patients. Some operators provide both hospital-based and stand-alone services and may have bases located over wide geographic areas. Regardless of the business model employed, most air ambulances—except government and military aircraft—must operate under rules specified in Part 135 of Title 14 of the Code of Federal Regulations when patients are on board and may operate under rules specified in Part 91 when patients are not present. As a result, different legs of air ambulance missions may be flown under different rules. However, some operators fly under Part 135 regardless of whether patients are on board the aircraft. (See fig. 2.) Flight rules under Parts 91 and 135 differ in two key areas—(1) minimum requirements for weather and visibility and (2) rest requirements for pilots. The Part 135 requirements are more stringent. According to industry experts and observers, the air ambulance industry has grown, but data limitations make it difficult to determine by how much. Data for several years on the number of aircraft and number of operating locations are available in a database maintained by the Calspan-University of Buffalo Research Center (CUBRC) in alliance with the Association of Air Medical Services (AAMS). For 2003, the first year for which data are available, AAMS members reported a total of 545 helicopters stationed at 472 bases (airports, hospitals, and helipads). By 2008, the number of helicopters listed in the database had grown to 840, an increase of 54 percent, and the number of bases had grown to 699, an increase of 48 percent (see fig. 3).
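As a quick arithmetic check of the growth rates just cited, using the CUBRC/AAMS counts above (a sketch, not part of the original analysis):

# Percent growth in helicopters and bases from 2003 to 2008 (CUBRC/AAMS counts).
helicopters_2003, helicopters_2008 = 545, 840
bases_2003, bases_2008 = 472, 699

def percent_growth(start, end):
    return (end - start) / start * 100

print(round(percent_growth(helicopters_2003, helicopters_2008)))  # 54
print(round(percent_growth(bases_2003, bases_2008)))              # 48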
While a database official said that the data partly reflect the use of a revised criterion that allowed for the inclusion of more helicopters and for improved reporting since the database was established, the increase also reflects actual growth. Data are less readily available on whether this increased number of aircraft translates into an increased number of operating hours. FAA does not collect flight-hour data from air ambulance operators. Unlike scheduled air carriers, which are required to report flight hours, air ambulance operators and other types of on-demand operators regulated under Part 135 are not required to report flight activity data to FAA or the Department of Transportation. Historically, FAA estimated the number of flight hours using responses to its annual General Aviation and Air Taxi and Avionics (GAATAA) survey. These estimates may not be reliable, however, because the survey is based on a sample of aircraft owners and response rates have historically been low. According to the government and industry officials we interviewed and the literature we reviewed, most of the air ambulance industry’s growth has been in the stand-alone (independent) provider business model. Testimony from industry stakeholders recently submitted to NTSB further identifies the stand-alone provider business model as the current area of industry growth. The growth in the stand-alone provider business model has led to increased competition in some locales. According to the officials we interviewed and others who have studied the industry, the increase in the stand-alone provider business model is linked to the development, mandated in 1997, of a Medicare fee schedule for ambulance transports, which has increased the potential for profit making. This fee schedule was implemented gradually starting in 2002, and since January 2006, 100 percent of payments for air ambulance services have been made under the fee schedule. Because the fee schedule has created the potential for higher and more certain revenues, competition has increased in certain areas, according to many of our sources. Increased competition can lead to potentially unsafe practices, industry experts said. Although we were unable to determine how widespread these activities are, experts cited the potential for such practices, including helicopter shopping and call jumping. Helicopter shopping refers to calling a series of operators until an operator agrees to take a flight assignment, without telling the subsequently called operators why the previously called operators declined the flight. This practice can be unsafe if the operator that accepts the flight assignment is not aware of all of the facts surrounding the assignment. Call jumping occurs when an air ambulance operator responds to a scene without being dispatched to it or when multiple operators are summoned to an accident scene. This situation is potentially dangerous because the aircraft are all operating in the same uncontrolled airspace—often at night or in marginal weather conditions—increasing the risk of a midair collision or other accident. From 1998 through 2008, the air ambulance industry averaged 13 accidents per year, according to NTSB data. The annual number of air ambulance accidents increased from 8 in 1998 to a high of 19 in 2003. Since 2003, the number of accidents has slightly declined, fluctuating between 11 and 15 accidents per year.
While the total number of air ambulance accidents peaked in 2003, the number of fatal accidents peaked in 2008, when 9 fatal accidents occurred (see fig. 4). Of 141 accidents that occurred from 1998 to 2008, 48 accidents resulted in the deaths of 128 people. From 1998 through 2007, the air ambulance industry averaged 10 fatalities per year. The number of overall fatalities increased sharply in 2008, however, to 29. Both the spike in the number of fatal accidents in 2008 and the overall number of accidents are a cause for concern. However, given the apparent growth in the industry, the increase in the number of accidents may not indicate that the industry’s safety record, on the whole, has worsened. More specifically, without actual data on the number of hours flown, no accident rate can be accurately calculated. Because an accurate accident rate is important to a complete understanding of the industry’s safety, we recommended in 2007 that FAA collect data on flight activity, including flight hours. In response, FAA has surveyed all helicopter air ambulance operators to collect flight activity data. However, to date, FAA’s survey response rate is low, raising questions about whether this information can serve as an accurate measure or indicator of flight activity. In the absence of actual flight activity data, others have attempted to estimate flight hours and accident rates for the industry. For example, an Air Medical Physician Association (AMPA) study estimated annual flight hours for the air medical industry through an operator survey, determining that the overall air medical helicopter accident rate has dropped slightly in recent years to approximately 3 accidents per 100,000 flight hours. However, the study’s preliminary estimates for 2008 indicate that the fatal accident rate more than tripled over the 2007 rate, increasing from 0.54 fatal accidents per 100,000 flight hours in 2007 to 1.8 fatal accidents per 100,000 flight hours in 2008. Data on the causes and factors underlying air ambulance accidents indicate that while the majority of accidents are caused by pilot error, a number of risks, including nighttime operations, adverse weather conditions, and flights to remote sites, also contribute to accidents. NTSB data on helicopter accidents occurring from 1998 through 2008 show that pilot error was deemed the probable cause in more than 70 percent of air ambulance accidents, while factors related to flight environment (such as light, weather, and terrain) contributed to 54 percent of all accidents. Nighttime accidents for air ambulance helicopters were prevalent, and air ambulance accidents tended to be more severe when they occurred at night than during the day. Similarly, air ambulance accidents were often associated with adverse weather conditions (e.g., wind gusts and fog). Finally, flying to remote sites may further expose the crew to other risks associated with unfamiliar topography and makeshift landing sites. In 2007, we reported that the air ambulance industry’s response to the higher number of accidents had taken a variety of forms, including research into accident causes and training. Since then, the industry has continued its focus on improving safety by, for example, initiating efforts to develop an industry risk profile and share weather information. In July 2008, for instance, AAMS convened a conference (summit) on safety to encourage open communication between the medical and aviation sectors of the industry.
AAMS plans to issue a summary of the summit’s proceedings that will include recommended next steps. Table 1 highlights examples of recent industry initiatives. In 2007, we reported that FAA, the primary federal agency overseeing air ambulance operators, has issued guidance, expanded inspection resources, and collaborated with the industry to reduce the number of air ambulance accidents. Since then, FAA has taken additional steps to improve air ambulance safety, including the following: Enhanced oversight to better reflect the unique nature of the industry: FAA has changed its oversight to reflect the varying sizes of operators. Specifically, large operators with 25 or more helicopters dedicated to air medical flights are now assigned to dedicated FAA Certificate Management Teams (CMT)—groups of inspectors that are assigned to one air ambulance operator. These CMTs range in size from 4 inspectors for Keystone Helicopter Corporation, which has a fleet of 38 helicopters, to 24 inspectors for Air Methods, which has a fleet of 322 helicopters. Additionally, CMTs use a data- and risk-based process to target inspections to areas that pose greater safety risk. For operators of all sizes, FAA has asked inspectors to consider using the Surveillance Priority Index tool, which can be used to identify an operator’s most pressing safety hazards. In addition, FAA is hiring more aviation safety inspectors with rotorcraft experience. Provided technical resources: FAA has revised its guidance for the use of night vision goggles (NVG) and established a cadre of NVG national resource inspectors. FAA has also developed technical standards for the manufacture of helicopter terrain awareness and warning systems for air medical helicopters. These standards articulate the minimum performance standards and documentation requirements that the technology must meet to obtain FAA approval. FAA also commissioned the development of an air ambulance weather tool, which provides weather assessments for the community. Launched accident mitigation program: Initiated in January 2009, this program provides guidance for inspectors of air ambulance operators, requiring them to ensure, among other things, that these operators have a process in place to facilitate safe operations, such as a risk assessment program. Revised minimum standards for weather and safe cruise altitudes: To enhance safety, FAA revised its minimum requirements for weather and safe cruise altitudes for helicopter air ambulances in November 2008. Specifically, FAA revised its specifications to require that if a patient is on board for a flight or flight segment and at least one of the flight segments is therefore subject to Part 135 rules, then all of the flight segments must be conducted within the revised weather minimums and above a minimum safe cruise altitude determined in preflight planning. Issued guidance on operational control: To help operators better assess risk, improve the flow of information before and during flights, and increase support for flight operations, FAA issued guidance to help air medical operators develop, implement, and integrate operations control centers and enhance operational control procedures. To date, FAA has opted not to use its rulemaking authority to require certain actions, relying instead on notices and guidance to encourage air ambulance operators to take them. FAA guidance and notices are not mandatory for air ambulance operators and are not subject to enforcement.
FAA officials told us that rulemaking is a time-consuming process that can take years to complete, hindering the agency’s ability to quickly respond to emerging issues. By issuing guidance rather than regulations, FAA has been able to respond quickly to concerns about air ambulance safety. However, we previously noted that FAA lacked information on the extent to which air ambulance operators were implementing the agency’s voluntary guidance and on the effect such guidance was having. Consequently, we recommended that FAA collect information on operators’ implementation of the voluntary guidance and evaluate the effectiveness of that guidance. In response, in January 2009, FAA directed safety inspectors to survey the air medical operators they oversee about their adoption of suggested practices, such as implementing risk assessment programs and developing operations control centers. According to the inspectors, most of the 74 operators surveyed said they had adopted these practices. Despite the actions taken by the industry and the federal government, 2008 was the deadliest year on record for the air ambulance industry. As a board member noted at the recent NTSB hearing on air ambulance safety, the recent accident record of the industry is unacceptable. Based on our body of work on aviation safety, including air ambulance safety; a review of the published literature; and interviews with government and industry officials, we have identified several potential strategies for improving air ambulance safety. Each of these strategies has merits and challenges, and we have not analyzed their benefits and costs. But, as the recent accident numbers show, additional efforts are warranted. Obtain complete and accurate data on air ambulance operations: As we reported in 2007, FAA lacks basic industry information, such as the number of flights and flight hours. In response to our prior recommendation that FAA collect flight activity data, FAA surveyed all helicopter air ambulance operators in 2008, but fewer than 40 percent responded, thereby raising questions about the reliability of the information collected. The low response rate also suggests that many operators will not provide this information unless they are required to do so. Until FAA obtains complete and reliable information from all air ambulance operators, it will be unable to gain a complete understanding of the industry and determine whether its efforts to improve industry safety are sufficient and accurately targeted. Increase use of safety technologies: We have previously reported that using appropriate technology and infrastructure can help improve aviation safety. For example, the development and installation of terrain awareness and warning systems on large passenger carriers have almost completely eliminated controlled flight into terrain accidents, particularly for aircraft equipped with this system. When we studied the air ambulance industry in 2006 and 2007, the most frequently cited helicopter-appropriate technology was night vision goggles. Additional safety technology has been developed or is in development that will help aircraft avoid cables and enhance terrain awareness for pilots, among other things. However, testimony submitted by industry stakeholders at NTSB’s February 2009 hearing on air ambulance safety indicated that the implementation of such technology has been slow. NTSB previously recommended that FAA require terrain awareness and warning systems on air ambulances. Proposed legislation (H.R.
1201) would also require FAA to complete a study within one year of the date of enactment on the feasibility of requiring flight data and cockpit voice recorders on new and existing air ambulances. Sustain recent efforts to improve air ambulance safety: Our past aviation safety work and anecdotal information on air ambulance accident trends suggest that the industry and federal government must sustain recent efforts to improve air ambulance safety. In 1988, after the number of accidents increased in the mid-1980s, NTSB published a study that examined air ambulance safety issues. The study contained 19 safety recommendations to FAA and others. FAA took action, including implementing the NTSB recommendations, and the number of air ambulance accidents declined in the years that immediately followed. However, as time passed, the number of accidents started to increase, peaking in 2003. This again triggered a flurry of government and industry actions. Similarly, FAA took steps to address runway incursions and overruns after the number and rate of incursions peaked in fiscal year 2001, but FAA’s efforts later waned, and the number and rate of incursions and overruns remained steady. Fully address NTSB recommendations: In 2006, NTSB published a special report focusing on the air ambulance industry, which included four recommendations to FAA to improve air ambulance safety. Specifically, NTSB called for FAA to (1) require that all flights with medical personnel on board be conducted in accordance with Part 135 regulations, (2) develop and implement flight risk evaluation programs, (3) require formalized dispatch and flight-following procedures, and (4) require terrain awareness and warning systems on aircraft. As of January 2009, FAA had sufficiently addressed only the recommendation to require formalized dispatch and flight-following procedures, according to NTSB. However, NTSB’s February 2009 air ambulance hearing highlighted the status of the NTSB recommendations, and major industry associations have said they agree in principle with the recommendations but would like to work with FAA and NTSB to adapt them to the industry’s circumstances and gain more flexibility. Proposed legislation (H.R. 1201) also would require most of the safety enhancements NTSB recommended. Adopt safety management systems within the air ambulance industry: Air operators rely on a number of protocols to help reduce the potential for poor or erroneous judgment, but evidence suggests that these protocols may be inconsistently implemented or followed in air ambulance operations. According to an FAA report on air ambulance accidents from 1998 through 2004, a lack of operational control (authority over initiating, conducting, and terminating a flight) and poor aeronautical decision making were significant factors contributing to these accidents. To combat such issues, FAA has been encouraging air ambulance operators to move toward adopting safety management systems, providing guidance, developing a generic flight risk assessment tool for operators, and requiring inspectors to promote the adoption of safety best practices. Clarify the role of states in overseeing air ambulance services: Air ambulance industry stakeholders disagree on the role that states should play in overseeing broader aspects of air medical operations. In particular, some industry stakeholders have advocated a greater role for states in regulating air ambulance services as part of their public health function.
Other industry stakeholders, however, oppose increased state oversight, noting, for example, that the Airline Deregulation Act explicitly prohibits states from regulating the price, route, or service of an air carrier. This legislation generally limits oversight at the state or local levels to the medical care and equipment provided by air ambulance services, although the extent of this oversight varies by state. Proposed legislation (H.R. 978) would recognize and clarify the authority of the states to regulate intrastate air ambulance services in accordance with their authority over public health. Determine the appropriate use of air ambulance services: According to a May 2007 article by two physicians, multiple organizations are concerned that air ambulance services are overused and misused. The article further notes concerns that decisions about where to transport a patient may be influenced by nonmedical reasons, such as insurance coverage or agreements with hospitals. Another industry expert has posited that excessive use of air ambulances may be unsafe and not beneficial for most patients, citing recent studies that conclude that few air transport patients benefited significantly compared with patients transported by ground and noting the recent increase in the number of air medical accidents. Other studies, however, have disagreed with this position, citing reductions in mortality achieved by using air ambulances to quickly transport critically injured patients. We provided a draft copy of this testimony to FAA for review and comment. FAA provided technical clarifications, which we incorporated as appropriate. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to questions from you or other Members of the Subcommittee. For further information on this statement, please contact Dr. Gerald L. Dillingham at (202) 512-2834 or [email protected]. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement. Individuals making key contributions to this testimony were Nikki Clowers, Assistant Director; Vashun Cole, Elizabeth Eisenstadt, Brooke Leary, and Pamela Vines. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Air ambulance transport is widely regarded as improving the chances of survival for trauma victims and other critical patients. However, recent increases in the number of air ambulance accidents have led to greater industry scrutiny by government agencies, the public, the media, and the industry itself. The National Transportation Safety Board (NTSB) and others have called on the Federal Aviation Administration (FAA), which provides safety oversight, to issue more stringent safety requirements for the industry. This testimony discusses (1) recent trends in the air ambulance industry with regard to its size, composition, and safety record; (2) recent industry and government efforts to improve air ambulance safety; and (3) potential strategies for improving air ambulance safety. This testimony is based primarily on GAO's February 2007 study on air ambulance safety (GAO-07-353).
To update and supplement this 2007 report, GAO analyzed the latest safety information from NTSB and FAA, reviewed published literature on the state of the air ambulance industry, and interviewed FAA officials and industry representatives. GAO provided a copy of the draft testimony statement to FAA. FAA provided technical comments, which GAO incorporated as appropriate. The air ambulance industry has increased in size, and concerns about its safety have grown in recent years. Available data suggest that from 2003 through 2008 the industry grew, most notably in the number of stand-alone (independent or community-based) as opposed to hospital-based operators, and that competition among operators increased. During this period, the number of air ambulance accidents remained at historical levels, fluctuating between 11 and 15 accidents per year, and in 2008, the number of fatal accidents peaked at 9. This accident record is cause for concern. However, a lack of reliable data on flight hours precludes calculation of the industry accident rate--a critical piece of information in determining whether the increased number of accidents reflects industry growth or a declining safety record. The air ambulance industry and FAA have acted to address accident trends and causes. For example, FAA enhanced its oversight to reflect the varying sizes of operators, provided technical resources to the industry, launched an accident mitigation program, and revised the minimum standards for weather and safe cruising altitudes that apply to air ambulance operations. Despite the actions to improve air ambulance safety, 2008 was the deadliest year on record for the industry. Through its work on aviation safety, including air ambulance safety; review of the published literature; and interviews with government and industry officials, GAO has identified several potential strategies for improving air ambulance safety, including the following: (1) Obtain complete and accurate data on air ambulance operations. (2) Increase the use of safety technologies. (3) Sustain recent efforts to improve air ambulance safety. (4) Fully address NTSB's recommendations. (5) Adopt safety management systems within the air ambulance industry. (6) Clarify the role of states in overseeing air medical services. (7) Determine the appropriate use of air ambulance services.
SSA provides assistance to people who qualify as disabled under two programs: (1) Disability Insurance (DI), which provides benefits to people who have worked and paid Social Security payroll taxes, and (2) Supplemental Security Income (SSI), which is an assistance program for people with limited income and resources who are blind, aged, or disabled. Currently, the disability determination process starts when a person first applies for DI or SSI disability benefits. To apply for benefits, he or she calls the national toll-free telephone number and is referred to a local SSA field office or visits or calls one of 1,300 local field offices. Claims representatives in field offices assist with the completion of claims, obtain detailed medical and vocational history, and screen nonmedical eligibility factors. Field office staff forward the claim to a DDS. At the DDS, medical evidence is developed by a disability examiner and a medical consultant; a final determination is made as to the existence of a medically determinable disability. The DDSs then send allowed claims to SSA field offices or SSA processing centers for payment and storage. Files for denied cases are retained in field offices, pending possible appeal. According to SSA, in part because of the numerous handoffs among staff involved in processing a disability claim, a claimant can wait, on average, between 78 and 94 days from the time of filing with SSA until receiving an initial claim decision notice—when in fact only 13 hours is actually spent working on the claim. In 1994, SSA released its redesign plan for receiving and deciding disability claims. The plan aims to improve the current process, which is labor intensive and slow, so as to increase claimant and staff satisfaction. To develop the plan, SSA created a Disability Process Reengineering Team, charged with producing a new process that is customer-focused, operationally feasible, and an improvement over the current process. A Disability Process Redesign Team (DPRT) was later formed to implement the Reengineering Team’s plan. In developing its redesign plan, Reengineering Team members solicited views from customer focus groups, frontline staff, managers and executives, and parties outside of SSA. The Reengineering Team found that claimants were frustrated with the fragmented nature of the current process and wanted more personalized service. In addition, some SSA staff were frustrated because they were not trained to answer claimants’ questions about medical disability decisions or about the status of cases while in DDS offices. To address these concerns, SSA created the DCM position as the cornerstone of its redesign plan. Under SSA’s redesign plan, the DCM—a single decisionmaker located at either an SSA or a DDS office—would be solely responsible for processing the initial disability claim and making the decision, thereby assuming functions currently performed by at least three federal and state workers. The DCM would conduct personal interviews, which could be face-to-face, by telephone, or by video conference; develop evidentiary records; and determine medical and nonmedical eligibility. Specifically, the DCM would gather and store claim information; develop both medical and nonmedical evidence; share necessary facts in a claim with medical consultants and specialists in nonmedical or technical issues; analyze evidence; prepare well-reasoned decisions on both medical and nonmedical issues; and produce clear, understandable notices to convey information to claimants. 
In addition, the DCM would authorize payment of the claim. Although DCMs would still have access to medical and technical support personnel, they alone would make the final decision on both medical and nonmedical aspects of a disability claim. A medical consultant’s signature would no longer be required on decisions. The DCM would also serve as a single, personal point of contact for claimants. When filing claims, claimants could first speak in person with a DCM to obtain information about the process. In addition, a claimant would be entitled to contact the DCM throughout the process and meet personally with the DCM to provide additional evidence if the DCM expected to deny a claim. See appendix II for a comparison of the tasks currently assigned to claims representatives and disability examiners with those expected of the DCM. Recognizing the complexity of the DCM position’s responsibilities, the redesign plan calls for implementing several new support features that SSA considers critical to the DCM position: (1) SSA plans to develop a simplified decision methodology that would provide a less complex, more structured approach for DCMs to use when deciding claims. (2) New hardware and software would automate most aspects of the process and allow SSA to move from a process that depends on paper folders to one that depends on electronic records. These records would be easy to transmit between headquarters, field offices, and state DDSs. (3) In order to address the perception that different policy standards are applied at different levels of disability decision-making, SSA intends to develop a process that generates similar decisions for similar cases at all stages of the disability process through consistent application of laws, regulations, and rulings. SSA refers to this feature as process unification. Without these new features, SSA managers do not expect that DCMs would be able to handle the broad range of activities that the position requires. However, as of July 1996, none of these support features were available. During the next few years, SSA expects to test the DCM position and several DCM-related initiatives. Some of the related initiatives, which SSA believes will immediately improve customer service, are being tested because SSA initially thought that the DCM position could not be immediately implemented. Other tests, which had been planned prior to redesign, are designed to provide information on various functions now incorporated into the DCM position. These tests are described below. Appendix III provides information on their status. SSA’s initial 1994 redesign plan called for testing and implementing alternative ways of serving claimants, based on teams of claims representatives and disability examiners. Currently, a disability claim is handled primarily by two staff members (the claims representative and the disability examiner), each working independently of the other, with minimal coordination. As part of the redesign plan, SSA expects to team its claims representatives and DDS disability examiners so they can process claims in a coordinated manner. SSA also expects that this team environment would allow claims representatives and disability examiners to share skills and enhance communication, thus better preparing them for the transition to the DCM position. Following this initial teaming of claims representatives and disability examiners, SSA plans to build on teaming by implementing the Early Decision List and sequential interviewing initiatives.
SSA envisions that the Early Decision List and sequential interviewing would provide claims representatives and disability examiners with opportunities to (1) expedite the processing of disability claims by streamlining the interview process and (2) expand the claims representatives’ skills and experience in the medical area and the disability examiners’ skills and experience in the nonmedical area. The Early Decision List identifies severe disabilities that can be adjudicated by claims representatives with minimal training and documentation. The Early Decision List will allow a claims representative to approve certain types of claims. After approving a claim, the claims representative would forward the case to a medical consultant for final approval. Currently, only the disability examiner and the medical consultant approve these claims. SSA expects that initially, about 100,000 claims per year might be approved under the Early Decision List. Eventually, the number of Early Decision List cases will expand as claims representatives’ skills and knowledge base increase. This expansion will result from (1) phasing in additional categories of disabilities and (2) the option for claims representatives to issue denials. The sequential interviewing initiative is designed to provide disability examiners with preliminary interviewing experience for certain categories of disability claims. Additional categories will be phased in over time as the examiners’ experience increases. Under sequential interviewing, after the claims representative completes the nonmedical portion of the claim, he or she will turn the claimant over to the disability examiner, who will complete the medical portion of the application. The disability examiner will talk with the claimant by telephone, either before the claimant leaves the field office or at a later date. According to SSA’s plan, the Early Decision List and sequential interviewing are modeled on existing teaming initiatives in field offices and state DDSs. For example, some offices have already experimented with sequential interviewing; in other offices, SSA claims representatives already assist DDSs by making medical determinations for some categories of severe disabilities. Preliminary results from these local initiatives indicate that they can improve customer service, work flow, and job satisfaction. For example, one field office that used sequential interviewing processed initial claims in 46 days, well below the current average of between 78 and 94 days. Customer surveys indicate that claimants served in these efforts were pleased with sequential interviewing. In addition, claims representatives and disability examiners participating in these initiatives said that they were satisfied with the team tests. Currently, SSA expects to conduct formal testing and evaluation of the Early Decision List, but it will rely on states to test sequential interviewing. SSA also expects to make available its Office of Workforce Analysis and Office of Program and Integrity Reviews to provide test assistance to states. According to the DPRT director, SSA made this decision because of (1) resource constraints and (2) the view that sequential interviewing is only a temporary measure leading to the DCM position. However, the director acknowledged that formal testing of sequential interviewing would be necessary to allow for a comparison of this initiative with the proposed DCM position.
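The processing-time figures cited above lend themselves to a quick arithmetic check. The following is a minimal sketch (Python, with the report's figures hard-coded; the comparison itself is ours, not SSA's or GAO's) of how small a share of the 78-to-94-day elapsed time the 13 hours of actual casework represents, and how large a reduction the one field office's 46-day sequential-interviewing result would be:

```python
# Figures cited in the report; the calculation is illustrative only.
ELAPSED_DAYS = (78, 94)  # current average elapsed time, filing to initial decision
WORK_HOURS = 13          # time actually spent working on a claim
PILOT_DAYS = 46          # one field office's result under sequential interviewing

for days in ELAPSED_DAYS:
    touch_share = WORK_HOURS / (days * 24)  # share of elapsed calendar time
    reduction = 1 - PILOT_DAYS / days       # improvement over the current average
    print(f"{days}-day baseline: casework is {touch_share:.1%} of elapsed time; "
          f"the 46-day pilot is a {reduction:.0%} reduction")
```

On either baseline, actual casework accounts for less than 1 percent of the elapsed calendar time, which is the gap the redesign plan is meant to close.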
In addition to the sequential interviewing and Early Decision List initiatives, SSA expects to test modifications to the disability determination process at model sites in federal offices and state DDSs. One model site test—the single medical decisionmaker—exemplifies the concept of the disability examiner making eligibility decisions alone, except in cases for which medical consultant involvement is required by statute. SSA considers this test useful because it analyzes the aspects of the redesign plan that have DCMs making eligibility decisions without necessarily soliciting medical consultants’ input for all cases. In this test, a disability examiner will be authorized to make medical eligibility decisions without obtaining a medical consultant’s signature on the SSA form certifying the determination. In other model site tests, scheduled for completion in late 1998, SSA will expand the single medical decisionmaker test to evaluate other aspects of the disability process. In the expanded test, SSA will consider the effect of allowing claimants to have a personal predecision interview with the decisionmaker in order to provide additional evidence if a denial is imminent. This is an opportunity not available under the existing system. As of June 1996, SSA was testing the single medical decisionmaker at DDSs in eight states and was developing the expanded test for implementation in seven states and two SSA offices. In its original redesign plan, SSA intended to test the DCM position only after testing was under way on the Early Decision List, sequential interviewing, and initiatives being explored at the model sites. SSA also intended that critical support features—including a structured approach for deciding claims, new hardware and software, and a process that ensures similar decisions for similar cases at all stages of the disability process—would be in place before the DCM could be implemented. However, in October 1995, SSA decided to initiate DCM testing in 1996, even though SSA had not yet (1) implemented these other initiatives or (2) developed any of the support features that had been included in the redesign plan as critical to the position. According to the DPRT director, SSA management accelerated DCM testing to address several factors that might impede the overall redesign plan. For example, the DPRT director became concerned that delaying DCM testing until critical support features were in place would slow the momentum for the redesign plan, particularly because delays were already occurring in SSA’s original schedule to implement these features. SSA also wanted to gain endorsement from its federal employee union, which originally was concerned about the DCM position. The DPRT director further cited state DDS directors’ concerns—that disability examiners would have little opportunity to gain nonmedical case development experience—as a factor influencing his decision to begin testing the DCM position. According to the DPRT director, the tests will provide states with additional time to become accustomed to the DCM concept and with the opportunity to address concerns about the position. However, state DDS directors’ representatives said that the DPRT misunderstood their concerns. DDS directors oppose SSA’s plan to accelerate implementation of the DCM position without the necessary critical support features and are concerned that SSA is beginning to give federal employees a workload that is currently the states’ responsibility.
According to the president of the American Federation of Government Employees, Local 1923, the union would have opposed the DCM position if SSA had attempted to implement it as a grade 11. Under a memorandum of understanding between the union and SSA, people who are assigned to DCM positions will receive temporary promotions to grade 12, one grade higher than the journeyman level for the claims representative position. According to the Deputy Commissioner for Human Resources, if SSA decides to make the DCM position permanent, an evaluation will be required to determine the appropriate salary level for the job. To develop parameters for conducting and evaluating the DCM test, SSA assembled a work group consisting of representatives from SSA and DDS management, claims representatives and disability examiners, and federal and state union members. Throughout redesign, SSA has relied on such work groups to formulate plans for the individual redesign components. In July 1996, the work group released its final proposal for testing the DCM position. Agreement to the work group’s proposal must be obtained from the states, the unions, and SSA management. The work group’s report recommends that SSA (1) conduct the DCM test in three phases, over a 3-year period, and (2) decide, at the end of each phase, how to proceed with the balance of the test. During the first phase, scheduled to last for 18 months, SSA would test 150 federal and 150 state DCM positions. At the end of this phase, SSA would evaluate the results to determine whether it should continue, modify, or terminate the DCM test. For the second phase, if SSA decides to continue the test, it would then introduce an additional 200 federal and 200 state DCMs. After this phase, SSA would again evaluate the results to determine whether the agency should continue, modify, or terminate the test. If SSA decides to proceed with the third phase, it would then establish an additional 400 federal and 400 state DCMs. At the end of this third and final phase, SSA would conduct a comprehensive review of the entire DCM test in order to decide whether it should implement the DCM position permanently. However, the testing proposed by the DCM work group may leave untested an important feature of the position. During the initial test of the position, the claimant may not be given an opportunity to meet personally, face-to-face, with the DCM in a predecision interview. In such an interview, the claimant could provide additional evidence if the DCM expected to deny the claim. The predecision interview is a key feature of the DCM position, one that (1) could easily be tested without waiting for the critical support features and (2) many claims representatives and disability examiners would prefer not to conduct. Further, even though DDS representatives were work group participants, they did not support SSA’s proposal to test 1,500 DCM positions. At the conclusion of the DCM work group’s activities, the National Council of Disability Determination Directors presented a position paper to the DPRT director, stating that the Council would agree only to a test involving 60 state and 60 federal DCMs. Concerns have been raised about the DCM position since the DPRT first proposed it in 1994. These concerns include the complexity of the responsibilities, compromises to safety and internal controls, the salary differential between federal and state employees, and the structure of field operations.
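Before turning to those concerns, note that the phased test proposal reduces to simple arithmetic. A minimal sketch (Python; phase sizes taken from the work group's report as described above) confirms that the three phases would field a cumulative 1,500 DCM positions, the figure the DDS directors opposed:

```python
# DCM positions added in each phase (federal, state), per the work group's proposal.
phases = [(150, 150), (200, 200), (400, 400)]

cumulative = 0
for number, (federal, state) in enumerate(phases, start=1):
    cumulative += federal + state
    print(f"Phase {number}: +{federal} federal, +{state} state "
          f"-> {cumulative} DCMs cumulative")
# Prints cumulative totals of 300, 700, and finally 1,500 positions.
```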
SSA and state DDS managers and staff, as well as employee groups and union representatives, are concerned about one person’s ability to master the complex responsibilities expected of a DCM. The DCM will combine major segments of two positions—claims representative and disability examiner—and will also include responsibilities now assigned to medical consultants. As SSA’s key staff providing public service, claims representatives carry out a wide range of complex tasks in the disability program. When processing an initial disability claim, a claims representative, through interviews, obtains and clarifies information from a disability claimant. The claims representative assists claimants with securing necessary additional evidence. Ultimately, the representative (1) determines whether claimants meet nonmedical requirements for benefits, using a series of administrative publications, including SSA’s Program Operations Manual System that interprets federal laws and regulations, (2) calculates benefit amounts, and (3) authorizes payments for allowed claims. Because of voluminous, detailed, and complicated program guidelines, some claims representatives specialize in processing claims for a specific SSA program, such as SSI. State DDS disability examiners also perform a wide range of complex tasks to determine whether a claimant’s disability meets SSA’s medical criteria for benefits eligibility. The disability examiner reviews claims forwarded by SSA field offices, obtaining additional medical records and vocational documentation on claimants as necessary. In making a medical determination, a disability examiner must establish the date of onset, duration, and level of severity of the disability; the prognosis for improvement; and the effect of the disability on a claimant’s ability to engage in gainful employment. As with guidelines for claims representatives, the complicated disability program guidelines lead some disability examiners to specialize in processing either child or adult claims. The complexity of disability examiners’ and claims representatives’ responsibilities is evidenced by the training required for the positions. Newly hired SSA claims representatives typically take 13 weeks of classroom training, followed by on-the-job training and mentoring. They reach journeyman level after a minimum of 2 years on the job. Similarly, the state DDS examiners go through a formal 2-year training program that includes classroom training and close individual supervision and guidance from unit supervisors; only then are examiners able to make medical eligibility determinations independently. According to some SSA and DDS managers and employees, the DCM position may stretch staff to the point that they cannot competently manage all the required tasks. For example, in one state that we visited, a local demonstration project has claims representatives approving disability decisions for some categories of claims—those for which the disability is easily determined. According to quality assurance staff reviewing these decisions, claims representatives are beginning to make errors on nonmedical portions of claims, possibly because these representatives are branching out into areas beyond their knowledge and experience. Although the DPRT director agreed that the responsibilities of the DCM position are complex, he stated that SSA designed it in response to claimants’ concerns that the existing process did not meet their needs. 
The new position is intended to (1) simplify the application process for claimants by allowing them personal contact with decisionmakers and (2) provide for more rapid decisions on claims. In addition, he stated that the DCM test will permit SSA to assess the feasibility of the DCM position. According to some federal and state staff and managers, the DCM position has the potential to compromise internal controls and the safety of staff, issues that are currently not a problem because responsibilities are split between state and federal staff. These staff and managers are concerned about the safety of DCMs when they conduct face-to-face interviews with claimants. They are also concerned that the DCM position could compromise existing internal controls on the disability program. SSA’s redesign plan provides an opportunity for claimants to speak face-to-face with the DCMs who make decisions on their cases. Currently, claimants rarely meet face-to-face with disability examiners, who are primarily responsible for making the disability decision. As a matter of practice, claimants have personal interviews—by telephone or face-to-face—with field office claims representatives, who are frequently not trained to answer claimants’ questions about medical disability decisions. According to claims representatives and disability examiners, because of past incidents of claimant violence and the fact that some claimants have a history of mental illness, they are worried that claimants could become violent with DCMs who notify them, face-to-face, that their claims will be denied unless they can provide additional supporting information. In addition, state staff said that some disability examiners chose their profession partly because it did not involve face-to-face interviews with claimants. Consequently, claims representatives and disability examiners may be reluctant to become DCMs because of such safety and job preference concerns. SSA’s plan to provide claimants an opportunity to meet face-to-face with decisionmakers differs from the approach used by many private companies that provide disability and workers’ compensation insurance. In these organizations, face-to-face interviews are generally used only under specific conditions, such as to investigate potential fraud or to help facilitate rehabilitation. According to officials from various private companies, direct personal contact with claimants generally is not economically viable because such meetings take a considerable amount of time. Further, these officials said, face-to-face meetings provide little additional information beyond what can be obtained by phone and mail, and they often create stress for staff who deny claimants’ benefits. In addition, under the existing system, different groups of federal and state staff—including claims representatives, disability examiners, and claims authorizers—are responsible for making eligibility decisions, medical determinations, and claim payment authorizations. This division of responsibilities helps meet standards for internal controls in the federal government. These standards require that key duties and responsibilities in authorizing, processing, recording, and reviewing transactions be separated among staff. Such standards help to reduce the risk of error, waste, or wrongful acts because each staff member carries out his or her tasks for specific transactions and is independent of the other staff members involved in processing the same transaction.
Under the SSA redesign plan, however, the DCM—a single decisionmaker—would be responsible for making medical and nonmedical eligibility decisions and for authorizing benefit payments for each disability claim. By assigning all these responsibilities to one decisionmaker, SSA is increasing the potential for staff fraud, as other staff will not be processing the different parts of the claim. According to SSA, the DPRT has not yet developed a way to address this concern. However, according to the deputy associate commissioner for the Office of Financial Policy and Operations, SSA will address these issues as the redesign plan is implemented. State DDS representatives are concerned about SSA’s agreement with labor union officials to compensate federal DCMs, during the test, at a higher salary level than claims representatives. Their concern is that the agreement will exacerbate the salary differential between state and federal staff. According to Wisconsin DDS calculations, federal claims representatives now earn about $7,863 more on average in annual salary and benefits ($49,607) than state disability examiners ($41,744). However, disability examiners and claims representatives currently have different job responsibilities, which partially explains the salary differential. If SSA promotes grade 11 claims representatives to grade 12 DCMs, the differential between federal and state DCMs will ultimately widen to about $17,714: federal DCMs will earn about $59,458 in salary and benefits, but state DCMs are not expected to receive a similar position upgrade. This differential would be more problematic than the current one because federal and state DCMs would be doing identical jobs. According to DDS directors, the salary differential between federal and state DCMs could cause serious morale problems among staff. According to the DPRT director, the salary differential between federal and state DCMs will continue to exist. However, the director said, states should use the DCM test as an opportunity to take position descriptions to their civil service boards to see if the positions can be upgraded. The director plans to work with state DDSs to facilitate this upgrade. However, according to the president of the National Council of Disability Determination Directors, many states will be unable to upgrade DDS employees because disability examiner positions are frequently classified with other unrelated positions and cannot be upgraded without affecting states’ overall pay structures. The DCM position may require SSA and the state DDSs to restructure their field operations. Currently, SSA has about 1,300 field offices at which claimants can file their initial claims. The 54 DDSs have different types of field structures: 38 are centralized, with staff located in one office; the remaining 16 are decentralized, with staff in more than one office. However, in a given state, even decentralized DDSs have fewer field offices than SSA has. Since both state and federal offices will be handling claimants’ initial claims after redesign, SSA and DDSs may need to consider changing their current field operations to avoid overlapping areas of service within the same metropolitan area. States with DDS staff concentrated in one area, however, would need to relocate some of those staff or open new offices convenient to claimants throughout their states. Finally, because medical consultants are generally located only in DDSs, SSA will need to consider how to provide federal and state DCMs with access to medical consultants.
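The Wisconsin DDS salary figures above can be checked directly. Here is a minimal sketch (Python; dollar amounts hard-coded from the paragraph above; the premise that state disability examiner pay stays flat is the report's, not ours):

```python
# Annual salary and benefits, per the Wisconsin DDS calculations cited above.
FEDERAL_CLAIMS_REP = 49_607    # current federal claims representative
STATE_EXAMINER = 41_744        # current state disability examiner
FEDERAL_DCM_GRADE_12 = 59_458  # projected federal DCM after grade 12 promotion

current_gap = FEDERAL_CLAIMS_REP - STATE_EXAMINER      # $7,863
projected_gap = FEDERAL_DCM_GRADE_12 - STATE_EXAMINER  # $17,714; state pay unchanged

print(f"Current federal-state differential:   ${current_gap:,}")
print(f"Projected federal-state differential: ${projected_gap:,}")
```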
Although the DCM work group recognized these concerns, it did not propose ways to deal with them in the upcoming accelerated DCM tests. According to the DPRT director, SSA has not yet addressed and resolved these concerns. SSA expects to recruit the approximately 11,000 DCMs that it estimates will be needed from its current staff of federal claims representatives and state disability examiners. However, some of these staff may be unwilling to assume DCM responsibilities or may lack the necessary skills. In addition, SSA has not yet developed plans for providing technical and clerical support staff for the DCM position. SSA management estimates that it will need about 11,000 DCMs to process disability claims. SSA expects to recruit DCMs from its current staff of about 16,000 claims representatives and about 6,000 disability examiners. Although some claims representatives may process either retirement and survivor or disability claims, disability examiners work only on disability claims. According to DPRT team members, federal claims representatives who lack the interest or skills necessary to become DCMs will be able to continue processing retirement and survivor claims. In contrast, it is unclear what employment options will be available for state disability examiners who do not want to become DCMs, since DCMs will make all disability decisions. Although SSA plans to recruit DCMs from the current ranks of claims representatives and disability examiners, SSA management will face various challenges in doing so. Many SSA and DDS field office managers and staff whom we interviewed were skeptical about whether enough claims representatives and disability examiners would have the necessary skills to assume the additional responsibilities expected of DCMs. Claims representatives and disability examiners will need extensive training to learn each other’s job requirements. Further, disability examiners in California, Florida, North Carolina, and Wisconsin told us that they would prefer not to have direct contact with claimants because of the pressure of face-to-face interviews. Currently, disability examiners generally make disability decisions based on a review of documents, without face-to-face contact with the claimant. Some disability examiners also indicated that they were unwilling to become DCMs because they were not interested in performing the nonmedical tasks involved in processing a claim. According to the DPRT director, concerns about staff availability and the stress associated with the DCM position are valid. However, he stated, the potential for stress is not a reason for SSA to abandon the DCM position. In his opinion, SSA cannot focus solely on its staff and ignore its customers’ demands for improved service; further, the DCM test would consider the effect of stress and ways to alleviate it. However, during the first phase of the upcoming test, as proposed by the DCM work group, SSA would not test the face-to-face predecision interview, one of the major points of potential stress for staff filling the new position. SSA recognizes that DCMs will need the assistance of technical and clerical support staff to perform their duties.
Although DCMs will be responsible for handling most aspects of disability claims, SSA’s redesign plan calls for DCMs to “work in a team environment with internal medical and nonmedical experts...as well as technical and other clerical personnel....” For example, DCMs may need clerical help to assist in performing labor-intensive tasks associated with the processing of disability claims, such as processing mail and screening telephone calls. DCMs may also need access to medical and technical support personnel. Although a medical consultant’s opinion is no longer required on all cases, DCMs may need to obtain one for certain cases. Similarly, DCMs may also need to call on technical support staff for assistance with claimant contacts, status reports, development of nondisability issues, and payment authorization. In November 1995, an initial report from the DPRT work group on the DCM position recommended that SSA create a new DCM assistant position to provide various types of support to DCMs. The work group recommended that SSA create one DCM assistant position for every two DCMs. Although SSA management did not agree to create this new position, management did agree to use existing personnel to staff DCM model test sites with appropriate technical and clerical support. However, this may be difficult for SSA because many of its field offices presently have few or no clerical staff. Even though the critical support features required for the DCM are unavailable, SSA’s decision to test the DCM position provides an opportunity to gather information about the position’s feasibility, efficiency, and effectiveness. Thorough data gathering and analysis will provide SSA with some of the key information it needs to determine whether the DCM position is the best way to serve the claimant population and protect the public trust. The DCM work group’s proposal—calling for evaluating the activity of the first group of DCMs 18 months into the test and using the evaluation results to make a decision on whether to proceed with additional testing, modify the DCM position, or cancel the position entirely—is sound. However, there are some limitations on what SSA can actually test relative to the DCM position at this time. Because the critical support features are not ready for testing, the test will not provide a complete picture of the DCM position’s feasibility, nor will it allow SSA to assess the relative costs and benefits of implementing the position. SSA will also not be able to assess the effects that improvements, such as technological enhancements and a simplified decision methodology, will bring to the overall disability claims process. The DCM work group’s consideration of delaying the predecision interview may also limit the value of the test. As SSA attempts to make a sound decision about further DCM testing or implementation of the DCM position, SSA would benefit from systematically assessing the results from all its DCM-related initiatives—the DCM tests, the model site tests, the Early Decision List, and sequential interviewing—and comparing their relative effects on SSA’s workforce, work flow, operating costs, and service to claimants. SSA may find that the results of some of these initiatives (1) increase decision-making efficiency and satisfy claimants more effectively than the DCM position or (2) suggest better ways to satisfy claimant needs and reduce processing time.
To facilitate the evaluation of all these initiatives, SSA needs to ensure that it has comparable test results for each of them. We recommend that the Commissioner of the Social Security Administration assess current efforts to test the DCM position, so as to ensure that SSA is provided with the best possible information for making future decisions about the position. Specifically, the Commissioner should (1) include in the test of the DCM position a personal predecision interview that provides an opportunity for claimants to meet with the DCM in person, by video conference, or by telephone and (2) continue testing of the sequential interviewing, Early Decision List, and model site initiatives throughout the DCM test. Testing and subsequent evaluations should document the extent to which the DCM position and the other initiatives increase service to the public and decrease processing time. At the end of the initial 18-month testing period and, if appropriate, at subsequent decision points, SSA should compare the evaluation results of the DCM and other initiatives with respect to their relative benefits and costs. SSA should consider these results before deciding to increase the number of DCM test positions and before approving the DCM position permanently. In its comments on this report, SSA generally agreed that we have identified the issues and concerns raised by the establishment of the new disability claims manager position. SSA also stated that it will make or has already made the changes we recommended to ensure the availability of the information necessary to assess the DCM position. Finally, SSA stated that it plans to use results from other DCM-related initiatives to document the extent to which service to the public is improved and processing time is reduced. We believe SSA’s planned actions would be more effective if SSA included a predecision interview in its DCM test. We also believe that SSA should ensure that states’ evaluation of sequential interviewing initiatives can be compared with the results of the DCM and other related initiatives. SSA made a number of technical comments, which we incorporated as appropriate. The full text of SSA’s comments and our responses are included in appendix IV. We are providing copies of this report to the Director of the Office of Management and Budget and the SSA Commissioner. We will also make copies available to others upon request. Major contributors to this report are listed in appendix V. If you have any questions concerning this report or need additional information, please call me at (202) 512-7215. To determine how SSA planned to test and implement the DCM position, we interviewed and reviewed documents from key members of the Redesign Team at SSA’s headquarters in Baltimore, Maryland. We also conducted site visits in California, Florida, Georgia, North Carolina, and Wisconsin, where we (1) interviewed staff and managers of SSA field offices and state DDSs and (2) analyzed documents they provided. We judgmentally selected these locations because local SSA field offices and DDSs in these states have already experimented with a teaming initiative designed to facilitate closer interaction between SSA claims representatives and DDS disability examiners. Although these initiatives were not part of SSA’s redesign plan, we believe the results provide some insight into how SSA could implement the DCM position.
To identify the concerns associated with the DCM position, we spoke with the following during our site visits: DPRT members, SSA regional and field office managers and staff, employee union representatives, and DDS managers and staff. We also reviewed documents they provided us, which summarized their views on the DCM position. To determine whether SSA had ensured that it had an adequate staff to implement the DCM position, we interviewed and analyzed information from DPRT members, SSA field office managers and staff, and state DDS officials and staff. To identify how organizations with employee classifications similar to the DCM process claims, we also interviewed representatives from four private insurers, two affiliated trade associations, and a public utility. The following are GAO’s comments on the Social Security Administration’s letter dated August 16, 1996. 1. We modified our recommendation to reflect the different ways that a DCM could conduct a predecision interview with a claimant: face-to-face, by video conferencing, or by telephone contact. 2. We continue to believe that SSA should incorporate the predecision interview into the DCM test, beginning with the initial 18-month phase, to make the test as comprehensive as possible. Incorporating the predecision interview into the DCM test would provide SSA with valuable information for making future decisions about the feasibility of the DCM position and whether testing should continue beyond the first phase. In particular, testing the predecision interview could provide information about the effect of face-to-face interviews on office security, a main area of concern raised about the DCM position. SSA should not wait for the predecision interview to be tested as part of the expanded model site test. Results from this test are not expected until late in 1998 and may not be available in time for SSA to consider when it makes its decision about further testing or implementation of the DCM position. 3. We support SSA’s decision to provide an opportunity for the claimant to readily and easily contact DCMs participating in the test. Since SSA had already decided that claimants would have this access to the DCM, we modified one of the recommendations in the report. 4. We continue to be concerned that SSA may not have all the test results it needs to decide whether the DCM position should be fully adopted. SSA needs to ensure that states’ evaluation of sequential interviewing initiatives can be compared with the results from the initiatives that SSA is conducting and analyzing itself. We believe SSA’s test of the DCM position, combined with results of other related tests, should provide the basis for its decision on whether or not to implement the position. In addition to those named above, David G. Artadi coauthored the report and contributed significantly to all data-gathering and analysis efforts.
| Pursuant to a congressional request, GAO assessed the Social Security Administration's (SSA) establishment of the disability claim manager (DCM) position, focusing on: (1) SSA efforts to test and implement the position; (2) major concerns about the position; and (3) SSA efforts to staff the position. GAO found that: (1) as envisioned by SSA, DCMs would be solely responsible for processing and approving initial disability claims, assume functions currently performed by at least three federal and state workers, and serve as a single, personal point of contact for claimants; (2) SSA has several initiatives under way to team claims representatives and disability examiners so that they can coordinate claims processing functions and prepare for transition to the DCM position; (3) although it has not yet implemented other initiatives and support features that are critical to the DCM position, SSA has decided to proceed with plans to test the DCM position; (4) a three-phase testing plan proposed by an SSA work group of management representatives, claims representatives, disability examiners, and federal and state union members may leave some important DCM features untested and does not have the support of all work group members; (5) concerns raised about the DCM position include the complexity of DCM responsibilities, compromises to safety and controls, the salary differential between federal and state workers, and the impact on field operations; and (6) SSA expects to recruit DCMs from its current staff of federal claims representatives and state disability examiners, but some staff may be unwilling or lack the necessary skills, and SSA has not developed a plan for providing technical and clerical support for DCMs.
You are an expert at summarizing long articles. Proceed to summarize the following text:
The National Wildlife Refuge System comprises the only federal lands managed primarily for the benefit of wildlife. The refuge system consists primarily of National Wildlife Refuges (NWR) and Waterfowl Production Areas and Coordination Areas. The first national wildlife refuge, Florida’s Pelican Island, was established by President Roosevelt in 1903 to protect the dwindling population of wading birds in Florida. As of July 1994, the system included 499 refuges in all 50 states and several U.S. territories and accounted for over 91 million acres. (See fig. 1.) The Fish and Wildlife Service’s (FWS) Division of Refuges provides overall direction for the management and operation of the National Wildlife Refuge System. Day-to-day refuge activities are the responsibility of the managers of the individual refuges. Because the refuges have been created under many different authorities, such as the Endangered Species Act (ESA) and the Migratory Bird Conservation Act, and by administrative orders, not all refuges have the same specific purpose or can be managed in the same way. The ESA was enacted in 1973 to protect plants and animals whose survival is in jeopardy. The ESA’s goal is to restore listed species so that they can live in self-sustaining populations without the act’s protection. As of April 1994, according to FWS, 888 domestic species have been listed as endangered (in danger of extinction) or threatened (likely to become endangered in the foreseeable future). The ESA directs FWS to emphasize the protection of listed species in its acquisition of refuge lands and in its operation of all refuges. Under the ESA, the protection, recovery, and enhancement of listed species are to receive priority consideration in the management of the refuges. FWS’ Division of Endangered Species provides overall guidance in the implementation of the ESA. FWS’ regions are generally responsible for implementing the act. Among other things, the act requires FWS to develop and implement recovery plans for all listed species, unless such a plan would not benefit the species. Recovery plans identify the problems threatening the species and the actions necessary to reverse the decline of a species and ensure its long-term survival. Recovery plans serve as blueprints for private, federal, and state interagency cooperation in taking recovery actions. Of all the listed species, 215, or 24 percent, occur on wildlife refuges. (See app. I for the listed species that occur on refuges.) Figure 2 shows the types of listed species found on refuges; as the figure shows, more than two-thirds of the species are plants, birds, and mammals. According to the figure’s notes, the 215 species include 27 fishes, 40 mammals, and 19 reptiles; “other” species include amphibians (2), clams (6), crustaceans (1), insects (7), and snails (1); percentages in the figure have been rounded (these counts are cross-checked in the sketch below). Some refuges represent a significant portion of a listed species’ habitat. According to FWS regional refuge officials, 66 refuges—encompassing a total of 26.7 million acres, including 22.6 million acres on two Alaska refuges—provide a significant portion of the habitat for 94 listed species. For example, Ash Meadows NWR in Nevada has 12 listed plants and animals that exist only at the refuge—the largest number of listed native species at one location in the United States.
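As a cross-check of the species counts just given, the following minimal sketch (Python; counts hard-coded from the report and the notes to figure 2) reproduces the 24-percent share and the more-than-two-thirds claim:

```python
# Counts from the report and the notes to figure 2.
TOTAL_LISTED = 888  # domestic species listed under the ESA, April 1994
ON_REFUGES = 215    # listed species occurring on refuges

counts = {"fishes": 27, "mammals": 40, "reptiles": 19,
          "other": 2 + 6 + 1 + 7 + 1}  # amphibians, clams, crustaceans, insects, snails
plants_and_birds = ON_REFUGES - sum(counts.values())  # the remaining species

share_on_refuges = ON_REFUGES / TOTAL_LISTED
share_top_types = (plants_and_birds + counts["mammals"]) / ON_REFUGES

print(f"Listed species on refuges: {share_on_refuges:.0%}")         # about 24%
print(f"Plants, birds, and mammals: {share_top_types:.0%} of 215")  # about 71%
```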
In addition, Antioch Dunes NWR in California contains virtually the entire remaining populations of three listed species—the Lange’s metalmark butterfly, the Antioch Dunes evening-primrose, and the Contra Costa wallflower. (App. II lists the refuges that provide a significant portion of a listed species’ habitat and the specific species that occur at these refuges.) Some listed species use the refuges on a temporary basis for migratory, breeding, and wintering habitat. As previously shown in figure 1, the refuges are often located along the primary north-south routes used by migratory birds. Migratory birds use the refuges as temporary rest-stops along their migration routes. The listed wood stork, for example, migrates in the spring from southern Florida to Harris Neck NWR in Georgia to nest in the refuge’s freshwater impoundments. In addition, several refuges provide breeding habitat for listed species. The Blackbeard Island and Wassaw refuges in Georgia and the Merritt Island NWR in Florida, for example, provide beach habitat for the listed loggerhead sea turtle to lay its eggs. Wildlife refuges and refuge staff contribute to the recovery of listed species in a variety of ways. Foremost, refuges provide secure habitat, which is often identified as a key component in the recovery of listed species. In addition, refuge staff carry out, as part of their refuge management activities, specific actions to facilitate the recovery of listed species. Refuge staff also participate in the development and review of recovery plans for listed species. One of the primary efforts for the recovery of listed species is to stabilize or reverse the deterioration of their habitat. Refuges contribute to the recovery of listed species by providing secure habitat. Our review of 120 recovery plans for listed species occurring on refuges disclosed that 80 percent of the plans identified securing habitat as an action needed to achieve species recovery. As of March 1994, the refuge system included about 91 million acres of wildlife habitat. FWS has acquired over 310,000 acres to create 55 new refuges specifically for the protection of listed species. FWS’ policy requires that a species recovery plan be prepared before lands are acquired for listed species. For example, the recovery plan for four Hawaiian waterbirds called for FWS to secure and manage a number of ponds and marshes that two or more of the waterbirds are known to use. One specific area described in the recovery plan, Kealia Pond, was subsequently acquired by FWS in 1992. However, overall we could not readily determine whether the acquisitions of lands for the 55 new refuges had been identified as needed acquisitions in species recovery plans. (App. III lists the refuges specifically established for listed species.) According to FWS’ data, listed species found on refuges, and specifically on refuges established to protect listed species, appear to have a more favorable recovery status than listed species that do not occur on refuges. Table 1 provides an overview of FWS’ data on the recovery status of listed species. This information was compiled on the basis of the knowledge and judgments of FWS staff and others familiar with the species. As the table shows, a greater proportion of the listed species that occur on refuges have a recovery status determined by FWS to be improving or stable than the listed species not found on refuges. 
According to FWS’ guidance, species whose recovery is improving are those species known to be increasing in number and/or for which threats to their continued existence are lessening in the wild. Species whose recovery is stable are those known to have stable numbers over the recent past and for which threats have remained relatively constant or diminished in the wild. Declining species are those species known to be decreasing in number and/or for which threats to their continued existence are increasing in the wild. Refuge staff carry out a variety of activities that contribute to the recovery of listed species. According to FWS’ Refuges 2003: Draft Environmental Impact Statement, a total of 356 refuges had habitat management programs under way that directly benefited listed species. Refuge staff at the 15 refuges we visited were carrying out a number of specific actions in support of the protection and recovery of listed species. Such actions generally involved efforts to monitor the status of listed species’ populations at the refuges and carry out projects designed to restore and manage the habitats and the breeding areas of listed species. Examples of specific actions being taken included the following: Carrying out prescribed burning of vegetation at the Okefenokee NWR (Georgia). Among other things, such burning helps restore and facilitate the growth of longleaf pine trees—the primary habitat for the listed red-cockaded woodpecker. Enclosing nesting areas at the Salinas River NWR (California). The enclosures protect the listed western snowy plover’s nests and chicks from predation by red foxes. Undertaking protective actions at the Hakalau Forest NWR (Hawaii). Specifically, to protect and assist in the recovery of five listed forest birds, the refuge manager has restricted public use, fenced off the forest to keep out wild pigs and cattle, and created new nesting habitat for the listed birds by protecting indigenous plants and eliminating nonnative/exotic plants. Developing artificial nesting structures for wood storks at the Harris Neck NWR (Georgia). According to the refuge biologist, each structure at the refuge was occupied by up to three nests for these birds in both 1993 and 1994. Providing economic incentives to protect habitat and provide a food source for the listed bald eagle at Blackwater NWR (Maryland). Specifically, refuge management pays muskrat trappers to kill a rodent (the nutria) that is destroying the refuge wetlands. The carcasses are then left for bald eagles to eat. Managing vegetation growth to provide feeding pastures for the listed Columbian white-tailed deer at the Julia Butler Hansen Refuge for Columbian White-tailed Deer (Oregon and Washington). The vegetation in the deer’s feeding pastures is kept short by allowing cattle to graze on portions of refuge lands under cooperative agreements with local farmers. Refuge staff also participate on teams tasked with developing recovery plans for listed species. While the responsibility for developing and implementing the plans rests with FWS’ regional offices, recovery teams often include species experts from federal and state agencies (including the refuges), conservation organizations, and universities. For example, a biologist at the San Francisco Bay NWR is helping develop a revised recovery plan for the salt marsh harvest mouse, the California clapper rail (a species of bird), and other coastal California wetlands species.
On the basis of their knowledge of the listed species, refuge staff are also asked to comment on draft recovery plans developed by others. For example, refuge staff at the Moapa Valley NWR in Nevada were asked to review the draft recovery plan for the Moapa dace (a species of fish) developed by a recovery team made up of representatives from a variety of organizations, including the Department of the Interior’s Bureau of Reclamation; the University of Nevada, Las Vegas; and the Nevada Division of Wildlife. Refuge staff at the locations we visited told us they use the recovery plans to guide their activities to protect listed species. They also told us that recovery plans are good reference tools and help outline the management actions necessary for species recovery. They noted, however, that recovery plans have their limitations—plans can become outdated quickly, and refuges often lack the funding necessary to undertake all of the prescribed recovery tasks. While refuge staff have taken some actions to protect and aid the recovery of listed species on their refuges, we found that recovery efforts were at times not undertaken. According to refuge managers and staff, their ability to contribute to species recovery efforts is constrained by the level of available funding. Two 1993 Interior reports discussed overall concerns about refuge funding and concluded that refuge funding was inadequate to meet the missions of refuges. In its Refuges 2003: Draft Environmental Impact Statement, FWS reported that the refuge system’s current annual funding is less than half the amount needed to fully meet established objectives. From October 1, 1988, through fiscal year 1993, appropriations for the Division of Refuges increased from $117.4 million to $157.5 million per year. If the current level of annual funding continues, according to FWS, funding will be inadequate to address the existing backlog of major refuge maintenance projects or the programs and construction projects necessary for any expanded wildlife or public use activities. In addition, FWS stated that recent increases in refuge funding have not been sufficient to address the rising costs of basic needs, such as utilities, fuel, travel, and training. In August 1993, Interior’s Inspector General reported that “refuges were not adequately maintained because Service funding requests for refuge maintenance have not been adequate to meet even the minimal needs of sustaining the refuges.” According to the Inspector General, the maintenance backlog totaled $323 million as of 1992. The Inspector General also reported that “new refuges have been acquired with increased Service responsibilities, but additional sufficient funding was not obtained to manage the new refuges.” Between 1988 and 1992, according to the Inspector General, $17.2 million was necessary to begin operations at the 43 new refuges acquired during this period. However, only $4.7 million was appropriated for all new and expanded refuges. This appropriation level resulted in a $12.5 million deficit, according to the Inspector General, some of which contributed directly to the maintenance backlog. In response to the Inspector General’s findings, FWS has agreed to develop a plan to reduce refuges’ maintenance backlogs and to report on efforts to ensure consideration of operations and maintenance costs in all future acquisitions. According to refuge managers, budget resources are insufficient to undertake all of the efforts necessary to recover listed species.
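The funding figures just cited reduce to simple arithmetic. Here is a minimal sketch (Python; amounts hard-coded from the Inspector General's report and the appropriations cited above, treating October 1, 1988, as the start of fiscal year 1989) that reproduces the $12.5 million shortfall and the growth in the refuge appropriation:

```python
# Figures from Interior's Inspector General and FWS appropriations, in $ millions.
NEEDED_NEW_REFUGES = 17.2  # needed to begin operations at 43 new refuges, 1988-92
APPROPRIATED = 4.7         # actually appropriated for all new and expanded refuges
FY1989, FY1993 = 117.4, 157.5  # Division of Refuges annual appropriations

shortfall = NEEDED_NEW_REFUGES - APPROPRIATED
growth = (FY1993 - FY1989) / FY1989

print(f"New-refuge funding shortfall: ${shortfall:.1f} million")  # $12.5 million
print(f"Appropriation growth, FY 1989-93: {growth:.1%}")          # about 34%
```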
In general, refuge operations and maintenance budgets are earmarked for items such as salaries, utilities, and specific maintenance projects. As a consequence, many efforts to recover listed species are not being carried out. At 14 of the 15 locations we visited, refuge managers and staff said funding constraints limited their ability to fully implement recovery actions for listed species and other protection efforts. For example, refuge staff at the Savannah Coastal Refuge Complex in Georgia explained that they have enough resources to conduct only one survey of the bald eagle population per year, rather than the three they feel are necessary to adequately monitor the eagle’s status. A biologist at the San Francisco Bay Refuge Complex reported that no money is available to conduct genetic studies on the listed salt marsh harvest mouse, even though such studies are called for in the species recovery plan. In commenting on a draft of this report, the Assistant Secretary for Fish and Wildlife and Parks, Department of the Interior, generally concurred with the findings (app. IV contains Interior’s comments). In particular, the Assistant Secretary stated that funding limitations constrain the National Wildlife Refuge System’s ability to fully protect and recover endangered species; however, in light of other budgetary priorities, refuges have been funded at the highest affordable level. The Assistant Secretary also provided a number of comments that were technical in nature. In response, we revised the report, where appropriate, to refer to all components of the National Wildlife Refuge System rather than just the refuges and made other editorial changes. We conducted our work between May 1993 and July 1994 in accordance with generally accepted government auditing standards. To obtain information on FWS’ policies and procedures for refuges and implementation of the ESA, we reviewed relevant FWS documents, including the May 1990 Policy and Guidelines for Planning and Coordinating Recovery of Endangered and Threatened Species; the Refuge Manual; Refuges 2003: Draft Environmental Impact Statement; the 1990 and draft 1992 Report to Congress: Endangered and Threatened Species Recovery Program; and 120 species recovery plans. We also interviewed officials at the Division of Refuges and Division of Endangered Species at FWS headquarters and at the FWS Portland regional office. In addition, we visited and met with officials from 15 refuges—including refuges created specifically for listed species and those that were created for other purposes—to determine how each refuge contributed to recovery efforts for listed species. The 15 refuges included, in California, Antioch Dunes, San Francisco Bay, and San Pablo Bay; in Georgia, Harris Neck and Okefenokee; in Hawaii, Hanalei, Huleia, James C. Campbell, Kilauea Point, and Pearl Harbor; in Maryland, Blackwater; in Maryland and Virginia, Chincoteague; in Nevada, Ash Meadows and Moapa Valley; and in Oregon and Washington, Julia B. Hansen Columbian White-tailed Deer. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to the Secretary of the Interior; the Assistant Secretary, Fish and Wildlife and Parks; and the Director of the Fish and Wildlife Service. We will also make copies available to others on request. Please call me at (202) 512-7756 if you or your staff have any questions.
Major contributors to this report are listed in appendix V. As of April 1994, the number of listed animal and plant species occurring on wildlife refuges totaled 215. As of June 30, 1994, recovery plans had been approved for 157 of these species (as indicated by an asterisk). Species listed in appendix I include Cambarus aculabrum (a crayfish with no common name) and fishes such as the *Ozark cavefish, *bonytail chub, *humpback chub, Oregon chub, Yaqui chub, *Ash Meadows speckled dace, *Moapa dace, *watercress darter, *Pecos gambusia, pygmy madtom, Rio Grande silvery minnow, *Pahrump poolfish (killifish), *Ash Meadows amargosa pupfish, *Devils Hole pupfish, *Warm Springs pupfish, *Pecos bluntnose shiner, *Colorado squawfish, *Lost River sucker, razorback sucker, *short-nose sucker, and *Gila (including Yaqui) topminnow. The remaining appendix entries pair individual refuges with the listed species for which they provide habitat: for example, Antioch Dunes NWR with the Lange’s metalmark butterfly, Contra Costa wallflower, and Antioch Dunes evening-primrose; Mississippi Sandhill Crane NWR with the Mississippi sandhill crane; and the Julia Butler Hansen Refuge for Columbian White-tailed Deer (Oregon and Washington) with the Columbian white-tailed deer. Major contributor to this report: Kim Gianopoulos. | Pursuant to a congressional request, GAO provided information on the Fish and Wildlife Service’s (FWS) National Wildlife Refuge System, focusing on the extent to which wildlife refuges contribute to the protection and recovery of endangered species.
GAO found that: (1) of about 900 endangered species, 215 occur or have habitat on national wildlife refuges; (2) the endangered species found on wildlife refuges represent a diversity of wildlife; (3) although many listed endangered species inhabit wildlife refuges, many other endangered species use refuge lands temporarily for breeding or migratory rest stops; (4) FWS refuges contribute to the protection and recovery of endangered species by providing safe and secure habitats, implementing recovery projects that are tailored to each endangered species, and identifying specific actions that can contribute to species recovery; (5) FWS efforts to manage wildlife refuges have been inhibited because funding levels have not kept pace with the increasing costs of managing new or existing refuges; and (6) at 14 of the 15 locations reviewed, refuge managers and staff believed that funding constraints limited their ability to enhance habitat and facilitate the recovery of endangered species. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Biomonitoring—one technique for assessing people’s exposure to chemicals—involves measuring the concentration of chemicals or their by-products in human specimens, such as blood or urine. Biomonitoring has been used to monitor certain workers’ lead exposure for many decades. More recently, advances in analytic methods have allowed scientists to measure more chemicals, in smaller concentrations, using smaller samples of blood or urine. As a result, biomonitoring has become more widely used for a variety of applications, including public health research and measuring the impact of certain environmental regulations, such as the decline in blood lead levels that followed declining levels of lead in gasoline. The CDC began collecting health statistics on the U.S. population through its National Health and Nutrition Examination Survey (NHANES) in 1971. This effort evolved over time; the CDC began collecting biomonitoring data in 1976, but only for a handful of chemicals, such as lead and certain pesticides. In 1999, the CDC substantially increased the number of chemicals in the biomonitoring component of the program to 116 and began analyzing and reporting these biomonitoring data in its National Report on Human Exposure to Environmental Chemicals, of which three versions have been issued. These reports have provided a window into the U.S. population’s exposure to chemicals, and the CDC continues to develop new methods for collecting data on additional chemical exposures with each report. The NHANES design does not select or exclude participants on the basis of their potential for low or high exposure to a chemical. The current design of the biomonitoring program does not permit examination of exposure levels by locality, state, or region; seasons of the year; proximity to sources of exposure; or use of particular products. For example, it is not possible to extract a subset of the data and examine levels of blood lead that represent levels in a particular state’s population. Some specific uses of data from the CDC’s biomonitoring program are to determine which chemicals are present in individuals in the U.S. population, and at what concentrations; determine, for chemicals with a known toxicity level, the prevalence of people with levels above those toxicity levels; establish reference ranges that can be used by physicians and scientists to determine whether a person or group has an unusually high exposure; assess the effectiveness of public health efforts to reduce exposure of individuals to specific chemicals; determine whether exposure levels are higher among minorities, children, women of childbearing age, or other potentially vulnerable groups; track, over time, trends in levels of exposure of the population; and set priorities for research on human health effects. Some states have established biomonitoring programs to identify and address local health concerns. For example, Alaska is collecting women’s hair samples to test them for mercury and is supplementing those data with information on the women’s fish consumption and data on local fish mercury levels collected by the U.S. Fish and Wildlife Service. As another example, California is planning how to implement a statewide biomonitoring program and is currently selecting which chemicals to include in the program.
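One of the uses listed above—establishing reference ranges—amounts to computing summary statistics over survey measurements. The sketch below shows the idea in Python; the chemical, the measurement values, and the 95th-percentile cutoff are illustrative assumptions, not CDC data or the CDC's actual methodology.

import statistics

# Urinary concentrations (ug/L) of a hypothetical chemical, one value
# per survey participant (invented values for illustration).
measurements = [0.4, 0.7, 1.1, 0.3, 2.5, 0.9, 1.8, 0.6, 3.2, 1.0,
                0.5, 1.4, 0.8, 2.1, 0.2, 1.6, 0.9, 1.2, 4.0, 0.7]

def reference_range(values):
    """Return the geometric mean and the 95th percentile, two statistics
    commonly reported for a reference population."""
    geo_mean = statistics.geometric_mean(values)
    p95 = statistics.quantiles(values, n=100)[94]  # 95th percentile
    return geo_mean, p95

gm, p95 = reference_range(measurements)
print(f"geometric mean: {gm:.2f} ug/L; 95th percentile: {p95:.2f} ug/L")

# A physician could then flag a patient whose measured level exceeds the
# 95th percentile as unusually high relative to the reference population.
patient_level = 5.3
print("unusually high" if patient_level > p95 else "within reference range")

Survey reports of this kind typically publish the geometric mean alongside selected percentiles because biomonitoring concentrations tend to be right-skewed.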
As more data have become available regarding the general population’s exposure to a variety of commercial chemicals, public concern has grown over the health risks posed by exposures to chemicals, such as flame retardants used in furniture or common pesticides used in and around the home. However, the utility and interpretation of biomonitoring data remain controversial, and the challenge for environmental and health officials is to understand the health implications and to craft the appropriate policy responses. For decades, government regulators have used a process called “risk assessment” to understand the health implications of commercial chemicals. Researchers use this process to estimate how much harm, if any, can be expected from exposure to a given contaminant or mixture of contaminants, and to help regulators determine whether the risk is significant enough to require banning or regulating the chemical or other corrective action. The National Academy of Sciences—a private, nonprofit institution that provides science, technology, and health policy advice under a congressional charter—described the four stages of health risk assessment in 1983. The first stage is hazard identification, the determination of whether a particular chemical is or is not causally linked to particular health effects. The second stage is dose-response assessment, which involves determining the relationship between the magnitude of exposure to a contaminant and the probability and severity of adverse effects. These two stages generally involve studies that expose animals to high doses of a chemical and observe the adverse effects. The third stage is exposure assessment—that is, identifying the extent to which exposure is likely to occur. For this stage, risk assessors generally use data on chemical concentrations in the air, water, food, or other environmental media, combined with assumptions about how and at what rate the body is exposed to or absorbs the chemicals. Risk assessors also use assumptions about human behavior based on observational studies—such as the time spent outdoors or, for children, the amount of time spent on the floor—to better estimate an individual’s true exposure. The fourth stage of the health risk assessment process is risk characterization—that is, combining the information from the first three stages into a conclusion about the nature and magnitude of the risk, including attendant uncertainty. These assessments typically result in the creation of chemical-specific “reference values” that are based on an intake level or a concentration in an environmental medium. An example of such a reference value is a “reference dose,” which is an estimate (with uncertainty spanning perhaps an order of magnitude) of a daily oral exposure to the human population (including sensitive subgroups) that is likely to be without an appreciable risk of deleterious effects during a lifetime. A reference dose can be derived from a no observed adverse effect level (NOAEL), lowest observed adverse effect level, or benchmark dose, with uncertainty factors generally applied to reflect limitations of the data used. Uncertainty factors are used to account for interspecies extrapolation and intraspecies variation and, in some cases, for the duration of the study or the lack of a NOAEL. In addition, some legislation is based on the default assumption that children may be more sensitive to chemicals than adults.
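As a concrete illustration of the arithmetic behind a reference dose, the minimal sketch below divides a hypothetical NOAEL by a composite uncertainty factor; the NOAEL and factor values are invented for illustration and are not drawn from any actual EPA assessment.

noael_mg_per_kg_day = 10.0  # hypothetical NOAEL from an animal study

uncertainty_factors = {
    "interspecies (animal-to-human) extrapolation": 10,
    "intraspecies (human) variability": 10,
    "subchronic-to-chronic study duration": 3,
}

# The individual factors multiply into a single composite divisor.
composite_uf = 1
for value in uncertainty_factors.values():
    composite_uf *= value

reference_dose = noael_mg_per_kg_day / composite_uf  # 10 / 300
print(f"composite uncertainty factor: {composite_uf}")
print(f"reference dose: {reference_dose:.4f} mg/kg-day")

Where children may be more sensitive than adults, an additional safety factor can be multiplied into the composite.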
For example, the Food Quality Protection Act requires a 10-fold safety factor to protect children. Biomonitoring research is difficult to integrate into this risk assessment process, since estimates of human exposure to chemicals have historically been based on the concentration of these chemicals in environmental media and on information about how people are exposed. Biomonitoring data, however, provide a measure of internal dose that results from exposure across all environmental media and depends on how the human body processes and excretes the chemical. To integrate biomonitoring into traditional risk assessment, researchers must determine how to correlate this internal exposure with their prior understanding of how external exposure affects human health. Although the CDC has been the primary agency collecting biomonitoring data, EPA has specific authority to assess and manage chemical risks, often in coordination with other federal agencies. Several EPA offices are involved in collecting chemical data and assessing chemical risks. The Office of Pollution Prevention and Toxics (OPPT) manages programs under TSCA. The act provides EPA with the authority to collect information about chemical substances or, upon making certain determinations, to require companies to develop information and take action to control unreasonable risks by either preventing or limiting the introduction of dangerous chemicals into commerce or by placing restrictions on those already in the marketplace. TSCA also creates an Interagency Testing Committee to recommend to EPA chemicals for priority testing consideration. Furthermore, the EPA Administrator is specifically directed to coordinate with the Department of Health and Human Services and other federal agencies to conduct research, development, and monitoring as necessary to carry out the purposes of TSCA, and to establish and coordinate a system for exchange among federal, state, and local authorities of research and development results respecting toxic chemicals. The Office of Pesticide Programs (OPP) manages programs under the Federal Insecticide, Fungicide, and Rodenticide Act and the Federal Food, Drug, and Cosmetic Act, which require EPA to review pesticide risks to the environment before allowing a pesticide to be sold or distributed in the United States and to set maximum pesticide residue levels allowed in or on food. Risk assessment activities at EPA are carried out by the agency’s Office of Research and Development (ORD)—its principal scientific and research arm—and its program and regional offices, including the Office of Air and Radiation, OPP, OPPT, and the Office of Water. ORD’s role is to provide program and regional offices with scientific advice and information for use in developing and implementing environmental policies, regulations, and practices. In fulfilling this role, ORD issues guidance documents for risk assessors, such as its Exposure Factors Handbook, and conducts and funds research aimed at addressing data gaps and reducing scientific uncertainty. This research is divided into two categories: core research and problem-driven research. Core research seeks to produce a fundamental understanding of the key biological, chemical, and physical processes that underlie environmental systems, thus forging basic scientific capabilities that can be applied to a wide range of environmental problems.
Core research addresses questions common to many EPA programs and provides the methods and models needed to confront unforeseen environmental problems. Problem-driven research, however, focuses on regulatory, program office, or regional needs and may focus on specific pollutants or the development of models or methods to address specific questions. EPA makes limited use of current biomonitoring studies because such studies cover relatively few chemicals, and EPA rarely knows whether the measured amounts in people indicate a risk to human health. Nonetheless, EPA has taken action in a few cases, when biomonitoring studies showed that people were widely exposed to a chemical that appeared to pose health risks. The CDC’s biomonitoring program provides the most comprehensive biomonitoring data relevant to the U.S. population. The results of the program are summarized in three versions of the National Report on Human Exposure to Environmental Chemicals. The latest report, issued in 2005, covered 148 chemicals, and the forthcoming version in 2009 will provide data on about 250 chemicals. However, there are over 83,000 chemicals on the TSCA Chemical Substance Inventory. Of those chemicals, EPA focuses on screening and prioritizing the more than 6,200 chemicals that companies produce in quantities of more than 25,000 pounds per year at one site. About 3,000 of these 6,200 chemicals are produced at more than 1 million pounds per year in total. Current biomonitoring efforts also provide little information on children. Large-scale biomonitoring studies generally omit children because it is difficult to collect biomonitoring data from them. For example, some parents are concerned about the invasiveness of taking blood samples from their children, and certain other fluids, such as umbilical cord blood or breast milk, are available only in small quantities and only at certain times. Even when samples are available from children, they may not be large enough to analyze, for the reasons just mentioned. In other cases, the sampling effort reserves the samples for other purposes. For example, the CDC collects samples through its health and nutrition survey but uses these samples to study biological indicators related to nutrition, such as the amount of water-soluble or fat-soluble vitamins, iron, or trace elements. Thus, the only biomonitoring analyses that the CDC has performed on samples from children under 6 are for cadmium, lead, mercury, cotinine—a by-product of tobacco smoke—and certain perfluorinated chemicals. Even if biomonitoring information is available for a chemical, it is often of limited use. EPA indicated that it often lacks the additional information needed to make biomonitoring results useful for risk assessment. Biomonitoring provides information only on the level of a chemical in a person’s body. The detectable presence of a chemical in a person’s blood or urine may not mean that the chemical causes disease. While exposure to larger amounts of a chemical may cause an adverse health impact, a smaller amount may be of no health consequence. In addition, biomonitoring data alone do not indicate the source, route, or timing of the exposure, making it difficult to identify the appropriate risk management strategies. As a result, EPA has made few changes to its chemical risk assessments or safeguards in response to the recent proliferation of biomonitoring data.
For most chemicals, additional data on health effects; on the sources, routes, and timing of exposure; and on the fate of a chemical in the human body would be needed to incorporate biomonitoring into risk assessment. However, as we have discussed in prior reports, EPA will face difficulty in using its authorities under TSCA to require chemical companies to develop health and safety information on the chemicals they produce. We have designated the assessment and control of toxic chemicals as a “high-risk” area of government that requires broad-based transformation. EPA has used some biomonitoring data in chemical risk assessment and management, but only when additional studies have provided insight on the health implications of the biomonitoring data. For example, EPA used both biomonitoring and traditional risk assessment information to take action on certain perfluorinated chemicals. These chemicals are used in the manufacture of consumer and industrial products, including nonstick cookware coatings; waterproof clothing; and oil-, stain-, and grease-resistant surface treatments. In 1999, EPA began an investigation after receiving biomonitoring data from a chemical company indicating that perfluorooctanesulfonic acid (PFOS) was found in the general population. Further testing showed that PFOS also was persistent in the environment, was unexpectedly toxic, tended to accumulate in the human body, and was present in low concentrations in the blood of the general population and wildlife worldwide. The principal PFOS manufacturer voluntarily phased out its production in 2002, and EPA then required manufacturers and importers to notify EPA 90 days before manufacturing or importing PFOS and PFOS-related chemicals for certain new uses. In addition, in September 2002, EPA initiated a review of perfluorooctanoic acid (PFOA)—another perfluorinated chemical. The agency cited biomonitoring data indicating widespread human exposure in the United States, and animal toxicity studies that linked PFOA exposure to developmental effects on the liver and the immune system. EPA has sought to work with multiple parties to produce missing information on PFOA through the negotiation of enforceable consent agreements, memorandums of understanding, and voluntary commitments. In 2006, EPA also launched the 2010/15 PFOA Stewardship Program, in which eight companies voluntarily committed to reduce facility emissions and product content of PFOA and related chemicals by 95 percent no later than 2010, and to work toward eliminating emissions and product content by 2015. EPA also used biomonitoring data in a few other cases. In the 1980s, EPA was considering whether to make permanent a temporary ban on lead in gasoline. National data on lead exposure showed a decline in average blood lead levels that corresponded to the declining amounts of lead in gasoline. On the basis of these data and other information, EPA strengthened its restrictions on lead. In the 1990s, EPA used biomonitoring studies to develop a reference dose for methylmercury, a neurotoxin. Mercury occurs naturally and is released by industrial pollution. In water, it can turn into methylmercury and then accumulate in fish. These studies showed that elevated levels of mercury in women’s hair and their infants’ umbilical cord blood correlated with adverse neurological effects when the children reached ages 6 or 7.
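The methylmercury example turns a biomarker concentration into an estimate of dose. A minimal sketch of that inference, assuming a simple one-compartment steady-state model with hypothetical parameter values (EPA's actual derivation used more sophisticated dose reconstruction), looks like this:

blood_conc_ug_per_l = 5.8    # measured cord-blood level (hypothetical)
clearance_l_per_day = 10.0   # whole-body clearance (assumed)
fraction_absorbed = 0.95     # oral bioavailability (assumed)
body_weight_kg = 65.0

# At steady state, absorbed intake balances elimination (C_ss * CL), so
# the implied daily dose per kilogram of body weight is:
intake_ug_per_kg_day = (blood_conc_ug_per_l * clearance_l_per_day
                        / (fraction_absorbed * body_weight_kg))
print(f"implied intake: about {intake_ug_per_kg_day:.2f} ug/kg-day")

Run in reverse—from a reference dose to a blood concentration—the same relationship underlies the "biomonitoring equivalents" concept discussed later in this report.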
In its fiscal year 2008 Performance and Accountability Report, EPA used results from biomonitoring studies to track its performance in reducing blood levels of lead, mercury, certain pesticides, and polychlorinated biphenyls. Furthermore, EPA used biomonitoring data in evaluating the safety of two pesticides: triclosan in 2008 and chlorpyrifos in 2006. Finally, EPA officials told us that the agency may adopt the use of biomonitoring data as a tool to evaluate the long-term outcomes of risk mitigation efforts. EPA has several biomonitoring research projects under way, but the agency has no system in place to track progress or assess the resources needed specifically for biomonitoring research. EPA also does not separately track spending or staff time devoted to biomonitoring research. Instead, it places individual biomonitoring research projects within its larger Human Health Research Strategy. While this strategy includes some goals relevant to biomonitoring, EPA has not systematically identified and prioritized the data gaps that prevent it from using biomonitoring data. Nor has it systematically identified the resources needed to reach biomonitoring research goals or identified which chemicals most need additional biomonitoring-related research. EPA intends to revise its Human Health Research Strategy for 2009, and it said that it may include a greater focus on how the agency can interpret biomonitoring data and use them in risk assessments. Also, EPA lacks a coordinated national strategy for the many agencies and other groups involved in biomonitoring research, which could impair its ability to address the significant data gaps in this field of research. In addition to the CDC and EPA, several other federal agencies have been involved in biomonitoring research, including the Agency for Toxic Substances and Disease Registry, the Occupational Safety and Health Administration, and entities within the National Institutes of Health (NIH). Several states have also initiated biomonitoring programs to examine state and local health concerns, such as arsenic in local water supplies or mercury exposure among populations with high fish consumption. Furthermore, some chemical companies have for decades monitored their workforce for chemical exposure, and chemical industry associations have funded biomonitoring research. Finally, some environmental organizations have conducted biomonitoring studies of small groups of adults and children, including one study on infants. A national biomonitoring research plan could help better coordinate research and link data needs with collection efforts. EPA has suggested chemicals for future inclusion in the CDC’s National Biomonitoring Program, but has not gone any further toward formulating an overall strategy to address data gaps and ensure the progress of biomonitoring research. We have previously noted that to begin addressing the need for biomonitoring research, federal agencies will need to strategically coordinate their efforts and leverage their limited resources. Similarly, the National Academy of Sciences found that the lack of a coordinated research strategy allowed widespread exposures to go undetected, including exposures to PFOA and flame retardants known as polybrominated diphenyl ethers. The academy noted that a coordinated research strategy would require input from various agencies involved in biomonitoring and supporting disciplines.
In addition to EPA, these agencies include the CDC, NIH, the Food and Drug Administration, and the U.S. Department of Agriculture. Such coordination could strengthen efforts to identify and possibly regulate the sources of the exposure detected by biomonitoring, since the most common sources—that is, food, environmental contamination, and consumer products—are under the jurisdiction of different agencies. EPA has taken some promising steps to address data gaps relevant to biomonitoring, which we discuss in the remaining paragraphs of this section. For example, EPA has funded research to address certain links between chemical exposure, biomonitoring measurements, and health effects. The agency worked with NIH to establish and fund several Centers for Children’s Environmental Health and Disease Prevention Research (Children’s Centers). One of these centers is conducting a large-scale study exploring the environmental and genetic causes of autism, and plans to use various types of biomonitoring data collected from parents and children to quantify chemical exposures and examine whether samples from children with autism contain different biomarkers than samples from children without autism. EPA’s Children’s Health Protection Advisory Committee stated that EPA’s Children’s Centers program represents an excellent investment that provides both short- and long-term benefits to children’s health. EPA also awards grants intended to advance the knowledge of children’s exposures to pesticides through the use of biomarkers, and of the potential adverse effects of these exposures. The grants went to projects that, among other things, investigated the development of less invasive biomarkers for common pesticides, related biomarkers to indices of early neurological development, and analyzed the association between pesticide levels in environmental samples and pesticide body burdens. According to EPA, this research has helped the agency to better assess children’s exposure to chemicals and assess the risk of certain pesticides. Furthermore, EPA pursues internal research to develop and analyze biomonitoring data. For example, EPA has studied the presence of the herbicide 2,4-D in 135 homes with preschool-age children by analyzing soil, outdoor air, indoor air, carpet dust, food, urine, and samples taken from subjects’ hands. The study shed important light on how best to collect urine samples that reflect an external dose of the herbicide. It is also helping EPA researchers develop models that simulate how the body processes specific chemicals, which will help them understand the links between biomonitoring data and initial sources and routes of chemical exposure. In another area of research, EPA has partially implemented a National Academy of Sciences recommendation by collecting biomonitoring data during some animal toxicology studies. Collecting this information allows EPA to relate animal biomonitoring data to animal health effects, which is likely to be useful in interpreting human biomonitoring data. However, EPA does not routinely collect this information. Finally, EPA has collaborated with other agencies and industry on projects that may improve the agency’s ability to interpret and use biomonitoring data. For example, EPA collaborated with other federal agencies in the development of the National Children’s Study, a long-term study of environmental and genetic effects on children’s health, which is slated to begin collecting data later in 2009.
The study proposes to examine the effects of environmental influences on the health and development of approximately 100,000 children across the country, following them from before birth until age 21. Several researchers have noted that since the study is slated to collect biomonitoring samples and data on environmental exposures in the home while tracking children’s health status, the study would provide a unique opportunity to address data gaps and begin linking external exposure sources, biomonitoring measurements, and health outcomes. However, the study depends upon a sustained funding commitment, which it has not yet received, and the National Academy of Sciences has noted concerns regarding funding uncertainty. In a separate effort, EPA cosponsored a private consultant’s pilot project to create “biomonitoring equivalents” for four chemicals. These are biomonitoring measurements intended to have a well-understood relationship to existing measures of exposure, such as oral reference doses. This relatively new concept could help researchers better interpret biomonitoring results for these and other chemicals and could highlight when additional research and analysis are needed. EPA has other programs that it uses to gather additional chemical test data or to gather production and use information from companies, but these programs are not designed to interpret biomonitoring data. We discuss some of these programs in more detail in appendix II. EPA’s authorities under TSCA to obtain biomonitoring data are generally untested. While our analysis of the relevant TSCA provisions and of recent administrative action suggests that EPA may be able to craft a strategy for obtaining biomonitoring data under some provisions of TSCA, EPA has not determined the full extent of its authority or the full extent of chemical companies’ responsibilities with respect to biomonitoring. Several provisions of TSCA address data development and reporting. These relevant provisions are shown in table 1 and detailed in the text that follows. Under section 4 of TSCA, EPA can require chemical companies to test chemicals for their effects on health or the environment, but this process is difficult, expensive, and time-consuming. To require testing, EPA must determine that there are insufficient data to reasonably determine or predict the effects of the chemical on health or the environment, and that testing is necessary to develop such data. The agency must also make one of two additional findings. The first is that a chemical may present an unreasonable risk of injury to human health or the environment. The second is that a chemical is or will be produced in substantial quantities, and that either (1) there is or may be significant or substantial human exposure to the chemical or (2) the chemical enters or may reasonably be anticipated to enter the environment in substantial quantities. EPA has said that it could theoretically require the development of biomonitoring data under section 4 of TSCA, but the agency’s authority to do so has not yet been tested. Generally, section 4 allows EPA, if it makes the necessary findings, to promulgate a “test rule” requiring a company to “develop data with respect to the health and environmental effects for which there is an insufficiency of data.” Biomonitoring data indicate only the presence of a chemical in a person’s body, and not its impact on the person’s health.
However, EPA told us that biomonitoring data may in some cases demonstrate chemical characteristics—such as persistence, uptake, or fate—that could be relevant to the health and environmental effects of the chemical. Section 4 lists several chemical characteristics as items for which EPA can prescribe standards for development under a test rule, explicitly including persistence but also including any other characteristic that may present an unreasonable risk. Although biomonitoring may not be the only way to demonstrate persistence, uptake, or fate, section 4 also authorizes EPA to prescribe certain methodologies for conducting tests under a test rule, including but not limited to epidemiologic studies, serial or hierarchical tests, in vitro tests, and whole-animal tests. Biomonitoring is not a listed methodology, but EPA stated it could publish a standard test guideline for using biomonitoring as a methodology for obtaining data on health effects and chemical characteristics, or it could include biomonitoring in a section 4 test rule where warranted. Sections 5(a) and 5(b) of TSCA may be of limited use to EPA in obtaining biomonitoring data from chemical companies. Specifically, section 5(a) requires chemical companies to notify EPA at least 90 days before beginning to manufacture a new chemical or before manufacturing or processing a chemical for a use that EPA has determined by rule is a significant new use. The notice provided by the company must include “any test data in the possession or control of the person giving such notice which are related to the effect of any manufacture, processing, distribution in commerce, use, or disposal of such substance or any article containing such substance on health or the environment,” as well as “a description of any other data concerning the environmental and health effects of such substance, insofar as known to the person making the notice or insofar as reasonably ascertainable.” As we have previously described, EPA told us that data concerning “environmental and health effects” could include biomonitoring data. While a notice under section 5 may include test data required to be developed under a section 4 test rule, section 5(b) does not provide independent authority for EPA to require the development of any new data. Thus, section 5(b) can only be used by EPA to obtain data that the chemical companies have on hand. EPA has noted that companies are particularly unlikely to have biomonitoring data for new chemicals on hand because there is little opportunity for exposure to the chemical prior to full-scale manufacture. Under certain circumstances, EPA may be able to indirectly require the development of new test data using the leverage that it has under section 5(e) to limit the manufacture of chemicals, although the agency has never attempted to do so. Under section 5(e), when a company proposes to begin manufacturing a new chemical or to introduce an existing chemical for a significant new use, EPA may determine (1) that the available information is not sufficient to permit a reasoned evaluation of the health and environmental effects of that chemical and (2) that in the absence of such information, the manufacture of the chemical may meet certain risk or exposure thresholds. If the agency does so, the Administrator can issue a proposed order limiting or prohibiting the manufacture of the chemical. If a chemical company objects to such an order, the matter becomes one for the courts. If a court agrees with the Administrator, it will issue an injunction to the chemical company to limit or prohibit manufacture of the chemical.
If and when the chemical company submits data to EPA sufficient for the Administrator to make a reasoned determination about the chemical’s health and environmental effects, which may include test data, the injunction can be dissolved. Thus, an injunction would provide an incentive for the chemical company to develop testing data. Also under this section, EPA sometimes issues a consent order that does not prohibit the manufacture of the chemical, but subjects it to certain conditions, including additional testing. EPA typically uses such consent orders to require testing of toxic effects and a chemical’s fate in the environment. While EPA may not be explicitly authorized to require the development of such test data under this section, chemical companies have an incentive to provide the requested test data to avoid a more sweeping ban on a chemical’s manufacture. EPA has not indicated whether it will use section 5(e) consent orders to require companies to submit biomonitoring data. Section 8(d) of TSCA authorizes EPA to promulgate rules requiring companies to submit lists and copies of existing health and safety studies, which TSCA defines broadly to include: “. . . any study of any effect of a chemical substance or mixture on health or the environment or on both, including underlying data and epidemiological studies, studies of occupational exposure to a chemical substance or mixture, toxicological, clinical, and ecological studies of a chemical substance or mixture, and any test performed pursuant to this chapter.” While the agency has no formal position on whether biomonitoring data can be obtained under section 8(d), an EPA official stated that this provision authorizes the agency to promulgate a rule requiring a company to submit existing biomonitoring data. EPA explained that the presence of a chemical in blood or tissues of workers could indicate occupational exposure to the chemical, qualifying such information as reportable under this section. Section 8(e) has in recent years garnered more attention than any other section of TSCA as a potential means of collecting biomonitoring information, but this potential remains unclear. Section 8(e) requires chemical companies, on their own initiative, to report to EPA any information they have obtained that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment. “Substantial risk” is currently defined by EPA in nonbinding guidance as “a risk of considerable concern because of (a) the seriousness of the effect, and (b) the fact or probability of its occurrence.” EPA asserts that biomonitoring data are reportable as demonstrating a substantial risk if the chemical in question is known to have serious toxic effects and the biomonitoring data indicate a level of exposure previously unknown to EPA. However, this is the extent of EPA’s current guidance on the subject. Industry has asked for expanded guidance covering specific criteria for when biomonitoring data are reportable, specific guidance on the reportability of occupational biomonitoring results versus biomonitoring results from the general population, and factors that would render biomonitoring data unreportable. EPA has not yet revised its guidance in response to these industry requests. The difficulty of enforcing section 8(e) is highlighted by the history leading up to an EPA action against the chemical company E. I. du Pont de Nemours and Company (DuPont). Until 2000, DuPont used the chemical PFOA to make Teflon® at a plant in West Virginia. In 1981, DuPont took blood samples of several female workers and two babies born to those workers.
The levels of PFOA in the blood from the babies showed that a measurable amount of PFOA crossed the placental barrier. DuPont moved its female employees away from work in areas of the plant where PFOA was used. However, after conducting additional animal testing, DuPont concluded that the exposure levels associated with workers posed no reproductive risks and moved the women back into these areas. DuPont did not report the human blood sampling results to EPA, even when EPA requested all toxicology data associated with PFOA. DuPont also did not report to EPA the results of blood testing of 12 people living near the plant, 11 of whom had never worked in the plant and had elevated levels of PFOA in their blood. EPA initially received the 1981 blood sampling information from counsel for a class action lawsuit by citizens living near the West Virginia facility. DuPont argued that none of the blood sampling information was reportable under TSCA because the mere presence of PFOA in workers’ and community members’ blood did not itself support the conclusion that exposure to PFOA posed any health risks. EPA subsequently filed two actions against DuPont for violating section 8(e) of TSCA by failing to report the biomonitoring data, among other claims. In December 2005, EPA and DuPont settled both of these actions. DuPont did not admit that it should have reported the biomonitoring data, but it agreed to a settlement totaling $16.5 million. Furthermore, EPA used the biomonitoring data it received in a subsequent risk assessment, which was reviewed by the Science Advisory Board, together with other information that was available at that time. Upon review, the board suggested that the PFOA cancer data are consistent with the category of “likely to be carcinogenic to humans” described in EPA’s Guidelines for Carcinogen Risk Assessment. As a result of this finding and other concerns associated with PFOA and PFOA-related chemicals, DuPont finally agreed to phase out the use of PFOA by 2015, in tandem with seven other companies. Thus, while EPA ultimately succeeded in using TSCA to remove PFOA from the market, it encountered great difficulty in doing so—that is, even when biomonitoring data, coupled with animal toxicity studies, arguably helped point out serious risks to human health associated with PFOA, DuPont’s position was that section 8(e) did not require it to submit the biomonitoring data it had collected on PFOA. DuPont did not provide the biomonitoring data on its own initiative, and EPA may never have received these data if they had not been originally provided by a third party. Without the biomonitoring information, EPA may never have completed the risk assessment that led to the phaseout of PFOA. Biomonitoring provides new insight into the general population’s exposure to chemicals. However, scientists have linked biomonitoring data with human health effects for only a handful of chemicals to date. As the volume of biomonitoring data continues to increase, EPA will need to strategically plan future research that links environmental contamination, biomonitoring measurements of exposure, and adverse health effects. The nation thus far has no long-term strategy to coordinate the biomonitoring research that EPA and other stakeholders perform. Nor does the agency gather reliable information on the amount of resources needed for addressing data gaps and incorporating biomonitoring research results into its chemical risk assessment and management programs.
In addition, while federal agencies and other stakeholders could pursue various methods to address biomonitoring data gaps, such as routinely collecting biomonitoring data in animal toxicology studies, coordination and agreements among EPA and the various other entities are needed to systematically pursue these options. A national biomonitoring research strategy could enhance the usefulness of biomonitoring data by identifying linkages between data needs and collection efforts and providing a framework for coordinating research efforts and leveraging stakeholder expertise. One of the first steps in interpreting biomonitoring data is to better understand how chemicals impact human health, including how we might be exposed to them and what levels of exposure pose a risk. However, information is sparse on how people are exposed to commercial chemicals and on the potential health risks for the general population. We have previously noted that EPA faces challenges in using TSCA to obtain the information needed to assess the risks of chemicals. These challenges also affect EPA’s ability to require that chemical companies provide biomonitoring data. Such data can provide additional insights on exposure levels and susceptible populations. However, EPA has not determined the extent of its authority to require a company to develop and submit biomonitoring data that may aid EPA in assessing chemicals’ risks, and EPA has not developed regulations or formal guidance concerning the conditions under which biomonitoring data might be required. While EPA has attempted to get additional information on chemical risks from voluntary programs, such programs have had mixed results and are unlikely to be a complete substitute for a more robust chemical regulatory program. To ensure that EPA effectively obtains the information needed to integrate biomonitoring into its chemical risk assessment and management programs, coordinates with other federal agencies, and leverages available resources for the creation and interpretation of biomonitoring research, we recommend that the EPA Administrator take the following two actions: Develop a comprehensive biomonitoring research strategy that includes the data EPA needs to incorporate biomonitoring information into chemical risk assessment and management activities, identifies federal partners and efforts that may address these needs, and quantifies the time frames and resources needed to implement the strategy. Such a strategy should identify and prioritize the chemicals for which biomonitoring data or research is needed, categorize existing biomonitoring data, identify limitations in existing data approaches, identify and prioritize data gaps, and estimate the time and resources needed to implement this strategy. Assess EPA’s authority to establish an interagency task force that would coordinate federal biomonitoring research efforts across agencies and leverage available resources, and establish such a task force if it determines that it has the authority. If EPA determines that further authority is necessary, it should request that the Executive Office of the President establish an interagency task force (or other mechanism as deemed appropriate) to coordinate such efforts. In addition, to ensure that EPA has sufficient information to assess chemical risks, the EPA Administrator should take the following action: Determine the extent of EPA’s legal authority to require companies to develop and submit biomonitoring data under TSCA.
EPA should request additional authority from the Congress if it determines that such authority is necessary. If EPA determines that no further authority is necessary, it should develop formal written policies explaining the circumstances under which companies are required to submit biomonitoring data. We provided a draft of this report to the EPA Administrator for review and comment. EPA generally agreed with our first two recommendations, and did not disagree with the third, but it provided substantive comments on its implementation. We present EPA’s written comments in appendix III. EPA also provided technical comments, which we incorporated into the report as appropriate. The following paragraphs summarize EPA’s comments and our responses. While EPA agreed that it should develop a comprehensive biomonitoring research strategy, the agency noted that its research program is addressing important questions relevant to interpreting biomonitoring data. We agree that EPA is conducting important biomonitoring-related research. However, as noted in our report, while EPA has biomonitoring research projects under way, it has no system in place to track overall progress or assess the resources needed specifically for biomonitoring research. EPA also agreed that an interagency task force is needed to coordinate federal biomonitoring research, and said that such a task force should be developed under the auspices of the Office of Science and Technology Policy. We do not disagree with this approach. EPA said that our report underemphasized the importance of considering assumptions about human behavior and the need to collect biomonitoring data for young children. We agree that EPA needs to consider human behavior and other factors that impact human health risk, and we note in the report that EPA uses assumptions about human behavior on the basis of observational studies—such as the time spent outdoors or, for children, the amount of time spent on the floor—to better estimate an individual’s true exposure. We also note that current biomonitoring efforts provide little information on children and that children may be more vulnerable to certain chemicals than adults because (1) their biological functions are still developing and (2) their size and behavior may expose them to proportionately higher doses. In our recommendations, we indicate that EPA should prioritize data gaps, and we believe that the lack of data on children should be a priority. Regarding our recommendation that EPA should determine the extent of its legal authority to obtain biomonitoring data, EPA commented that a case-by-case explanation of its authority might be more useful than a global assessment of that authority. However, we continue to believe that an analysis of EPA’s legal authority to obtain biomonitoring data is critical. Fuller consideration of EPA’s authority is a necessary precondition of the two other recommendations that we make in this report, with which the agency agreed. That is, EPA would be best equipped to formulate a biomonitoring research strategy and contribute to an interagency task force if it were more fully aware of what data it can obtain. Furthermore, while we understand that EPA can clarify its authority to obtain biomonitoring data in individual regulatory actions, few such opportunities have arisen with regard to biomonitoring so far, and EPA provided no information suggesting it will have more opportunities to consider the issue in the near future.
In addition, companies must sometimes submit chemical information independent of an EPA rule requiring submission of the data. For example, under section 8(e), chemical companies must submit certain adverse health and safety information at their own initiative. Such situations do not provide EPA with an initial opportunity to clarify its authority to obtain biomonitoring data. We continue to believe that formal written guidance would be useful in these circumstances. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to other appropriate congressional committees, the EPA Administrator, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. To determine the extent to which the Environmental Protection Agency (EPA) incorporates data from human biomonitoring studies into its assessments of risks from chemicals, we reviewed relevant laws, agency policies and guidance, and our prior reports relevant to EPA’s assessment of chemicals and to EPA’s activities related to children’s health issues. In addition, we reviewed EPA’s prior and planned uses of these data, academic publications, National Academy of Sciences reports, and government and industry-sponsored conference proceedings to gain an understanding of the current state of biomonitoring research. We supplemented this information with that obtained from interviews with EPA officials working on biomonitoring and risk assessment issues in the Office of Research and Development, the Office of Children’s Health Protection, the Office of Water, the Office of Air and Radiation, the Office of Pesticide Programs, and the Office of Pollution Prevention and Toxics. To review how EPA addresses challenges that limit the usefulness of biomonitoring data for risk assessment and management activities, we collected documentation on EPA’s biomonitoring-related research efforts, including EPA’s Human Health Research Strategy, and financial and program data for grant programs that have funded biomonitoring research. In addition, we interviewed stakeholders—such as the Centers for Disease Control and Prevention (CDC) and the Children’s Health Protection Advisory Committee as well as the American Chemistry Council, the Environmental Defense Fund, and the Environmental Working Group—to gauge EPA’s involvement with a variety of stakeholders working to further biomonitoring research. To determine the extent to which EPA has the authority to obtain biomonitoring data from the chemical industry, we reviewed relevant legislation and prior legal actions, and interviewed officials from EPA’s Office of General Counsel to understand EPA’s authorities for collecting biomonitoring data from companies. We conducted this performance audit from October 2007 to April 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. 
We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. EPA has programs intended to increase its knowledge of the toxic effects and levels of human exposure to certain chemicals, such as the agency’s Inventory Update Reporting (IUR) rule and voluntary programs like the Voluntary Children’s Chemical Evaluation Program (VCCEP) and the High Production Volume Challenge Program (HPV Challenge Program). However, these programs have significant limitations and no clear link to biomonitoring. For example, EPA’s IUR rule is intended to gather more information on how existing chemicals are used and how they come into contact with people. However, the agency does not collect biomonitoring data as part of this program. Furthermore, in 2003 and 2005, EPA amended the rule in ways that may reduce the amount of certain information that companies report about chemicals they produce. Although the 2003 amendments added inorganic chemicals to the substances for which companies were required to report and required other potentially useful information, the agency also raised the reporting threshold. This threshold is the level of production above which a company must provide data on a chemical to EPA. The agency increased the threshold from 10,000 pounds at a single site to 25,000 pounds, which may reduce the number of chemicals for which companies provide production data to EPA. In 2005, the agency also reduced the frequency with which chemical companies must report their production volume of chemicals. Before 2005, companies were required to report the production volume every 4 years for a chemical that met the reporting threshold in the 4th year. In 2005, the agency changed the reporting requirement so that companies have to report every 5 years, thus reducing the availability of production volume data. As with the earlier rule, companies are only required to report data for a single year, not for any of the years prior to the reporting year. However, EPA officials are considering ways to collect additional production volume information, such as requiring companies to report production volume for each of the 5 years whenever a company meets the reporting requirement of 25,000 pounds of production for the 5th year. EPA did require chemical companies to report some new information when it made these changes in 2003. Companies must now supply additional information relating to the manufacture of the reported chemicals, such as the number of workers reasonably likely to be exposed to the chemical, and relating to the physical form and maximum concentration of the chemical. In addition, for those chemicals produced in quantities of 300,000 pounds or more at one site, companies must now report “readily obtainable” information on how the chemicals are processed or used in industrial, commercial, or consumer settings, including whether such chemicals will be found in or on products intended for children. However, the definition of “readily obtainable” excludes information that requires extensive file searches or surveys of the manufacturers that purchase the chemicals. Furthermore, an industry representative told us that it is often difficult for chemical companies to determine whether a chemical they produce will eventually be used in a product intended for children, since the companies do not directly sell children’s products and may not know how manufacturers will use their product.
Therefore, it is unclear whether EPA will receive significant information as a result of this new reporting requirement. EPA has also attempted to collect data on toxicity and human exposure using voluntary programs. For example, in 2000 the agency launched VCCEP to ensure that it had adequate information to assess the potential risks to children posed by certain chemicals. EPA asked companies that produce or import 23 specific chemicals to volunteer to “sponsor” their chemical by making certain data on the chemical’s toxicity available to the public. The companies volunteered to sponsor 20 of the 23 chemicals. However, VCCEP has proceeded slowly and has not provided EPA with the data needed to interpret biomonitoring research. Of the 23 VCCEP chemicals, EPA has received what it deems to be sufficient data for only 6 chemicals. In addition, it has asked for additional data that some of the sponsors declined to provide. For example, one sponsor declined to conduct additional reproductive toxicity testing for 2 chemicals, which EPA needed to use biomonitoring data in exposure assessments. Several environmental and children’s health groups, including EPA’s Children’s Health Protection Advisory Committee, have stated that VCCEP has not met its goal of ensuring that there are adequate publicly available data to assess children’s health risks from exposure to toxic commercial chemicals. Specifically, the groups have noted the lack of risk-based prioritization for collecting chemical data; the lack of specific guidance and criteria for the sponsor-developed studies and data; inadequate involvement of stakeholders; and problems with accountability, credibility, and data transparency. In 2008, EPA requested public comments on the VCCEP program and held a listening session. Nonetheless, EPA is still considering what further actions to take and has not set a goal for when it will complete its review of the program. In another voluntary program, begun in 1998, EPA attempted to collect certain information on the health and environmental effects of high production volume (HPV) chemicals, which are those manufactured or imported in amounts of at least 1 million pounds per year. Approximately 3,000 chemicals meet this criterion. Before the start of the program, EPA found that data on basic toxicity were available for only 57 percent of these chemicals, and that the full set of six basic chemical safety tests (i.e., acute toxicity, chronic toxicity, reproductive toxicity, mutagenicity, ecotoxicity, and environmental fate) were available for only 7 percent. This information is necessary for EPA to conduct even a preliminary screening-level assessment of the hazards and risks of these chemicals, and for it to interpret any relevant biomonitoring data. Through the HPV Challenge Program, EPA asked chemical manufacturers and importers to voluntarily sponsor chemicals by submitting information on the chemicals’ physical properties, environmental fate, and health and environmental effects. The agency also asked companies to propose a strategy to fill data gaps. However, the HPV Challenge Program has serious limitations. First, EPA has been slow to evaluate chemical risks. More than a decade after starting the program, the agency has completed “risk-based prioritizations” for only 151 of the more than 3,000 HPV chemicals. Risk-based prioritizations are preliminary evaluations that summarize basic hazard and exposure information known to EPA.
The agency intends to use these evaluations to assign priorities for future action on the basis of the risks presented by these chemicals. Second, data on almost 300 HPV chemicals are lacking because they were not sponsored by any chemical company—these unsponsored chemicals are referred to as “orphans.” The exact number of HPV orphan chemicals changes over time, with changes in sponsorship and production. EPA can require companies that manufacture or process orphan chemicals to conduct tests, but it has done so for only 16 of these almost 300 chemicals. This is largely because it is difficult to make certain findings regarding hazard or exposure, which section 4 of TSCA requires before EPA may issue a “test rule.” However, EPA did issue a second proposed HPV test rule in July 2008 for 19 additional chemicals and anticipates proposing a third test rule in 2009 for approximately 30 chemicals. Third, the HPV Challenge Program does not include inorganic chemicals, or the approximately 500 emerging chemicals that reached the HPV production threshold after 1994. EPA recently introduced a proposal for an inorganic HPV program, but officials did not provide us with a date for when they expect to launch this program. Finally, EPA allowed chemical companies to group the chemicals they sponsored into categories and to apply testing data from only a handful of the chemicals to the entire category. Some environmental advocacy organizations have claimed that such categories will not adequately identify the hazards of all the chemicals in the category. Despite the limitations of the available data on toxicity and exposure, EPA plans by 2012 to conduct a basic screening-level assessment of the potential risks of more than 6,200 chemicals and to prioritize these chemicals for possible future action as the first step in its new Chemical Assessment and Management Program. EPA intends to apply the information on chemical hazards obtained from the HPV Challenge Program, among other programs, and extend its efforts to cover moderate production volume chemicals—those produced or imported in quantities of more than 25,000 and less than 1 million pounds per year. EPA plans to use any available biomonitoring data to help prioritize the chemicals for further review but does not have a formal plan for doing so. Although EPA has occasionally used biomonitoring in connection with these voluntary programs, it is not attempting to use these programs as a means to make biomonitoring data more useful. To do so, the agency would not only have to collect data more effectively from companies, but also collect the specific kinds of data that would allow it to understand the human health implications of biomonitoring data. In addition to the contact named above, Ed Kratzer, Assistant Director; Elizabeth Beardsley; David Bennett; Antoinette Capaccio; Crystal Huggins; Karen Keegan; Ben Shouse; and Peter Singer also made important contributions to this report. | Biomonitoring, which measures chemicals in people's tissues or body fluids, has shown that the U.S. population is widely exposed to chemicals used in everyday products. Some of these have the potential to cause cancer or birth defects. Moreover, children may be more vulnerable to harm from these chemicals than adults. The Environmental Protection Agency (EPA) is authorized under the Toxic Substances Control Act (TSCA) to control chemicals that pose unreasonable health risks.
GAO was asked to review the (1) extent to which EPA incorporates information from biomonitoring studies into its assessments of chemicals, (2) steps that EPA has taken to improve the usefulness of biomonitoring data, and (3) extent to which EPA has the authority under TSCA to require chemical companies to develop and submit biomonitoring data to EPA. EPA has made limited use of biomonitoring data in its assessments of risks posed by commercial chemicals. One reason is that biomonitoring data relevant to the entire U.S. population exist for only 148 of the over 6,000 chemicals EPA considers the most likely sources of human or environmental exposure. In addition, biomonitoring data alone indicate only that a person was somehow exposed to a chemical, not the source of the exposure or its effect on the person's health. For most of the chemicals studied under current biomonitoring programs, more data on chemical effects are needed to understand if the levels measured in people pose a health concern, but EPA's ability to require chemical companies to develop such data is limited. Thus, the agency has made few changes to its chemical risk assessments or safeguards in response to the recent increase in available biomonitoring data. While EPA has initiated several research programs to make biomonitoring more useful to its risk assessment process, it has not developed a comprehensive strategy for this research that takes into account its own research efforts and those of the multiple federal agencies and other organizations involved in biomonitoring research. EPA does have several important biomonitoring research efforts, including research into the relationships between exposure to harmful chemicals, the resulting concentration of those chemicals in human tissue, and the corresponding health effects. However, without a plan to coordinate its research efforts, EPA has no means to track progress or assess the resources needed specifically for biomonitoring research. Furthermore, according to the National Academy of Sciences, the lack of a coordinated national research strategy has allowed widespread chemical exposures to go undetected, such as exposures to flame retardants. The development of such a strategy could enhance biomonitoring research and link data needs with collection efforts. EPA has not determined the extent of its authority to obtain biomonitoring data under TSCA, and this authority is untested and may be limited. The TSCA provision that authorizes EPA to require companies to develop data focuses on the health and environmental effects of chemicals. Since biomonitoring data alone may not demonstrate the effects of a chemical, EPA may face difficulty in using this authority to obtain biomonitoring data. It may be easier for EPA to obtain biomonitoring data under other TSCA provisions, which allow EPA to collect existing information on chemicals. For example, TSCA obligates chemical companies to report information that reasonably supports the conclusion that a chemical presents a substantial risk of injury to health or the environment. EPA asserts that biomonitoring data are reportable if the chemical in question is known to have serious toxic effects and biomonitoring information indicates a level of exposure previously unknown to EPA. EPA took action against a chemical company under this authority in 2004. However, the action was settled without an admission of liability by the company, so EPA's authority to obtain biomonitoring data remains untested.
|
You are an expert at summarizing long articles. Proceed to summarize the following text:
The public faces the risk that critical services could be severely disrupted by the Year 2000 computing crisis. Financial transactions could be delayed, airline flights grounded, and national defense affected. The many interdependencies that exist among the levels of governments and within key economic sectors of our nation could cause a single failure to have wide-ranging repercussions. While managers in the government and the private sector are acting to mitigate these risks, a significant amount of work remains. The federal government is extremely vulnerable to the Year 2000 issue due to its widespread dependence on computer systems to process financial transactions, deliver vital public services, and carry out its operations. This challenge is made more difficult by the age and poor documentation of many of the government's existing systems and its lackluster track record in modernizing systems to deliver expected improvements and meet promised deadlines. Year 2000-related problems have already occurred. For example, an automated Defense Logistics Agency system erroneously deactivated 90,000 inventoried items as the result of an incorrect date calculation. According to the agency, if the problem had not been corrected (which took 400 work hours), the impact would have seriously hampered its mission to deliver materiel in a timely manner. Our reviews of federal agency Year 2000 programs have found uneven progress, and our reports contain numerous recommendations, which the agencies have almost universally agreed to implement. Among them are the need to establish priorities, solidify data exchange agreements, and develop contingency plans. One of the largest, and largely unknown, risks relates to the global nature of the problem. With the advent of electronic communication and international commerce, the United States and the rest of the world have become critically dependent on computers. However, with this electronic dependence and massive exchanging of data comes increasing risk that uncorrected Year 2000 problems in other countries will adversely affect the United States. And there are indications of Year 2000 readiness problems internationally. In September 1997, the Gartner Group, a private research firm acknowledged for its expertise in Year 2000 computing issues, surveyed 2,400 companies in 17 countries and concluded that "[t]hirty percent of all companies have not started dealing with the year 2000 problem." As 2000 approaches, the scope of the risks that the century change could bring has become clearer, and the federal government's actions have intensified. This past February, an executive order was issued establishing the President's Council on Year 2000 Conversion. The Council Chair is to oversee federal agency Year 2000 efforts as well as be the spokesman in national and international forums, coordinate with state and local governments, promote appropriate federal roles with respect to private-sector activities, and report to the President on a quarterly basis. As we testified last month, there are a number of actions we believe the Council must take to avert this crisis. We plan to issue a report later this month detailing our specific recommendations. The following summarizes a few of the key areas in which we will be recommending action. Because departments and agencies have taken longer than recommended to assess the readiness of their systems, it is unlikely that they will be able to renovate and fully test all mission-critical systems by January 1, 2000.
Consequently, setting priorities is essential, with the focus being on systems most critical to our health and safety, financial well-being, national security, or the economy. Agencies must start business continuity and contingency planning now to safeguard their ability to deliver a minimum acceptable level of services in the event of Year 2000-induced failures. Last month, we issued an exposure draft of a guide providing information on business continuity and contingency planning issues common to most large enterprises. Agencies developing such plans only for systems currently behind schedule, however, are not addressing the need to ensure business continuity in the event of unforeseen failures. Further, such plans should not be limited to the risks posed by the Year 2000-induced failures of internal information systems, but must include the potential Year 2000 failures of others, including business partners and infrastructure service providers. The Office of Management and Budget's (OMB) assessment of the current status of federal Year 2000 progress is predominantly based on agency reports that have not been consistently verified or independently reviewed. Without such independent reviews, OMB and the President's Council on Year 2000 Conversion have little assurance that they are receiving accurate information. Accordingly, agencies must have independent verification strategies involving inspectors general or other independent organizations. As a nation, we do not know where we stand with regard to Year 2000 risks and readiness. No nationwide assessment—including the private and public sectors—has been undertaken to gauge this. In partnership with the private sector and state and local governments, the President's Council could orchestrate such an assessment. Ensuring that information systems are made Year 2000 compliant is an enormous, difficult, and time-consuming challenge for a large organization such as the Department of the Interior. Interior's systems support a wide range of programs; unless they can function into the next century, the department is at risk of being unable to effectively or efficiently carry out its critical missions. As the nation's principal conservation agency, Interior has responsibility for managing most of our nationally owned public lands and natural resources, protecting our fish and wildlife, and preserving the environmental and cultural values of our national parks and historic places. The department's core business processes could fail—in whole or in part—if supporting information systems are not made Year 2000 compliant in time. These include systems that account for and disburse mineral royalties of about $300 million each year; support the management of the nation's lands and mineral resources; account for and maintain records on over $2.5 billion of American Indian trust fund assets; and detect and analyze ground motion and provide early warnings of earthquakes. A detailed example of this kind of risk can be seen in recent work we performed for the House Committee on Appropriations, Subcommittee on Interior and Related Agencies, where we concluded that recent and potential future delays in the Bureau of Land Management's (BLM) Automated Land and Mineral Record System (ALMRS) introduce the risk that BLM will lose information systems support for some core business processes. Two systems that ALMRS is scheduled to replace, the Case Recordation System and the Mining Claim Recordation System, are currently not Year 2000 compliant.
BLM uses these two systems to create and manage land and mineral case files. They capture and provide information on case type, customer, authorizations, and legal descriptions. Without these systems, BLM cannot create and record new cases, such as mining claims, or update case information. Delays in implementing ALMRS introduce the risk that BLM will be forced to continue using these two systems beyond 2000. To mitigate this risk, BLM has begun planning to ensure that these two systems can run in 2000 and beyond, if necessary. BLM has not yet, however, completed its assessment to determine what specific actions are needed to accomplish this, nor has it developed a contingency plan to ensure the continuity of core business processes in the event that ALMRS is not fully deployed by 2000. In a draft report to be released soon, we are recommending that BLM assess the systems to be replaced by ALMRS to determine what actions are needed to ensure their continued use after January 1, 2000, and develop a contingency plan should ALMRS not be fully and successfully deployed in time. Interior officials have stated that they recognize the importance of ensuring that their systems are Year 2000 compliant. The Secretary has said that identifying and correcting Year 2000 computer problems is a priority, and the former Chief Information Officer called this challenge one of the most serious operational and administrative problems the department has ever faced. In assessing the magnitude of the problem, the department's bureaus and offices identified 95 mission-critical systems, with a total of about 18 million lines of software code, all of which must be examined. Interior estimates that correcting these 95 systems will cost $17.3 million. In addition to these systems, the department is also assessing its communications systems and embedded computer chip technologies to determine whether they will be affected by the coming century change. Embedded systems are special-purpose computers built into other devices. Many facilities used by the federal government that were built or renovated within the last 20 years contain embedded computer systems to control, monitor, or assist in operations. If the embedded chips used in such devices contain two-digit date fields for year representation, the devices could malfunction. For example, control systems that regulate water flow and generators in our nation's dams, which produce over 42 billion kilowatt-hours of energy each year, could fail. Interior's Year 2000 program operates in a decentralized fashion as its bureaus and offices are responsible for identifying and assessing their mission-critical systems, determining correction priorities, and making their own mission-critical systems Year 2000 compliant. Departmental oversight is provided by Interior's Year 2000 Project Office. This office reports directly to the Chief Information Officer. The Year 2000 Project Team consists of a Year 2000 coordinator from the department and a representative located in each bureau or office. The bureaus and offices maintain information used to manage their Year 2000 activities. Bureau and office representatives submit monthly milestone and status information to the coordinator, which he analyzes and compiles manually. The coordinator tracks major milestones, such as systems assessments completed, Year 2000 renovations completed, and systems implemented. The information is forwarded to the Chief Information Officer and, each quarter, to OMB.
According to Interior’s Year 2000 coordinator, he tracks the 95 mission-critical systems and maintains status information in a word processing table that lacks the capability for automated tracking or analysis. He stated that he notifies the Chief Information Officer of any reported milestone delays, which are then discussed at senior-level management meetings. Table 2 shows the status of the 67 mission-critical systems that are being renovated, as reported to OMB on February 15, 1998. (This table does not include the other 28 mission-critical systems, which are considered already compliant or are being replaced.) Accurate reporting is critical to ensuring that executive management receives a reliable picture of the Year 2000 progress of component organizations. This is particularly important at Interior, where much of the Year 2000 program responsibility is delegated to the individual bureaus and offices. Although the department relies on its bureaus to provide monthly reports on the status of their Year 2000 renovation actions, to date it has not verified the accuracy and reliability of the reported information. As the only staff member in Interior’s Year 2000 Project Office, the department’s coordinator does not have the ability to verify the accuracy of reported information on the bureaus’ and offices’ mission-critical systems. Therefore, the Chief Information Officer requested that Interior’s Inspector General assist in monitoring the progress of the individual bureaus in achieving Year 2000 compliance. It is important to verify because if the data are inaccurate, it will be more difficult to identify and correct problems promptly. Interior regularly exchanges data with other organizations. In many instances, these data are critical to the department’s operations. In response to a recent survey we conducted, Interior reported that 40 of its 95 mission-critical systems exchange electronic data with other federal, state, and local agencies; domestic and foreign private sectors; and foreign governments. Although the bureaus have identified over 2,900 incoming and outgoing external data exchanges, the department does not have a central inventory. While it has asked each bureau and office head to certify that date-sensitive data exchanges have been identified and data exchange partners contacted to begin resolving date-format issues, the lack of a centralized inventory and an automated way to maintain it means that Interior could be missing key information showing whether exchange agreements are proceeding as scheduled. Failure to reach such agreements raises the risk that Interior’s systems will receive noncompliant data that can corrupt its databases. The risk of failure is not limited to an organization’s internal information systems, but includes the potential Year 2000 failures of others, such as business partners. One weak link in the chain of critical dependencies and even the most successful Year 2000 program will fail to protect against major disruption of business operations. Because of these risks, agencies must start business continuity and contingency planning now in order to reduce the risk of Year 2000-induced business failures. Interior has recognized, to some degree, the critical need for contingency planning, and has asked its bureaus and offices to develop such plans for all mission-critical systems that are behind schedule. However, it has not instructed its component organizations to develop plans to ensure the continuity of core business operations. 
As noted, agencies developing such plans only for systems currently behind schedule are not addressing the need to ensure business continuity in the event of unforeseen failures. Further, such plans should not be limited to the risks posed by Year 2000-induced failures of internal information systems. In conclusion, the change of century will initially present many difficult challenges in information technology and continuity of business operations, and has the potential to cause serious disruption to the nation and to the Department of the Interior. These risks can be mitigated and disruptions minimized with proper attention and management. While Interior has been working to mitigate its Year 2000 risks, further action must be taken to avoid losing the ability to continue mission-critical business operations. Continued congressional oversight through hearings such as this can help ensure that such attention continues and that appropriate actions are taken to address this crisis. Mr. Chairman, this concludes my statement. I would be happy to respond to any questions that you or other members of the Committee may have at this time. Year 2000 Computing Crisis: Business Continuity and Contingency Planning (GAO/AIMD-10.1.19, Exposure Draft, March 1998). Year 2000 Computing Crisis: Strong Leadership Needed to Avoid Disruption of Essential Services (GAO/T-AIMD-98-117, March 24, 1998). Year 2000 Computing Crisis: Office of Thrift Supervision’s Efforts to Ensure Thrift Systems Are Year 2000 Compliant (GAO/T-AIMD-98-102, March 18, 1998). Year 2000 Computing Crisis: Strong Leadership and Effective Public/Private Cooperation Needed to Avoid Major Disruptions (GAO/T-AIMD-98-101, March 18, 1998). Post-Hearing Questions on the Federal Deposit Insurance Corporation’s Year 2000 (Y2K) Preparedness (AIMD-98-108R, March 18, 1998). SEC Year 2000 Report: Future Reports Could Provide More Detailed Information (GAO/GGD/AIMD-98-51, March 6, 1998). Year 2000 Readiness: NRC’s Proposed Approach Regarding Nuclear Powerplants (GAO/AIMD-98-90R, March 6, 1998). Year 2000 Computing Crisis: Federal Deposit Insurance Corporation’s Efforts to Ensure Bank Systems Are Year 2000 Compliant (GAO/T-AIMD-98-73, February 10, 1998). Year 2000 Computing Crisis: FAA Must Act Quickly to Prevent Systems Failures (GAO/T-AIMD-98-63, February 4, 1998). FAA Computer Systems: Limited Progress on Year 2000 Issue Increases Risk Dramatically (GAO/AIMD-98-45, January 30, 1998). Defense Computers: Air Force Needs to Strengthen Year 2000 Oversight (GAO/AIMD-98-35, January 16, 1998). Year 2000 Computing Crisis: Actions Needed to Address Credit Union Systems’ Year 2000 Problem (GAO/AIMD-98-48, January 7, 1998). Veterans Health Administration Facility Systems: Some Progress Made In Ensuring Year 2000 Compliance, But Challenges Remain (GAO/AIMD-98-31R, November 7, 1997). Year 2000 Computing Crisis: National Credit Union Administration’s Efforts to Ensure Credit Union Systems Are Year 2000 Compliant (GAO/T-AIMD-98-20, October 22, 1997). Social Security Administration: Significant Progress Made in Year 2000 Effort, But Key Risks Remain (GAO/AIMD-98-6, October 22, 1997). Defense Computers: Technical Support Is Key to Naval Supply Year 2000 Success (GAO/AIMD-98-7R, October 21, 1997). Defense Computers: LSSC Needs to Confront Significant Year 2000 Issues (GAO/AIMD-97-149, September 26, 1997). Veterans Affairs Computer Systems: Action Underway Yet Much Work Remains To Resolve Year 2000 Crisis (GAO/T-AIMD-97-174, September 25, 1997). 
Year 2000 Computing Crisis: Success Depends Upon Strong Management and Structured Approach (GAO/T-AIMD-97-173, September 25, 1997). Year 2000 Computing Crisis: An Assessment Guide (GAO/AIMD-10.1.14, September 1997). Defense Computers: SSG Needs to Sustain Year 2000 Progress (GAO/AIMD-97-120R, August 19, 1997). Defense Computers: Improvements to DOD Systems Inventory Needed for Year 2000 Effort (GAO/AIMD-97-112, August 13, 1997). Defense Computers: Issues Confronting DLA in Addressing Year 2000 Problems (GAO/AIMD-97-106, August 12, 1997). Defense Computers: DFAS Faces Challenges in Solving the Year 2000 Problem (GAO/AIMD-97-117, August 11, 1997). Year 2000 Computing Crisis: Time is Running Out for Federal Agencies to Prepare for the New Millennium (GAO/T-AIMD-97-129, July 10, 1997). Veterans Benefits Computer Systems: Uninterrupted Delivery of Benefits Depends on Timely Correction of Year-2000 Problems (GAO/T-AIMD-97-114, June 26, 1997). Veterans Benefits Computer Systems: Risks of VBA's Year-2000 Efforts (GAO/AIMD-97-79, May 30, 1997). Medicare Transaction System: Success Depends Upon Correcting Critical Managerial and Technical Weaknesses (GAO/AIMD-97-78, May 16, 1997). Medicare Transaction System: Serious Managerial and Technical Weaknesses Threaten Modernization (GAO/T-AIMD-97-91, May 16, 1997). Year 2000 Computing Crisis: Risk of Serious Disruption to Essential Government Functions Calls for Agency Action Now (GAO/T-AIMD-97-52, February 27, 1997). Year 2000 Computing Crisis: Strong Leadership Today Needed To Prevent Future Disruption of Government Services (GAO/T-AIMD-97-51, February 24, 1997). High-Risk Series: Information Management and Technology (GAO/HR-97-9, February 1997). | GAO discussed where the federal government stands in its efforts to lessen Year 2000 risks and GAO's preliminary observations on Year 2000 activities at the Department of the Interior.
GAO noted that: (1) the federal government is extremely vulnerable to the Year 2000 issue due to its widespread dependence on computer systems; (2) its reviews of federal agency Year 2000 programs have found uneven progress, and its reports contain numerous recommendations, which the agencies have almost universally agreed to implement; (3) one of the largest, and largely unknown, risks relates to the global nature of the Year 2000 problem; (4) with electronic dependence and massive exchange of data comes increasing risk that uncorrected Year 2000 problems in other countries will adversely affect the United States; (5) setting priorities for Year 2000 conversion is essential, with the focus being on systems most critical to health and safety, financial well being, national security, or the economy; (6) agencies must start business continuity and contingency planning now to safeguard their ability to deliver a minimum acceptable level of services in the event of Year 2000-induced failures; (7) agencies must have strategies for independently verifying the status of their Year 2000 efforts; (8) no nationwide assessment, including the private and public sectors, has been undertaken of Year 2000 risks and readiness; (9) Interior estimates that correcting its 95 mission-critical systems will cost $17.3 million; (10) Interior is also assessing its communications systems and embedded chip technologies to determine whether they will be affected by the century change; and (11) Interior's Year 2000 coordinator does not have the ability to verify the accuracy of reported information on the bureaus' and offices' mission-critical systems. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Army Corps of Engineers (the "Corps") and the Department of the Interior's Bureau of Reclamation (the "Bureau") operate about 130 hydropower plants at dams throughout the nation. These plants generate electricity from the flow of water that is also used for other purposes, including fish and wildlife enhancement, flood control, irrigation, navigation, recreation, and water supply. Since about the 1930s, electricity that is generated by these hydropower plants has played an important role in electricity markets. These plants were a key element in electrifying rural and sparsely populated areas of the nation. These plants accounted for over 35,000 megawatts (MW) of generating capacity (or about 5 percent of the nation's total electric supply) in 1998. The Department of Energy's power marketing administrations (PMA) generally market the electricity generated at these plants to wholesale customers (the "power customers"), such as rural electric cooperatives and municipal utilities, that in turn sell the electricity to retail customers. (Fig. 1.1 shows the service areas of the PMAs.) Revenues earned from the sale of this electricity totaled over $3 billion in fiscal year 1997. These revenues pay for the operation and maintenance of the government's electricity-related assets and repay a portion of the outstanding federal appropriated and other debt of about $22 billion for the Bureau's and the Corps' power plants and related PMA transmission lines, as well as certain related federal investments for irrigation, water supply, and other facilities that are to be repaid over time from electricity revenues. The revenues also pay interest on the outstanding appropriated debt, where applicable. In traditional markets, electric utilities enjoyed relative certainty about the amount of demand they would have to satisfy in the future. A compact existed between utilities and state public utility commissions. Utilities were obligated to serve all existing and future customers in their pre-established service areas. In return, utilities were granted monopolies within their service areas and approved rate schedules that guaranteed stated earnings on their operating costs and investments. They forecast the load they would serve by using econometric and end-use analysis models over future periods as long as 20 years. They collected sufficient funds in their electric rates to pay for needed generating capacity and to operate, maintain, and repair existing power plants and other electricity assets. The funds collected through rates also included profits. However, the nation's electricity markets are undergoing significant changes. The Energy Policy Act of 1992 significantly increased competition in wholesale electricity markets. In addition, competition at the retail level is now arriving. According to the Department of Energy's Energy Information Administration, as of March 1999, 18 states had acted—by legislation that had been enacted (14 states) or by regulatory order (4 states)—to restructure electricity markets. Regulators in these states expected that industrial, commercial, and, ultimately, residential consumers would be able to choose their electricity supplier from among several competitors, rather than being tied to one utility. As competition has increased, the rates paid by consumers for electricity have dropped, and they should continue to do so.
For example, according to the Energy Information Administration, as a result of such factors as emerging competition and new, more efficient generating technologies, retail electricity rates decreased by about 25 percent from 1982 through 1996, after factoring in the impact of inflation. The Energy Information Administration expects electricity rates to continue to decrease in real terms by 6 percent to 19 percent by 2015. In recent years, uncertainty about the pace and extent of competition in electricity markets has caused utilities to be more flexible. Utilities have relied more on purchasing electricity from other sources or acquiring new power plants, such as smaller natural-gas-fired plants, that are less expensive and more flexible for meeting shifting demand. They have also cut costs by reorganizing and reducing staff, and they have consolidated or merged with other utilities where they believed it was appropriate. For example, after years of virtually no mergers, from October 1992 to January 1998, investor-owned utilities had proposed over 40 mergers and completed 17 of them, according to the Edison Electric Institute. In addition, according to utility officials, some utilities are retiring or divesting some high-cost power plants, while others are buying those same plants to serve a niche in their resource portfolios. According to utility officials, in more stable electricity markets, utilities and federal agencies maintained and repaired their hydroelectric and other power plants according to a schedule that was predetermined by the manufacturer's specifications and the operating history of the plant. Maintenance and repairs were frequently made at this predetermined time whether or not they were needed. Because maintenance or repairs could have been performed later or less frequently, perhaps with lower costs, some Bureau and utility officials we contacted characterized these practices as over-maintenance of the hydropower plants. These practices, according to an industry consultant, were seldom questioned partly because of the low costs and resiliency of hydropower plants—especially of those placed into service during the 1950s. However, as markets become more competitive, federal agency, utility, and electric industry officials have increasingly viewed hydropower plants as particularly useful to utilities' overall operations. One of hydropower's important traits is its flexibility in meeting different levels of demand. This characteristic, according to utility officials, means that hydropower plants will likely continue to play a significant role in meeting demand during peak periods and providing ancillary services, without which electricity systems cannot operate. Currently, utilities provide these services routinely. However, according to Bureau, PMA, and utility officials, depending upon actions taken by federal and state regulators in the near future, a separate market may develop for ancillary services. These services may be priced separately and may allow utilities with hydropower to capture a market niche and earn additional revenues. In response to new markets and perceptions about the role of hydropower in those markets, federal agencies and some utilities have reconsidered how they operate, maintain, and repair their hydropower plants.
For example, some utilities have implemented less-expensive, more-flexible maintenance practices, which consider such factors as the generating size of a utility's hydropower plants, those plants' roles in the utility's generation portfolio, and marketing and economic considerations. One such approach, called "Reliability Centered Maintenance," is defined as a maintenance philosophy that attempts to make use of the most logical, cost-effective mix of breakdown maintenance, preventive maintenance, and predictive testing and proactive maintenance to attain the full life of the equipment, reduce maintenance costs, and encourage reliable operations. For example, according to some utilities we contacted, in determining when to maintain or repair equipment, they are relying increasingly on the use of monitoring equipment to detect changes in the operating conditions of the equipment, instead of performing those actions in a prescheduled manner, as in the past. On the basis of this monitoring, the utility may decide to repair or replace the component. Alternatively, the utility may decide to stretch out the operation of the component to the point of near-failure. Some components may actually be run until they fail. However, according to Corps and utility officials, in the cases of some smaller hydropower units, installing monitoring equipment at a cost of $200 to $500 per unit may not make economic sense. Other measures may also be used to monitor the operating condition of equipment. For example, the Corps tests the lubricating oil to indicate the condition of its generating equipment. Also, in some cases, when deciding how and when to maintain and repair generating units, management now considers the plant or the unit as an individual cost center that must make a positive contribution to the utility's bottom line. In such an environment, plant managers will become more aware of production costs and will exert increased pressures to cut costs at the plant and corporate levels. Plant managers may become aware that a utility may actually shut down and sell a generating unit if operating or repairing it does not return a required, positive financial return. As market competition intensifies, utilities will face increasing pressures to operate as efficiently and cost-effectively as possible. Utilities' management will need to know how well their plants are producing electricity in order to make informed decisions about how to allocate scarce dollars for maintaining and repairing power plants, where to cut costs, or, in more extreme cases, which generating units to sell or shut down. An important concept for defining power plants' performance is the "reliability" with which plants generate electricity. Within the electric utility industry, power plants are viewed as "reliable" if they are capable of functioning without failure over a specific period of time or amount of usage. The availability factor and the related outage factors are widely accepted measures of the reliability of power plants. A generating unit is "available" during the time it is mechanically able to generate electricity, that is, when it is neither malfunctioning unexpectedly nor undergoing maintenance or repairs. For instance, if a unit were available to generate electricity 8,000 hours out of the 8,760 hours in a year, then its availability factor would be 8,000 hours divided by 8,760 hours, or about 91.3 percent.
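Because each factor is simply a block of hours expressed as a share of the 8,760 hours in a year, the arithmetic can be checked in a few lines. The following Python sketch is illustrative only: the 8,000 available hours and 100 scheduled-outage hours mirror the hypothetical figures used in the surrounding text, while the 660 forced-outage hours are an assumed figure chosen here so that the three categories together account for a complete year (the 600-hour forced-outage example in the text is a separate illustration). The scheduled and forced outage factors themselves are defined in the discussion that follows.

# Minimal sketch of the reliability arithmetic described in this report.
HOURS_PER_YEAR = 8760  # 365 days x 24 hours

def factor(hours: float) -> float:
    """Express a block of hours as a percentage of the year."""
    return 100.0 * hours / HOURS_PER_YEAR

available_hours = 8000   # unit mechanically able to generate electricity
scheduled_hours = 100    # planned maintenance and repairs
forced_hours = 660       # assumed here: unexpected breakdowns (remainder of the year)

availability = factor(available_hours)       # about 91.3 percent
scheduled_outage = factor(scheduled_hours)   # about 1.1 percent
forced_outage = factor(forced_hours)         # about 7.5 percent

# The three factors cover the unit's entire operating status for the period,
# so together they must account for 100 percent of the year.
assert abs(availability + scheduled_outage + forced_outage - 100.0) < 1e-9

Because the three categories partition the year, lowering any one factor necessarily shifts those hours into the others; a unit can therefore become more "available" either by breaking down less often or by completing its scheduled maintenance faster.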
When a unit is unable to generate electricity because it is broken, being repaired, or being maintained, it is in outage status. Outages are further classified as "scheduled" outages if the unit is unable to generate electricity because it is undergoing previously scheduled repairs or maintenance. If a unit is unable to generate electricity because of an unexpected breakdown and/or if unanticipated repairs need to be performed, then it is in "forced outage" status. If a plant were in scheduled outage status for 100 hours over the course of one year, then its scheduled outage factor would be 100 hours divided by the 8,760 hours in a year, or 1.1 percent. If a plant were in a forced outage status for 600 hours, then its forced outage factor would be 600 hours divided by the 8,760 hours in the year, or 6.8 percent of the time. For any generating unit, the availability factor, the scheduled outage factor, and the forced outage factor, added together, should equal 100 percent because, taken together, they account for a plant's entire operating status over a period of time. Assessing the performance of a hydropower plant or unit by examining its availability factor calls for understanding additional variables that would affect its performance. Many officials we contacted said that the availability factor needs to be understood in light of such considerations as the role played by the plant in meeting demand (for instance, whether it meets peak demand), the availability of water throughout the year, and the purposes satisfied by the dam and reservoir. For example, according to a utility consultant, because water is abundant at the New York Power Authority's Niagara Power Project, the generating units are used primarily to satisfy nonpeak loads. Therefore, the utility attempts to operate and maintain those units so that they are on line as much as possible. To do otherwise entails a loss of generating revenues that could be earned almost 24 hours per day. Nevertheless, officials at every utility we contacted said that they achieved an availability of at least 90 percent, and the Bureau and the Corps have formal goals of attaining that availability level. As requested by the Chairman, Subcommittee on Water and Power, House Committee on Resources, we examined the (1) reliability of the Bureau's and Corps' hydropower plants in generating electricity compared with the reliability of nonfederal hydropower plants; (2) reasons why the Bureau's and the Corps' plants may be less reliable than nonfederal plants and the potential implications of reduced reliability; and (3) actions taken to obtain funding to better maintain and repair the Bureau's and the Corps' plants. To compare the generating reliability of the Bureau's and the Corps' hydropower plants with nonfederal ones, we obtained, analyzed, and contrasted power plants' performance data, including availability and outage factors, from the Bureau, the Corps, and the North American Electric Reliability Council. We discussed the limitations of these performance indicators with officials from the Bureau, the Corps, the PMAs, the Tennessee Valley Authority, investor-owned utilities, publicly owned utilities, and other experts in the electric utility industry.
To explore why federal hydropower plants sometimes performed at lower levels, we obtained and analyzed various reports on the subject and discussed the topic with representatives of the Bureau, the Corps, the PMAs, various PMA electricity customers or their associations, investor-owned utilities, and nonfederal, publicly owned utilities. Moreover, in addressing the implications of any reduced performance by federal plants, we interviewed industry experts, representatives of investor-owned and publicly owned utilities, officials of the PMAs, and the PMAs' electricity customers. We also examined studies about the changes in electricity markets. In examining steps to secure funding to better maintain and repair the Bureau's and the Corps' plants, we studied the efforts of the Corps, the Bureau, and the PMAs to pay for the maintenance and repair of federal hydropower assets more quickly and with greater certainty. In this regard, we contacted the Bureau, the Corps, the PMAs, and the PMAs' power customers at several different locations, including Denver, Colorado; Boise, Idaho; Portland, Oregon; and Sacramento, California. At these locations, we also examined any funding agreements concluded by these parties and asked detailed questions about the benefits and other implications of these agreements. Our analysis was based on the assumption that the Bureau's and the Corps' hydropower plants, the related facilities, and the PMAs would continue to exist under some form of federal ownership. In examining other steps to secure enhanced funding, we relied to the greatest extent possible upon previous work that we had performed on federal electricity, especially work performed during two prior reviews—Federal Power: Options for Selected Power Marketing Administrations' Role in a Changing Electricity Industry (GAO/RCED-98-43, Mar. 6, 1998) and Federal Power: Outages Reduce the Reliability of Hydroelectric Power Plants in the Southeast (GAO/T-RCED-96-180, July 25, 1996). Our work was performed at many different locations that included various power plants and offices of the Bureau, the Corps, Bonneville, Southeastern, Southwestern, and Western; investor-owned utilities; and publicly owned utilities. We also contacted national and regional industry trade associations. Our work was performed from July 1998 through February 1999 in accordance with generally accepted government auditing standards. Appendix I contains a more complete description of our objectives, scope, and methodology. From 1993 through 1997, the reliability of the Bureau's hydropower plants improved, while the Corps' remained about the same. However, the Bureau's and the Corps' hydropower plants are generally less reliable in generating electricity than nonfederal plants. The Bureau's and the Corps' hydropower generating units have spent more time in forced and scheduled outage status. Importantly, the reliability of the Bureau's and the Corps' plants in the Pacific Northwest is generally below that of Bureau and Corps plants elsewhere and also below that of nonfederal plants in the region and elsewhere.
The Bureau’s and the Corps’ plants in the region account for over half of these agencies’ total generating capacity and almost all of the power marketed by the Bonneville Power Administration (Bonneville)—the largest of the PMAs in terms of power sales. Nationwide, both the Bureau’s and the Corps’ generating units are less available to generate electricity than those of nonfederal utilities and providers; however, the Bureau’s availability factor has been improving, while the Corps’ remained about the same. (See fig. 2.1.) Generating units that have malfunctioned unexpectedly or are undergoing maintenance and repairs are not considered to be available. Generating units that are more available to generate electricity are considered to be more reliable. The availability factor is considered to be a key indicator of reliability, according to the Bureau. From 1993 through 1997, nonfederal hydropower generating units were available to generate electricity an average of 91.3 percent of the time. During that same period, the Bureau’s hydropower units were available an average of 83.3 percent of the time (or 8 percent less than the average for nonfederal units) and the Corps’ hydropower units were available an average of 88.8 percent of the time (or 2.5 percent less than nonfederal units). The availability factor for nonfederal units from 1993 through 1997 was relatively unchanged. The Bureau’s availability factor improved from 80.9 percent of the time in 1993 to 86.6 percent in 1997. The Bureau believes that one reason for its lesser availability factors is that more of its plants are located on pipelines, canals, and water diversion facilities in comparison with most nonfederal plants. The Corps’ availability factor was relatively unchanged—declining slightly from 89.6 percent in 1993 to 89.2 percent in 1997. Corps officials later provided us with data showing an availability factor of 89.5 percent in 1998. Also, the Bureau provided us with data showing an availability factor of 88.5 percent in 1998. If generating units are not available to generate electricity, they are said to be in “outage” status. Because the Bureau’s and the Corps’ generating units were less available to generate electricity than the rest of the industry, they also had higher outages factors. The longer or more frequent its outages, the less available a unit is to generate electricity. (See fig. 2.2.) From 1993 through 1997, the hydropower units of the Bureau were in outage status an average of 16.7 percent of the time, and the Corps’ units were in outage status an average of 11.2 percent of the time. In contrast, nonfederal units were in outage status an average of 8.7 percent of the time. From 1993 through 1997, the Corps’ total outage factor was relatively unchanged, whereas the Bureau’s decreased from 19.1 percent in 1993 to 13.4 percent in 1997. Nonfederal units’ total outages factors were relatively unchanged. Examining the types of outages that occur indicates why generating units were not in service. Along with the availability factor, the forced outage factor is a key indicator of decreasing reliability because it depicts that unexpected outages occurred, thus indicating inconsistent operations. According to the Bureau’s 1996 benchmarking study, the lower the forced outage factor, the more reliable the electricity is considered. From 1993 through 1997, the average forced outage factor for the Bureau was 2.3 percent and the Corps’ was 5.1 percent. 
The average forced outage factor for nonfederal hydropower units was 2.3 percent—the same as the Bureau's but less than the Corps'. However, it should be noted that the Corps' forced outage factor declined—from almost 6 percent in 1995 to 4.5 percent in 1997. According to the latest data provided by the Corps, the agency's forced outage factor declined even further, to under 3.2 percent in 1998. According to a Corps official, this improvement is the result of the agency's $500 million effort, implemented or identified for implementation from fiscal year 1993 through 2009, to rehabilitate its hydropower plants. Scheduled outages are, by definition, anticipated. Nevertheless, scheduled outage factors also reflect the amount of time that a generating unit was off-line and unable to provide a utility's customers with electricity. According to the Bureau's 1996 benchmarking study, the longer a scheduled outage, the less efficient the maintenance program. In our view, a more efficient maintenance program would have placed the generating unit into service faster, thereby enabling the utility to provide its customers with more service and hence possibly earn more revenues. In the case of scheduled outages, from 1993 through 1997, the Corps' average scheduled outage factor was 6.3 percent and the Bureau's was 14.4 percent. The average scheduled outage factor for nonfederal utilities was 6.4 percent. However, from 1993 through 1997 the Bureau's scheduled outage rate showed an improvement—decreasing from 17.1 percent in 1993 to 11.3 percent in 1997—while the Corps' and the industry's trends in scheduled outage factors were relatively unchanged. Taking longer scheduled outages at opportune times is a management decision that may be considered good business practice, even though such decisions decrease a generating unit's availability to generate electricity. For example, the Bureau and some electric utilities extend scheduled outages to perform additional repairs during periods when the water is not available for generating electricity or the unit is not needed to meet demand. Also, labor costs are minimized by avoiding the payment of overtime wages. However, according to some Bureau, PMA, and utility officials, these practices may change as markets evolve. Hydropower units may need to be available to generate electricity more of the time in order for the utility to take advantage of new market opportunities. For example, supplying an ancillary service, such as providing reserve capacity, may allow a utility to earn added revenues while not actually generating electricity; however, the unit must be in operating condition ("available") to generate electricity. The reliability of the Bureau's and the Corps' hydropower plants in the Pacific Northwest is important to the overall reliability of the Bureau and the Corps. The generating units of those plants account for over half of the Bureau's and the Corps' total hydropower capacity. In addition, those plants provide almost all of the generating capacity from which Bonneville, the largest PMA, markets electricity. However, the reliability of the Bureau's and the Corps' plants in the Pacific Northwest was below that of nonfederal plants in the region. In addition, the reliability of the Bureau's and Corps' plants in the region was also generally below that of the Bureau's and Corps' plants elsewhere and below that of nonfederal plants in other regions.
As shown in chapter 4, Bonneville, the Bureau, and the Corps are undertaking extensive upgrades and rehabilitations of the federal plants. These actions occurred, in part, as a result of the increased funding flexibility provided by the agreements under which Bonneville would directly pay for the operation, maintenance, and repair of these assets. The availability factor of the Bureau's units improved over time. The availability of the Corps' units was slightly below that of nonfederal plants, but it declined slightly from 1993 to 1997. However, the Corps' units had a forced outage factor over twice as high as that of nonfederal units in the region, indicating inconsistent plant performance, while the Bureau's units had a scheduled outage factor that was almost three times that of nonfederal units. From 1993 through 1997, the Bureau's units in the Pacific Northwest were available to generate power an average of about 78.7 percent of the time, and the Corps' units were available an average of 85.4 percent of the time. In contrast, nonfederal hydropower units in the region were available an average of 89.7 percent of the time. The Bureau's availability factor improved from a level of 74 percent in 1993 to 85 percent in 1997, and the Corps' availability factor decreased from 87.9 percent in 1993 to 85.7 percent in 1997. In contrast, the availability factors of nonfederal units decreased slightly from 91.8 percent in 1993 to 90.3 percent in 1997. In the Pacific Northwest, from 1993 through 1997, the Bureau's units were in outage status an average of 21.3 percent of the time, and the Corps' units were in outage status an average of 15.3 percent of the time, compared with an average of 10.3 percent of the time for nonfederal units in the region. The Bureau's outage factor decreased from about 26 percent in 1993 to 15 percent in 1997, while the Corps' increased slightly from 12.1 percent in 1993 to 14.3 percent in 1997. The outage factor for regional nonfederal units increased from 8.2 percent in 1993 to 9.7 percent in 1997. The Corps' units performed more inconsistently than nonfederal units because, from 1993 through 1997, the Corps' units had higher forced outage factors (an average of 6.4 percent) than the Bureau's units (an average of 1.9 percent) and nonfederal units (an average of 3.1 percent). The Corps' forced outage factor in 1994 was about 5 percent and increased to over 7 percent in 1995 and 1996, before declining to about 5.6 percent in 1997. In contrast, the Bureau's forced outage factor was lower than the nonfederal producers' but increased from 1.3 percent in 1993 to 1.9 percent in 1997. Nonfederal producers had a forced outage factor that increased from 1.5 percent in 1993 to 3.2 percent in 1997. According to the Corps' Hydropower Coordinator, the higher forced outage factor for the Corps' units in the region was largely attributable to the operation of fish screens and other equipment designed to facilitate salmon migrations around the Corps' units. This equipment breaks or needs to be maintained, causing decreases in availability. During fiscal year 1998, at the Corps' McNary and Ice Harbor plants, forced outages related to fish passage equipment accounted for 30 and 15 percent, respectively, of the total hours in which the plants experienced forced outages. However, from 1993 through 1997, the Bureau's units had higher scheduled outage factors (an average of 19.4 percent) than both the Corps' units (an average of 8.9 percent) and nonfederal units (an average of 7.2 percent).
The Bureau’s scheduled outages factors were far higher than those of nonfederal parties but decreased from 24.7 percent in 1993 to 13.2 percent in 1997. The Corps’ scheduled outage factor decreased from 9.6 percent in 1994 to 8.8 percent in 1997. Nonfederal parties had a scheduled outage factor that increased from 6.7 percent in 1993 to 8.4 percent in 1994 before falling to 6.5 percent in 1997. The Bureau’s and the Corps’ plants were less reliable than nonfederal plants partly because, under the federal planning and budget cycle, they could not always obtain funding for maintenance and repairs when needed. We found that funding for repairs can take years to obtain and is uncertain. As a result, the agencies delay repairs and maintenance until funds become available. In addition, the Anti-Deficiency Act and other statutes require that federal agencies not enter into any contracts before appropriations become available, unless authorized by law. Such delays can lead to maintenance backlogs and to inconsistent, unreliable performance. The PMAs’ electricity generally is priced less than other electricity. However, because markets are becoming more competitive, the PMAs’ customers will have more suppliers from which they can buy electricity. In some power marketing systems—for example, Bonneville’s service area—competition during the mid-1990s allowed some customers to leave or buy some of their electricity from other sources, rather than continuing to buy from Bonneville. Reliability is a key aspect of providing marketable power. For example, according to Bonneville, in large hydropower systems, the PMAs’ ability to earn electricity revenues depends, in part, on the availability of hydropower generating units to generate power. In more competitive markets, the reliability of the federal electricity will have to be maintained or improved to maintain the competitiveness of federal electricity and thus help ensure that the federal government’s $22 billion appropriated and other debt will be repaid. In addition, the Congress, the Office of Management and Budget (OMB), and we have been working to help ensure that the purchase and maintenance of all assets and infrastructure have the highest and most efficient returns to the taxpayer and the government. The federal planning and budgeting process takes at least 2 full years and does not guarantee that funds will be available for a specific project. This affects the ways in which the Bureau and the Corps plan and pay for the maintenance and repair of their hydropower plants. The federal budgeting process is not very responsive in accommodating the maintenance and repair of those facilities—it can take as long as 2 to 3 years before a repair is funded, if it is funded at all. Specifically, the project and field locations of the Bureau and the Corps identify, estimate the costs of, and develop their budget requests, not only for hydropower, but also for their other facilities, including dams, navigation systems, irrigation systems, and recreational facilities. The funding needs of these various assets compete for the funding and repair of hydropower plants may be assigned lower priorities than other items. For example, officials of the Bureau’s office in Billings, Montana, described the budget process they expected to undergo to develop a budget for fiscal year 2000. The process began in August 1997, when the regional office received initial budget proposals from its area offices. 
During the ensuing months, the area offices; the region; the Bureau's Denver office; the Bureau's Washington, D.C., office; the Office of the Secretary of the Interior; and OMB reviewed, discussed, and revised the proposed area offices' and regional office's budgets, resulting in a consolidated budget for the Bureau and the Department of the Interior. Certainty about expected funding levels will not be obtained until sometime between February 1999, when OMB conveys the President's budget to the Congress, and the enactment of the Energy and Water Appropriations Act. The time that will elapse between August 1997, when the area offices began their budget processes, and October 1999 (the start of fiscal year 2000) totals 26 months. In addition, funding for the maintenance and repair of the Bureau's and the Corps' hydropower plants is uncertain. Agency officials and other policymakers, faced with scarce resources, especially in times of limited budgets, make decisions about where and where not to spend funds. As shown in examples below, funding is not always delivered to maintain and repair hydropower plants, even if the need is demonstrated. According to documentation that the Bureau provided us, in 1983, detailed inspections of the generating units at the Shasta, California, hydropower plant found that generating components were deteriorating. The Bureau advised one of its federal power customers that it would seek funds in fiscal year 1984 for the repairs. However, OMB did not approve the requests because the units were not "approaching a failure mode." Later, in 1990, the Bureau issued invitations to bid for the repairs; the bids received ranged from $9 million to $12 million. However, the project was dropped because the Bureau had budgeted only $6 million. In 1992, after an inspection to determine how far the deterioration had advanced, one generating unit's operations were reduced. The inspectors also recommended repairing the other two units because the gains in generating capacity that would be achieved as a result of the repairs would enable Western to sell more electricity. To fund the repairs, the Bureau requested funds in its fiscal year 1993 budget request; however, according to the Bureau's records, OMB eliminated the request. The Bureau's Budget Review Committee recommended that the project not be included in the agency's fiscal year 1994 budget request and that the Bureau's regional office "make a concerted effort to find non-federal financing." The Corps' Northwestern Division in Portland, Oregon, has also experienced difficulties in funding needed repairs. For example, at the Corps' hydropower plant at The Dalles, Oregon, direct funding by Bonneville allowed the Corps to accomplish maintenance that, according to Corps officials, in all likelihood would not have been funded because of the funding constraints in the federal budget process. Beginning in late 1993, the Corps prepared and submitted to headquarters an evaluation report proposing the replacement of major plant components on 14 units that had exhibited many problems over the years but had been kept in service through intensive maintenance. The Congress approved funding for the major rehabilitation as part of the Corps' fiscal year 1997 appropriations. However, after 2 of the units were out of service for an extended time, Bonneville and the Corps entered into an agreement in January 1995 for Bonneville to pay for the rewinding of the generator at unit 9.
In February 1996, the rewinding of unit 7 was added to the agreement. In addition, Bonneville, in March 1996, agreed to fund the replacement of the excitation systems for The Dalles’ units 15 through 22, which were not included in the major rehabilitation funded by appropriations. Delayed or uncertain funding leads to delays or postponements of needed maintenance and repairs, which can result in maintenance backlogs that worsen over time. After funding requests are identified and screened, funding may not be made available for up to 3 years. The Corps has estimated a total maintenance backlog of about $190 million for its power plants in Bonneville’s service territory. However, according to Bonneville and Corps officials, the extent to which critical repair items are part of the backlog has yet to be determined. In addition, according to Bonneville and Corps officials, the role of the approximately $190 million estimate for purposes of planning and budgeting under Bonneville’s and the Corps’ funding agreements is subject to debate. The Corps’ Hydropower Coordinator noted that carrying a maintenance backlog is not a bad management practice in and of itself, as long as it can be managed through planning and budgeting techniques. In contrast with the Corps, Bureau officials maintain that they have a policy of not deferring maintenance and repairs they consider to be critical, although noncritical items may be deferred. They added that the Bureau is free to reprogram funds when needed to fund repairs and maintenance. However, we noted that unfunded maintenance requirements for the Bureau exist. In the Pacific Northwest, the Bureau has been able to address these needs by securing new funding sources. Specifically, Bonneville and the Bureau in the Pacific Northwest have signed an agreement under which Bonneville’s power revenues will directly pay for about $200 million of capital repairs at the Bureau’s power plants. According to Bureau officials, some of these repairs would likely not have been made under the existing federal planning and budgeting processes because of limited and declining federal budgetary resources. Therefore, it is doubtful that these maintenance needs could have been addressed in a timely manner without a new funding mechanism. Failure to fund and perform maintenance and repairs in a timely fashion can lead to frequent and/or extended outages. These outages force the PMAs or their customers to purchase replacement power that is more expensive than the federal power in order to satisfy their contractual requirements. For example, from 1990 through 1992, two or more units of the Corps’ Carters hydropower plant, in Georgia, were out of service at the same time for periods ranging from about 3 months to almost 1 year. A Southeastern official estimated that its wholesale customers had purchased replacement electricity for about $15 million more than they would have paid for power marketed by Southeastern. In another example, Southeastern officials estimated that customers of its Georgia-Alabama-South Carolina system had paid 22 percent more in 1990 than in the previous year, partly as a result of extended, unplanned outages. Other factors that led to the rate increase included a drought and increases in operation and maintenance costs at the Corps’ plants. In addition, as previously noted in our Shasta example, the Bureau restricted the operation of one of the plant’s generators in response to deteriorating operating conditions.
Although the average nonfederal hydropower generating unit is older (48 years) than the Bureau’s (41 years) and the Corps’ (33 years), the nonfederal units’ availability to generate power is greater than the Bureau’s and the Corps’. This is true because, according to utility officials, utilities ensure that sufficient funds exist to repair and maintain their generating units and thus promote a high level of generating availability. According to officials from three investor-owned utilities or holding companies and four publicly owned utilities with an average of about 2,458 MW of hydropower generating capacity, their hydropower units were available at least 90 percent of the time—sometimes in ranges approximating or exceeding 95 percent. Some officials said they would not tolerate significant reductions in their generating availability because their hydropower units play key roles in meeting demand during peak times. Under the traditional regulatory compact between states’ public utility commissions and utilities, the utilities have an obligation to serve all existing and future loads in their service territories. According to utility officials, to comply with these obligations, utilities implement planning and budgeting systems that ensure that they can pay for all necessary maintenance costs as well as critical repairs and replacements in a timely fashion. According to some utility officials, unlike under the federal budgeting system, utilities typically have the financial capability to quickly obtain funding to pay for unexpected repairs to their power plants. According to these officials, utilities are also able to accumulate funds in reserves to meet future contingencies, such as unexpected breakdowns and repairs of generating units. In addition, utilities can begin repair work before the bonds that will pay for it are issued, another tool for making repairs very quickly. For example, according to officials of the Douglas County Public Utility District, the utility district can respond quickly to an unexpected breakdown because (1) it has access to some reserve funds, (2) its commissioners can approve funding via the issuance of bonds up to 18 months after work was begun on a repair, and (3) its budgeting process is fast and accurate. For example, in January 1999 the utility district was completing work on the budget for the next fiscal year, which would begin in only 8 months—namely, August 1999. The budget for the utility district’s hydropower project reflects funding requirements for operations, maintenance, anticipated repairs, and debt service, on the basis of the long-term operational and financial history of the project. According to Bonneville, the agency is achieving a similar effect by being able to quickly provide access to funds and establish reserve funds through agreements whereby its funds directly pay for the operation, maintenance, and repair of the Bureau’s and the Corps’ hydropower plants. In competitive markets, the price being charged for electricity and the reliability of that electricity will continue to be important factors that consumers consider when making purchasing decisions. On average, the electricity sold by the PMAs has been priced lower than electricity from other sources. However, failing to adequately maintain and repair the federal hydropower plants increases costs and decreases the reliability of the electricity.
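The availability and outage factors cited throughout this report are related by simple arithmetic. The sketch below, a minimal illustration in Python, computes all three factors from a unit's hours in each state over a year; the hour values are hypothetical, and the unweighted period-hours formulation is an assumption (industry statistics, such as those in NERC's Generating Availability Data System, involve additional weighting and period conventions).

```python
# Minimal sketch of the reliability factors discussed in this report.
# Hour values are hypothetical; real industry statistics apply additional
# weighting and period conventions.

PERIOD_HOURS = 8760  # hours in one year

def reliability_factors(scheduled_outage_hours, forced_outage_hours):
    """Return (availability, scheduled outage, forced outage) factors in percent."""
    available_hours = PERIOD_HOURS - scheduled_outage_hours - forced_outage_hours
    return (100.0 * available_hours / PERIOD_HOURS,
            100.0 * scheduled_outage_hours / PERIOD_HOURS,
            100.0 * forced_outage_hours / PERIOD_HOURS)

# A unit down 1,100 hours for scheduled work and 90 hours for forced outages:
availability, scheduled, forced = reliability_factors(1100, 90)
print(f"availability {availability:.1f}%, scheduled outage {scheduled:.1f}%, "
      f"forced outage {forced:.1f}%")
# availability 86.4%, scheduled outage 12.6%, forced outage 1.0%
```

In this simplified formulation the three factors sum to 100 percent, which is why a high scheduled outage factor, such as the Bureau's, translates directly into reduced availability.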
The PMAs’ rates will have to be maintained at competitive levels, and the reliability of this power will have to be maintained or enhanced to ensure that federal electricity remains marketable. In addition, the Congress, OMB, and we have been working to help ensure that the purchase and maintenance of all assets and infrastructure have the highest and most efficient returns to the taxpayer and the government. Delayed and unpredictable federal funding for maintenance and repairs has contributed to the decreased availability (and reliability) of the federal hydropower generating units as well as to higher costs that can cause rates to increase if those costs are included in the rates. However, in competitive markets, increased rates decrease the marketability of federal electricity, as nonfederal electricity rates are expected to decline. Customers are expected to have opportunities to buy electricity from any number of reasonably priced sources. If the PMAs’ rates are higher than prevailing market rates, customers will be less inclined to buy power from the PMAs. According to the Department of Energy’s Energy Information Administration, retail rates nationwide by 2015 may be about 6 to 19 percent (after inflation) below the levels that they would have been if competition had not begun. In certain PMA systems—for example, the Central Valley Project, which, as of fiscal year 1997, had an appropriated and other debt of about $267 million—the PMAs’ electricity (in this case, supplied by Western) is already facing competition from nonfederal generation. If the price of the PMAs’ electricity exceeds the market price, then its marketability would be hampered. “. . . financial viability would also be jeopardized if the gap between rates and the cost of alternative energy sources continues to narrow. Such a scenario could cause some customers to meet their energy needs elsewhere, leaving a dwindling pool of ratepayers to pay off the substantial debt accumulated from previous years.” In Bonneville’s service area, during the mid-1990s, competition decreased nonfederal electric rates, resulting in some customers leaving or buying power from less expensive sources, rather than continuing to buy from Bonneville. Similarly, sales by the Tennessee Valley Authority (TVA), a federally owned corporation that supplies electricity in Tennessee and six other Southeastern states, to industrial customers declined from about 25 billion kWh in 1979 to 16 billion kWh in 1993 after double-digit annual rate increases. Various actions have been used to fund the maintenance and repair of federal hydropower facilities. If these actions work as intended, they have the potential to deliver dollars for maintenance and repairs faster and with more certainty than before. By enabling repairs to be made on time, they have the potential to help improve the reliability of the PMAs’ electricity and to continue its existing rate-competitiveness. Hence, these actions can help to secure the continued marketability of the PMAs’ electricity and promote the repayment of the appropriated and other debt. However, these various actions may reduce opportunities for congressional oversight of the operation, maintenance, and repair of federal plants and related facilities and reduce flexibility to make trade-offs among competing and changing needs.
Aware of the problems involved in securing funding through federal appropriations, the Bureau, the Corps, the PMAs, and PMA customers have begun to take actions to secure the funding that is required to maintain and repair the federal hydropower plants and related facilities. An example is the Bureau’s, the Corps’, and Bonneville’s agreements in the Pacific Northwest, concluded from 1993 to 1997 and made pursuant to the Energy Policy Act and other statutes. According to Bureau officials, these funding arrangements were necessitated by budget cuts during the 1980s. They added that the need to fund about $200 million in electricity-related maintenance in the near future would limit the agency’s ability to pay for maintenance and repairs in a steady, predictable fashion. These officials said that, as a result of these funding shortfalls, maintenance backlogs accumulated and the generating availability of the federal power plants in Bonneville’s service area declined from 92 to 82 percent. In response, in 1988, the Secretary of the Interior requested that the Congress authorize Bonneville to directly fund certain maintenance costs. Such authority was granted in provisions of the Energy Policy Act, which authorized the funding agreements between Bonneville, the Bureau, and the Corps. Under these agreements, Bonneville’s electricity revenues will directly pay for over $1 billion of routine operations and maintenance as well as capital repairs of the Bureau’s and the Corps’ electricity assets in Bonneville’s service territory. The agencies expect to be able to plan and pay for maintenance and repairs in a systematic, predictable manner over several years. The agencies expect that the resulting funding will allow them to respond with greater flexibility and speed to the need to repair hydropower plant equipment. According to Bonneville, the funding agreements will create opportunities for increased availability of hydropower, financial savings, and increased revenues. In addition, Bonneville believes that increased demand for its electricity and the increased financial resources provided by the funding agreements will improve its competitive viability and its ability to recover the full cost of the electricity system from which it markets power. The Bureau and Bonneville signed two agreements for Bonneville’s electricity revenues to pay up front for capital repairs and improvements as well as ordinary operations and maintenance of the Bureau’s electricity assets in Bonneville’s service area. In January 1993, the Bureau and Bonneville executed an agreement that provided for funding by Bonneville of specific capital items, as provided by subsequent “subagreements.” To date, several subagreements have been signed under which Bonneville will pay, up front, up to about $200 million for major repairs of the Bureau’s hydropower plants in Bonneville’s service territory. For example, Bonneville will spend about $125 million from 1994 through 2007 for upgrades of the turbines of 18 generating units at the Bureau’s Grand Coulee power plant, in Washington State. In addition, in December 1996, the Bureau and Bonneville executed an agreement whereby Bonneville agreed to directly pay for the Bureau’s annual operations and maintenance costs as well as selected “extraordinary maintenance,” replacements, and additions. The parties anticipated that funding under the terms of the agreement would total about $243 million—ranging from about $47 million to about $50 million per year from fiscal year 1997 through fiscal year 2001.
The Corps and Bonneville have also signed two agreements that allow Bonneville’s electricity funds to directly pay for the operation, maintenance, and repair of the Corps’ electricity assets. The first agreement, signed in 1994, was implemented by a series of subagreements, under which about $43 million in capital improvements and emergency repairs are being funded by Bonneville’s electricity revenues. For example, under one subagreement, about $29 million will be spent for reliability improvements at 21 of the Corps’ power plants throughout Bonneville’s service area. Bonneville is also paying for over $5 million in repairs at The Dalles, Oregon, power plant that were requested but not approved under the appropriations process. Other work at The Dalles is currently funded by appropriations. In December 1997, Bonneville and the Corps signed a second agreement under which Bonneville will directly pay for annual operations and maintenance expenses, for Bonneville’s share of joint project costs allocated to electricity revenues for repayment, and for some small replacements at the Corps’ projects from which Bonneville markets electricity. The implementation of this agreement will begin in fiscal year 1999 with an established budget of $553 million from fiscal year 1999 through fiscal year 2003—about $110 million per year. Because the implementation of the Pacific Northwest funding agreements is still relatively new, it is too early to determine if they will result in improvements to the availability factors of the Bureau’s and the Corps’ hydropower plants. At the same time, these efforts include a comprehensive attempt that, in our view, establishes systematic methods for identifying and budgeting for routine operations and maintenance, as well as for capital repairs, rehabilitations, and replacements of the federal hydropower plants in the region. For example, pursuant to the December 1996 funding agreement, the Bureau prepares an annual operations and maintenance budget by identifying major line items for each project during the next fiscal year. The Bureau also prepares 5-year budgets on the basis of estimated budgets for each year included. The funding totals for the 5-year period cannot be exceeded, although any amount spent in a year below the targeted level is carried over to future years in a “savings account.” The Bureau and Bonneville formed a “Joint Operating Committee” to vote on and approve the annual and 5-year budgets as well as any modifications to the budgets. Similarly, the December 1997 operations and maintenance funding agreement between the Corps and Bonneville features annual and 5-year budgets that are voted upon and approved by the Joint Operating Committee. Five-year budget totals cannot be exceeded without the Committee’s approval, but the reallocation of funds is possible. In addition, if “savings” occur in any year, they are shared between Bonneville and the Corps and/or carried over to future years. Annual budgets are also proposed and approved less than 1 year in advance instead of 2 to 3 years in advance, as under the traditional federal appropriations process. These budget practices reflect more immediate considerations and, in the views of agency officials, are more realistic than budgets that have to be compiled 2 to 3 years ahead of time.
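To make the carryover mechanics of the 5-year budgets concrete, the following sketch models the "savings account" described above. The annual target and spending figures are hypothetical, and the simple fixed-target rule is an assumption; the actual budgets are set, and may be modified, by the Joint Operating Committee.

```python
# Illustrative sketch of the "savings account" carryover under the
# Pacific Northwest funding agreements. All figures are hypothetical;
# actual annual and 5-year budgets are approved by the Joint Operating
# Committee, and 5-year totals cannot be exceeded without its approval.

ANNUAL_TARGET = 48.0  # $ millions per year (illustrative)
YEARS = 5

def run_budget(actual_spending):
    savings = 0.0
    for year, spent in enumerate(actual_spending, start=1):
        # Underspending adds to the carryover; overspending draws it down.
        savings += ANNUAL_TARGET - spent
        print(f"Year {year}: spent {spent:5.1f}, carryover balance {savings:5.1f}")
    assert sum(actual_spending) <= ANNUAL_TARGET * YEARS, "5-year cap exceeded"

run_budget([45.0, 47.5, 50.0, 46.0, 49.5])
```

The carryover balance is what gives the agencies room to absorb an over-target year, such as the third year above, without seeking new appropriations, provided the 5-year total stays within the approved cap.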
The potential advantages of the funding agreements in the Pacific Northwest include enhancing the agencies’ ability to accumulate funds in the “savings accounts” to pay for emergency repairs, as provided by the agreements. According to Bureau officials, the savings can be reallocated between projects on the basis of a telephone call between the Bureau and Bonneville. Some nonfederal utilities mentioned a similar ability to quickly access reserve funds to meet emergency needs when they discussed their planning and budgeting processes with us. In addition to the funds in savings, a variety of funding sources can be used to pay for maintenance and repairs, including emergency actions. For instance, according to Bureau officials, if unexpected repairs need to be performed, moneys to pay for them may be obtained via a subagreement between the Bureau and Bonneville. Work on the repairs could begin prior to Bonneville’s and the Bureau’s signing of the subagreement. According to Corps officials, some ongoing rehabilitations of the Corps’ Bonneville and The Dalles projects will continue to be funded with appropriations; however, maintenance or repairs to be supported under the funding agreements will no longer be included in the Corps’ budget requests for appropriations. To pay for the maintenance and repair of the Bureau’s and the Corps’ hydropower plants, Bonneville can use its cash reserves or its bonding authority. Because the agreements provide more secure and predictable funding, the Bureau and the Corps have begun to exercise greater flexibility in how they maintain and repair their hydropower plants. Consistent with evolving market competition and with the actions of nonfederal utilities, Bureau and Corps officials said their personnel will rely less on traditional, prescheduled maintenance and more on newer, more flexible maintenance philosophies, such as reliability-centered maintenance. For example, according to Bureau officials at the agency’s Pacific Northwest region, staff at the region’s electricity projects schedule maintenance and repairs, in part, by using a database that shows when maintenance and repairs were last performed and when a part may need maintenance or repairs in the future. Repairs or upgrades will increasingly be made “just in time” on the basis of test results. Bureau officials characterized their maintenance philosophy as evolving to be more responsive to Bonneville’s marketing requirements as well as to reduce costs. According to these officials, because they now have funds that can be used to pay for emergency repairs, they can take prudent risks in managing their maintenance requirements by deferring some repairs that perhaps can be made just in time or by repairing other items that may have higher priority. For example, according to the managers of the Grand Coulee power plant, the new funding flexibility allowed the Bureau to reschedule the spending of up to about $3 million on repairs at the plant. Direct contributions from customers have also been suggested and implemented as one way to improve how the Bureau, the Corps, and the PMAs pay for repairs. Although the use of nonfederal funds to finance federal agencies’ operations is generally prohibited unless specifically authorized by law, several forms of alternative financing have been statutorily authorized by the Congress.
Supporters of alternative financing, among them officials from the Bureau, the Corps, the PMAs, and the PMAs’ electricity customers, note that alternative financing allows repairs and improvements to be made more expeditiously and predictably than through the federal appropriations process. They believe that alternative financing could provide more certainty in funding repairs and help address problems such as deferred maintenance at federal plants. Through one type of authorized arrangement, referred to, among other names, as “advance of funds,” nonfederal entities, such as preference customers, pay up front for repairs and upgrades of the federal hydropower facilities. Under federal statutes, such funding must be ensured before work on a project can be started. Such funding arrangements have been proposed and/or implemented in a variety of PMA systems, most prominently Western’s Pick-Sloan Program in Montana, North Dakota, South Dakota, and several neighboring states; the Loveland Area Projects in Colorado and nearby states; the Hoover and Parker-Davis projects in Arizona and Nevada; and the Central Valley Project in California. For example, under an agreement executed on November 12, 1997, by the Bureau, Western, and Western’s power customers within the Central Valley Project, the customers agreed to pay up front for electricity-related operations and maintenance and certain capital improvements. These activities are specified in a funding plan developed by a Governance Board that represents the Bureau, Western, and the electricity customers. In approving spending proposals, the Bureau and Western have veto power, and two-thirds of the customers represented on the Board must approve a proposal for it to pass. The customers will be reimbursed for their contributions by credits on their monthly electricity bills. However, advance of funds agreements generally are limited in their ability to free the funding for the maintenance and repair of federal electricity assets from the uncertainties of the federal budget process. They supplement rather than completely replace federal appropriations and, therefore, may enhance the certainty of funding for repairs and maintenance but not necessarily provide more speed in obtaining that funding. For example, in Bonneville’s service territory, Bonneville, the Bureau, and the Corps can budget 1 year in advance; however, under the Central Valley Project agreement, the Governance Board approves electricity-related operations and maintenance budgets 3 years in advance to coincide with the federal budget and appropriation cycles for the Bureau and Western. The dovetailing is necessary because federal appropriations are counted upon to fund the balance of the maintenance and repairs of the federal electricity assets. Depending on how they are implemented, the direct funding of maintenance and repairs by electricity revenues and agreements for funding by customers pose the risk that opportunities for oversight by external decisionmakers, such as the Congress, will be diminished. Reduced oversight also limits the Congress’s flexibility to make trade-offs among competing needs. As the Congress and other decisionmakers examine the need for new arrangements to fund the maintenance and repair of federal hydropower plants, they may need to weigh any reduced opportunities for oversight against the potential benefits of these funding arrangements.
At this time, the Bureau, the Corps, and the PMAs provide such information as the history and background of their power plants; the power plants’ generating capacity and electricity produced; annual electricity revenues, costs, and repayment status; and related environmental and water quality issues to the Congress, other decisionmakers, and the public in general. The means of communicating this information include the PMAs’ annual reports; the PMAs’, the Bureau’s, and the Corps’ Internet Web sites; and letters to the appropriate congressional committees. As requested by the Chairman, Subcommittee on Water and Power, House Committee on Resources, we examined (1) the reliability of the Bureau’s and the Corps’ hydropower plants in generating electricity compared with the reliability of nonfederal hydropower plants; (2) reasons why the Bureau’s and the Corps’ plants may be less reliable than nonfederal plants and the potential implications of reduced reliability; and (3) actions taken to obtain funding to better maintain and repair the Bureau’s and the Corps’ plants. To compare the generating reliability of the Bureau’s and the Corps’ hydropower plants with that of nonfederal ones, we obtained, analyzed, and contrasted power plant performance data, including availability and outage factors, from the Bureau, the Corps, and the North American Electric Reliability Council (NERC). NERC is a membership organization of investor-owned utilities; federal entities; rural electric cooperatives; state, municipal, and provincial utilities; independent power producers; and power marketers, whose mission is to promote the reliability of the electricity supply for North America. NERC compiles statistics on the performance of classes of generating units, such as fossil, nuclear, and hydro. The statistics are calculated from data that electric utilities report voluntarily to NERC’s Generating Availability Data System. The data reported to NERC exclude many hydropower units, which, on average, are smaller in generating capacity than those that report to NERC. According to the Department of Energy’s Energy Information Administration, as of January 1998, hydropower in the United States was generated by a total of 3,493 generating units with a capacity of 91,871 megawatts (MW). As shown in table I.1, the federal and nonfederal hydropower generating units included in our report totaled 1,107 generating units and had a total generating capacity of 70,005 MW, or an average generating capacity of 63.2 MW per unit. Therefore, the nonreporting units totaled 2,386 and had a total generating capacity of 21,866 MW, or an average generating capacity of 9.2 MW per unit. To compare the performance of federal hydropower generating units with that of nonfederal units, we used data on hydropower generating units from NERC’s database that excluded federal hydropower generating units. We did not evaluate NERC’s validation of the industry’s data, nor the specific input data used to develop the database. We collected 1998 availability and outage data for the Bureau and the Corps, but we did not present these data in our graphs because comparative data for the nonfederal units were not available from NERC at the time we completed our study. We also did not evaluate the specific input data used by the Corps and the Bureau to develop their databases on the performance of federal generating units.
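The unit counts and capacities cited above can be cross-checked with simple arithmetic. The short sketch below reproduces the per-unit averages for the units included in our analysis and for the nonreporting units from the totals cited; the figures come directly from the text, and only the calculation itself is illustrative.

```python
# Cross-check of the unit counts and capacities cited above
# (EIA totals versus the units included in our analysis).

total_units, total_mw = 3_493, 91_871        # all U.S. hydropower units (Jan. 1998)
included_units, included_mw = 1_107, 70_005  # federal and nonfederal units analyzed

nonreporting_units = total_units - included_units  # 2,386 units
nonreporting_mw = total_mw - included_mw           # 21,866 MW

print(f"included:     {included_mw / included_units:.1f} MW per unit")          # 63.2
print(f"nonreporting: {nonreporting_mw / nonreporting_units:.1f} MW per unit")  # 9.2
```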
Table I.1 depicts some of the characteristics of the hydropower generating units included in our analysis of the performance of the Bureau’s, the Corps’, and industry’s generating units. Data for nonfederal units are from 32 nonfederal utilities. [Table I.1 reports, for each group of units: the number of generating units, the average age of the units (years), the total nameplate capacity of the units (MW), and the average nameplate capacity per unit (MW).] We discussed the limitations of these performance indicators with officials from the Bureau, the Corps, the Tennessee Valley Authority, investor-owned utilities, publicly owned utilities, and other experts in the electric utility industry. To explore why federal hydropower plants sometimes performed at lower levels, we obtained and analyzed various reports on the subject and discussed the topic with representatives of Bonneville, the Bureau, the Corps, various power marketing administration (PMA) power customers or their associations, investor-owned utilities, and nonfederal, publicly owned utilities. In our analysis, we included information obtained from the Tennessee Valley Authority, a federally owned utility with high performance indicators and significant hydropower resources. In addressing the implications of any reduced performance by federal plants, we interviewed industry experts, representatives of investor-owned and publicly owned utilities, and officials of PMA power customers. We also examined studies about the changes in electricity markets and contacted national and regional trade associations. Moreover, we addressed alternative ways of ensuring the enhanced funding of maintenance and repairs of the federal hydropower plants and related facilities. In this regard, to the extent possible, we relied upon previous work that we had performed on federal power, especially work performed during two prior reviews: Federal Power: Options for Selected Power Marketing Administrations’ Role in a Changing Electricity Industry (GAO/RCED-98-43, Mar. 6, 1998) and Federal Power: Outages Reduce the Reliability of Hydroelectric Power Plants in the Southeast (GAO/T-RCED-96-180, July 25, 1996). Moreover, we examined the Corps’, the Bureau’s, and the PMAs’ efforts to have power revenues directly finance the maintenance and repair of federal hydropower assets. In this regard, we contacted the Bureau, the Corps, Bonneville, Western, and the PMAs’ power customers and examined various agreements and arrangements to pay for the maintenance and repair of the federal hydropower plants and related facilities. Our work was performed at various locations, including the offices of federal and nonfederal parties. Regarding the Corps, these locations included the agency’s headquarters, Washington, D.C.; the Northwestern Division, Portland, Oregon; the Portland, Oregon, District; and the Nashville, Tennessee, District. Because the Corps’ power operations have been affected by the need to accommodate the migrations of salmon, we also contacted the Walla Walla and Seattle, Washington, Districts and the Corps’ Bonneville (Oregon) power plant. We visited the Bureau’s offices at the Department of the Interior in Washington, D.C.; Denver, Colorado; the Central Valley Operations Office, Sacramento, California; the Pacific Northwest Region, Boise, Idaho; and the Grand Coulee, Washington, power plant. To gain a perspective on how another federal electricity-generating entity operated its hydropower program, we interviewed TVA officials in Chattanooga, Tennessee.
Moreover, we contacted the PMAs at locations including their Power Marketing Liaison Office, U.S. Department of Energy, Washington, D.C.; Bonneville in Portland, Oregon; Southeastern in Elberton, Georgia; Southwestern in Tulsa, Oklahoma; and Western in Golden and Loveland, Colorado, and Folsom, California. Our scope included contacting several PMA customers or associations that represent PMA customers, including the City of Roseville, California; the Colorado River Energy Distributors Association, Tucson, Arizona; the Midwest Electric Consumers Association, Denver, Colorado; the Northern California Power Agency, Roseville, California; and the Sacramento (California) Municipal Utility District. In addition, we contacted several investor-owned utilities, utility holding companies, and nonfederal publicly owned utilities (other than those previously listed) that operate significant amounts (collectively, over 17,000 MW) of hydropower-generating capacity; they included the Chelan County (Washington) Public Utility District; Idaho Power Company; Grant County (Washington) Public Utility District; Douglas County (Washington) Public Utility District; the New York Power Authority in Niagara, New York; Pacific Gas and Electric Company, Sacramento, California; South Carolina Electric and Gas; and the Southern Company in Atlanta, Georgia. Our work was performed from July 1998 through February 1999 in accordance with generally accepted government auditing standards. On March 6, 1999, the Department of Energy provided technical suggestions for the draft report but deferred to the comments of the Bureau and the Corps on more substantive matters. For example, Energy suggested that we clarify the differences between “reliability” and “availability.” The report already discussed that plants are viewed as reliable, within the electric utility industry, if they can function without failure over a specific period of time or amount of usage. The report also demonstrates that there are several ways of measuring reliability, including the availability factor and outage factors. Accordingly, we made no substantive changes to the report. The following are GAO’s comments on the Department of the Interior’s (including the Bureau of Reclamation’s) letter dated March 12, 1999. Interior provided us with comments that were intended to clarify its position regarding reliability measures, operation and maintenance, and funding mechanisms. 1. In its cover letter and general comments, Interior stated that the report does a good job of recognizing the funding needs for operating and maintaining electrical-generating facilities. However, Interior stated that the report does not articulate in the executive summary, as it does in the body, the initiatives undertaken by the Bureau and the Corps to identify alternative funding sources. We believe that the executive summary adequately addresses the issue of the initiatives undertaken by the Bureau, the Corps, and the PMAs, particularly as they relate to efforts in the Pacific Northwest. Therefore, we did not revise our report. 2. In its cover letter and in general comments, Interior stated that the report does not articulate the fact that the Bureau’s facilities are operated to fulfill multiple purposes, such as providing water for irrigation, municipal and industrial uses, fish and wildlife enhancement, and electricity generation.
According to the Bureau, if water is frequently not available for generating electricity, the availability factor is not a good indicator for comparing the reliability of the Bureau’s hydropower-generating units with other units that are not operated under multipurpose requirements. Interior also suggested that the nonfederal projects are freer to maximize power and revenues because they are less affected by multiple purposes. We disagree with the Bureau’s position that the report does not recognize that water is used for multiple purposes and that this affects how electricity is generated. For example, the executive summary recognizes that the Bureau and the Corps generate electricity subject to the use of water for flood control, navigation, irrigation, and other purposes. In addition, the report recognizes, in chapter 2, that the Bureau and other utilities use periods of low water and low demand to perform scheduled maintenance and repairs. This would tend to decrease the availability factors of these entities. The report also states that this practice may be regarded as a good business practice. We further disagree that the availability factor is not a good basis for comparing the reliability of different projects. The availability factor is a widely accepted measure of reliability that has validity, as long as it is understood in terms of other factors that affect how plants are operated. Moreover, we disagree that other utilities necessarily operate hydropower plants that are affected less by multiple purposes. In fact, as we have noted previously, for other utilities, the multiple uses of the water are regulated through conditions in the utilities’ hydropower-plant-operating licenses, which are issued by the Federal Energy Regulatory Commission. The Bureau contends that the availability of its plants is affected by the fact that more of the Bureau’s plants are located on pipelines, canals, and water diversion facilities than most nonfederal plants. We recognized this point in chapter 2. 3. In the cover letter and in its general comments, Interior stated that the forced outage factor is a better indicator of reliability than the availability factor for multiple-purpose facilities. In addition, in its cover letter, Interior indicated that the Bureau’s benchmarking studies indicate that its plants compare favorably with other plants in the area of reliability. Regarding forced outage factors, our report recognizes that there are several indicators of reliability and that the forced outage factor is one of the most meaningful. More generally, we disagree with Interior’s conclusion that the Bureau’s plants are as reliable as those of other power providers. As shown in chapter 2 of this report, although the Bureau’s forced outage factors are on par with those of nonfederal utilities, the Bureau’s availability factor is lower, although it has been improving. Moreover, the Bureau’s scheduled outage factors are higher than nonfederal utilities’. In its general comments, Interior adds that reliability is a measure of whether a plant can operate when it is needed, while availability is a measure of a unit’s ability to operate within a given time period. These factors, stated the Bureau, can be equated only when a plant is required to operate for the full time of the period. The Bureau added that optimum availability is unique to each plant, depending on such factors as design, time, water supply, location, and cost.
As stated in our report, reliability is a measure of a plant’s ability to operate over a specific period or amount of usage. We further agree that the significance of an availability factor should be understood within the context of various factors, some of which are mentioned by the Bureau. We revised chapter 1 to recognize that assessing the performance of a hydropower plant or unit by examining its availability factor calls for understanding additional variables. We added language to reflect that the availability factor needs to be understood in terms of such factors as the kind of demand the plant meets (e.g., whether it meets peak demand), the availability of water throughout the year, and the purposes satisfied by the dam and reservoir. 4. According to Interior, the report implies that the Bureau delays repairs and maintenance, pending the availability of funds. The Bureau stated that it performs repairs and maintenance when needed by reprioritizing funds. We revised the report in chapter 3 to recognize the Bureau’s statement that it reprioritizes funding. However, the example of the repairs delayed because of delayed funding at the Bureau’s Shasta, California, project illustrates our point that repairs and maintenance are delayed when funds are not forthcoming. 5. Interior stated that the Bureau has undertaken a program to improve its performance by benchmarking its electricity operations against the rest of the industry and is continually striving to improve, given the legal and financial constraints encountered. Our report does not imply that these agencies are operating in an unbusinesslike manner but shows that the Bureau’s availability has improved in the face of financial and budgeting constraints. We revised chapter 1 to recognize the Bureau’s benchmarking effort. 1. Interior commented that the title of the report implies that the Bureau has reduced its operation and maintenance program. Interior stated that the Bureau has always implemented preventive and reliability-centered maintenance and that adequate funding for these activities has been available. Chapter 4 of the report recognizes that the Bureau, in particular in the Pacific Northwest, will increasingly practice reliability-centered maintenance and practiced preventive maintenance in the past. However, the efforts of the Bureau’s field locations to engage in direct or advance funding arrangements serve as evidence that faster and more predictable funding is needed. 2. We added “transmission system” to the report, as requested by Interior. 3. We revised the report to indicate that the Bureau’s forced outage rate from 1993 through 1997 was the same as the nonfederal sector’s. 4. According to Interior, our comparing plants of different size and type may distort our conclusions about the performance of the federal and nonfederal plants. We disagree. As shown in appendix I, the federal and nonfederal electrical-generating units in our analysis were about the same size because our analysis of nonfederal units excluded about 2,400 smaller ones that averaged about 9 MW of generating capacity. In addition, our decision to include both conventional generators and pump generators in our analysis was based on the fact that the Corps’ performance data did not separate its conventional and pump units. The Bureau itself, in its 1996 benchmarking study, included seven pump units (about 323 MW) at its Grand Coulee and Flatiron plants as conventional generating units.
Moreover, although the Bureau has generating units from 1 MW to 700 MW, it used only two MW-size categories (1 to 29 MW, and 30 MW and larger) in comparing the availability and outage factors of its plants to the industry’s in its 1996 benchmarking study. In addition, our analysis of the availability factors of the Bureau’s hydropower-generating units from 1993 through 1997 showed that among pump generators as well as the size categories zero to 10 MW, 11 to 50 MW, 51 to 100 MW, and 101 to 200 MW, the Bureau’s hydropower units had lower availability factors than the industry as a whole. 5. According to Interior, although the customers funded up to $22 million in repairs for Shasta, the rewind contract was awarded for $8.8 million, and total costs to replace the turbines were estimated at $12.2 million. This point is expanded upon under comment 12. 6. Interior disagrees with the statement in the draft report that advance or direct funding arrangements decrease opportunities for congressional oversight. We revised the report to state that, although these arrangements could diminish opportunities for oversight, the Bureau, the Corps, and the PMAs provide such information as the history and background of their power plants; the plants’ generating capacity and electricity produced; annual electricity revenues and costs; and related environmental and water availability issues to the Congress, other decisionmakers, and the public. The means of communicating this information include the PMAs’ annual reports; the PMAs’, the Bureau’s, and the Corps’ Internet Web sites; and letters to the Congress. 7. According to Interior, our statement that “the longer the scheduled outage, the less efficient the maintenance program” is out of place as it pertains to federal plants. The statement would apply primarily to run-of-the-river plants, according to Interior. The Department noted that federal plants are not allowed to earn more revenues and that outages do not have an impact on revenues if water is not available for generating electricity. We believe our report sufficiently addresses these points. We have already noted that performing scheduled outages during times of low water or low demand may constitute good business practice. In addition, we have noted the need to understand such factors as the kind of demand a plant meets (for instance, whether it meets peak demand) and the availability of water for generating power. Our report also states that, as markets evolve to become more competitive, operating plants at higher availability factors may allow the PMAs and utilities to take advantage of new opportunities to earn revenue by selling ancillary services. In addition, we continue to believe that, all things being equal, having plants on line for longer periods of time is also good business practice, as stated in the Bureau’s 1996 benchmarking report. 8. As suggested by Interior, we revised the report to read “Western Systems Coordinating Council.” 9. As suggested by Interior, we revised the report to reflect that three of five units at Shasta were repaired. The other two were not. 10. In response to Interior’s comment, we revised the report to reflect that while the Bureau defers noncritical items, it does not defer items it deems to be critical. Interior also notes that unfunded maintenance requirements do not necessarily indicate a deferred maintenance situation. In our view, any maintenance requirements that are put off until the future are deferred.
However, we revised the report to state that deferred maintenance is not problematic as long as it can be managed. 11. As requested by Interior, we added “due to limited and declining federal budgetary resources.” 12. Interior clarified that the costs of rewinding the Shasta units decreased from $10.5 million (low bid) in 1994 to $8.8 million in 1996. The rewind contract was executed in 1996 to increase the rating to 142 MW per unit versus the higher-priced rewind in 1994 to 125 MW per unit. Most importantly, the $21.5 million commitment includes the replacement of turbines in three units that were not included in earlier cost estimates. Because of the new information provided regarding the nature of the additional work at Shasta, we revised our report in chapter 3 to state that the Bureau expanded the scope of work to be performed at the plant. 13. As suggested by Interior, we revised the text to state that the funding arrangements in the Pacific Northwest were necessitated by budget cuts during the 1980s. Also, the need to fund about $200 million in maintenance in the near term would limit the Bureau’s ability to pay for maintenance and repairs in a steady, predictable fashion. 14. As suggested by Interior, we deleted the word “electricity” from the reference to the Bureau’s operation and maintenance budget. 15. As suggested by Interior, we revised the text to eliminate references to “separate” Joint Operating Committees. 16. As suggested by Interior, we changed “defer spending” to “reschedule.” On March 16, 1999, the Department of Defense (including the Army Corps of Engineers) provided us with a letter acknowledging that the Corps’ verbal comments, discussed with us at a March 10, 1999, meeting, had been resolved. The primary verbal comment was that we did not reflect changes in the performance of the Corps’ hydropower plants that occurred in fiscal year 1998. The Corps suggested that we include these data in various graphs in our report. As discussed with Corps officials, we addressed the changes in the Corps’ performance in the text of our report, primarily in chapter 2. However, we declined to show changes in the graphs because the 1998 data were not available for the nonfederal hydropower generating units at the time we completed our review. The following are GAO’s comments on the Department of Energy’s (including the Bonneville Power Administration’s) letter dated March 11, 1999. On March 11, 1999, Bonneville provided us with general and specific comments regarding our draft report. Bonneville noted that, in its view, we “sought to conduct a fair assessment of the U.S. Army Corps of Engineers (Corps) and the Bureau of Reclamation (Reclamation) facilities during the time of the study.” 1. Bonneville understood that we were not requested to evaluate the direct-funding agreements in the Pacific Northwest. However, Bonneville suggested that we add language to the report to reflect that the funding agreements between itself, the Bureau, and the Corps contain a systematic approach to maintenance planning and investment that creates opportunities for increased hydropower availability, financial savings, and increased revenues. We believe that our report addresses these points. However, we added language stating that Bonneville believes these enhancements will be attained as a result of the funding agreements. 2. As noted by Bonneville, our report stated that the availability factors of the Bureau’s and the Corps’ hydropower plants in the Pacific Northwest are lower than in the rest of the nation.
Bonneville suggested that we clarify the report, in the executive summary, by stating that Bonneville, the Bureau, and the Corps recognized the lower reliability of the plants in the Pacific Northwest and took action through a series of direct-funding agreements to address the problem. Bonneville further suggested a clarification that during the period 1993 through 1997, the federal agencies undertook extensive upgrades and rehabilitations of the Bureau’s plants partly as a result of the increased funding flexibility provided by the direct-funding agreements. We agreed that these statements would clarify the report and incorporated them. 3. Bonneville noted that the draft report stated that funding maintenance and repair actions through direct customer contributions or through direct payments from the PMAs’ revenues reduced opportunities for congressional oversight. According to Bonneville, the funding arrangement in the Pacific Northwest was specifically supported by the Senate Appropriations Committee in 1997. Bonneville also stated that its annual congressional budget submission includes programmatic information on the operations and maintenance funding that Bonneville plans to provide for the Bureau and the Corps. In response to this and other comments, we revised the executive summary and chapter 4 to show that information is now being made available to the Congress and others about the operation of the federal power program. For instance, the Bureau, the Corps, and the PMAs provide such information as the history and background of their power plants; the plants’ generating capacity and electricity produced; annual electricity revenues and costs; and related environmental and water quality issues to the Congress, other decisionmakers, and the public. The means of communicating this information include the PMAs’ annual reports; the PMAs’, the Bureau’s, and the Corps’ Internet Web sites; and letters to the appropriate congressional committees. 4. We revised the executive summary as recommended by Bonneville by adding “under the traditional appropriations process.” 5. Bonneville believed that the location of figure 1 in the executive summary was confusing, since it discussed national availability factors but was positioned over the discussion of availability in the Pacific Northwest. We agree and have relocated the figure. 6. The draft’s executive summary stated that some of Bonneville’s power customers are leaving the agency for less expensive sources. Bonneville stated that some customers left the power administration in an earlier period but that the situation today is significantly different, with demand for electricity and other products exceeding the supply. Bonneville stated that increasing demand for its electricity as well as the increased financial resources provided by its funding agreements with the Bureau and the Corps will improve its competitive viability and ability to recover the full cost of the Federal Columbia River Power System. We agreed and revised the report in the executive summary and chapter 4. 7. Bonneville suggested that the final report recognize that, for large hydropower systems, the ability to earn electricity revenues depends on the availability of water and of operable hydropower-generating units. These conditions and other factors must be considered to optimize the maintenance program for the plants from which Bonneville markets electricity. We agreed and revised chapter 3 accordingly. 8.
As suggested by Bonneville, we added language to chapter 3 to the effect that, like the Douglas County Public Utility District, Bonneville will be able to quickly provide access to funds and establish reserve funds through agreements whereby its funds directly pay for the operation, maintenance, and repair of the Bureau’s and the Corps’ hydropower plants. Staff who contributed to this report: Peg Reese, Philip Amon, Ernie Hazera, and Martha Vawter.
The X-33 and X-34 programs were part of an effort that began in 1994—known as the Reusable Launch Vehicle Technology/Demonstrator Program (Reusable Launch Vehicle Program)—to pave the way to full-scale, commercially developed, reusable launch vehicles reaching orbit in one stage. In embarking on the Reusable Launch Vehicle Program, NASA sought to significantly reduce the cost of developing, producing, and operating launch vehicles. NASA’s goal was to reduce payload launch costs from $10,000 per pound on the space shuttle to $1,000 per pound. It planned to do so, in part, by finding “new ways of doing business,” such as using innovative design methods, streamlining acquisition procedures, and creating industry-led partnerships with cost sharing to manage the development of advanced technology demonstration vehicles. The vehicles were seen as the “stepping stones” in what NASA described as an incremental flight demonstration program. The strategy was to force technologies from the laboratory into the operating environment. The X-34 Project started in 1995 as a cooperative agreement between NASA and Orbital Sciences Corporation (Orbital). The project was to demonstrate streamlined management and procurement, industry cost sharing and lead management, and the economics of reusability. However, the industry team withdrew from the agreement in less than 1 year for a number of reasons, including changes in the projected profitability of the venture. NASA subsequently started a new X-34 program with a smaller vehicle design. It was intended only as a flight demonstration vehicle to test some of the key features of reusable launch vehicle operations, such as quick turnaround times between launches. Under the new program, NASA again selected Orbital as its contractor in August 1996, awarding it a fixed-price, $49.5 million contract. Under the new contract, Orbital was given lead responsibility for vehicle design, fabrication, integration, and initial flight testing for powered flight of the X-34 test vehicle. The contract also provided for two options, which were later exercised, totaling about $17 million for 25 additional experimental flights and, according to a project official, other tasks, including defining how the flight tests would be undertaken. Under the new effort, NASA’s Marshall Space Flight Center was to develop the engine for the X-34 as part of its Low Cost Booster Technology Project. The initial budget for this development was about $18.9 million. In July 1996, NASA, Lockheed Martin Corporation, and Lockheed Martin’s industry partners entered into a cooperative agreement for the design, development, and flight testing of the X-33. The X-33 was to be an unmanned technology demonstrator. It would take off vertically like a rocket, reaching an altitude of up to 60 miles and speeds of about Mach 13 (13 times the speed of sound), and land horizontally like an airplane. The X-33 would flight-test a range of technologies needed for future launch vehicles, such as thermal protection systems, advanced engine design, and lightweight fuel tanks made of composite materials. The vehicle would not actually achieve orbit, but based on the results of demonstrating the new technologies, NASA envisioned being in a better position to make a decision on the feasibility and affordability of building a full-scale system. Under the initial terms of the cooperative agreement, NASA’s contribution was fixed at $912.4 million and its industry partners’ initial contribution was $211.6 million.
In view of the potential commercial viability of the launch vehicle and its technologies, the industry partners also agreed to finance any additional costs. During a test in November 1999, one of the fuel tanks failed due to separation of the composite surface. Following the investigation, NASA and Lockheed Martin agreed to replace the composite tanks with aluminum tanks. In February 2001, NASA announced it would not provide any additional funding for the X-33 or X-34 programs under its new Space Launch Initiative (SLI). The Space Launch Initiative is intended to be a more comprehensive, long-range plan to reduce high payload launch costs. NASA's goal is still to reduce payload launch costs to $1,000 per pound to low Earth orbit, but the effort is not limited to single-stage-to-orbit concepts. Specifically, the objective of SLI's Second Generation Reusable Launch Vehicle Program (2nd Generation Program) is to substantially reduce the technical, programmatic, and business risks associated with developing reusable space transportation systems that are safe, reliable, and affordable. NASA has budgeted about $900 million for the SLI initial effort and, in May 2001, it awarded initial contracts to 22 large and small companies for space transportation system design requirements, technology risk reduction, and flight demonstration. In subsequent procurements in mid-fiscal year 2003, NASA plans to select at least two competing reusable launch system designs. The following 2.5 to 3.5 years (through fiscal years 2005 or 2006) will be spent finalizing the preliminary designs of the selected space transportation systems, and maturing the specific technologies associated with those high-risk, high-priority items needed to develop the selected launch systems. Undertaking ambitious, technically challenging efforts like the X-33 and X-34 programs—which involve multiple contractors and technologies that have yet to be developed and proven—requires careful oversight and management. Importantly, accurate and reliable cost estimates need to be developed, technical and program risks need to be anticipated and mitigated, sound configuration controls need to be in place, and performance needs to be closely monitored. Such undertakings also require a high level of communication and coordination. Not carefully implementing such project management tools and activities is a recipe for failure. Without realistically estimating costs and risks, and providing the reserves needed to mitigate those risks, management may not be in a position to effectively deal with the technical problems that cutting-edge projects invariably face. In fact, we found that NASA did not successfully implement and adhere to a number of critical project management tools and activities. Specifically: NASA did not develop realistic cost estimates in the early stages of the X-33 program. From its inception, NASA officials considered the program to be high risk, with a success-oriented schedule that did not allow for major delays. Nevertheless, in September 1999, NASA's Office of the Inspector General (OIG) reported that NASA's cost estimate did not include a risk analysis to quantify technical and schedule uncertainties. Instead, the cost estimate assumed that needed technology would be available on schedule and as planned. According to the OIG, a risk analysis would have alerted NASA decision-makers to the probability of cost overruns in the program.
Since NASA’s contribution to the program was fixed—with Lockheed Martin and its industry partners responsible for costs exceeding the initial $1.1 billion—X-33 program management concluded that there was no risk of additional government financial contributions due to cost overruns. They also believed that the projected growth in the launch market and the advantages of a commercial reusable launch vehicle would provide the necessary incentive to sustain industry contributions. NASA did not prepare risk management plans for both the X-33 and X-34 programs until several years after the projects were implemented. Risk management plans identify, assess, and document risks associated with cost, resource, schedule, and technical aspects of a project and determine the procedures that will be used to manage those risks. In doing so, they help ensure that a system will meet performance requirements and be delivered on schedule and within budget. A risk management plan for the X-34 was not developed until the program was restructured in June 2000. Although Lockheed Martin developed a plan to manage technical risks as part of its 1996 cooperative agreement for the X-33, NASA did not develop its own risk management plan for unique NASA risks until February 2000. The NASA Administrator and the NASA Advisory Council have both commented on the need for risk plans when NASA users partnering arrangements such as a cooperative agreement. Furthermore, we found that NASA’s risk mitigation plan for the X-33 program provided no mechanisms for ensuring the completion of the program if significant cost growth occurred and/or the business case motivating industry participation weakened substantially. Sept. 24, 1999. responsibility for key tasks and deliverables and provide a yardstick by which to measure the progress of the effort. According to the OIG, NASA did not complete a configuration management plan for the X-33 until May 1998—about 2 years after NASA awarded the cooperative agreement and Lockheed Martin began the design and development of a flight demonstration vehicle. Configuration management plans define the process to be used for defining the functional and physical characteristics of a product and systematically controlling changes in the design. As such, they enable organizations to establish and maintain the integrity of a product throughout its lifecycle and prevent the production and use of inconsistent product versions. By the time the plan was implemented, hardware for the demonstration vehicle was already being fabricated. Communications and coordination were not effectively facilitated. In a report following the failure of the X-33’s composite fuel tank, the investigation team reported that the design of the tank required high levels of communication, and that such communication did not occur in this case. A NASA official told us that some NASA and Lockheed personnel, who had experience with composite materials and the phenomena identified as one of the probable causes for the tank’s failure, expressed concerns about the tank design. However, because of the industry-led nature of the cooperative agreement, Lockheed Martin was not required to react to such concerns and did not request additional assistance from NASA. The Government Performance and Results Act of 1993 requires federal agencies to prepare annual performance plans to establish measurable objectives and performance targets for major programs. 
Doing so enables agencies to gauge the progress of programs like the X-33 and X-34 and in turn to take quick action when performance goals are not being met. For example, we reported in August 1999 that NASA's Fiscal Year 2000 Performance Plan did not include performance targets that established a clear path leading to a reusable launch vehicle and recommended such targets be established. Without relying on these important project management tools up front, NASA encountered numerous problems on both the X-33 and X-34 programs. Compounding these difficulties was a decrease in the projected commercial launch market, which in turn lessened the incentive of NASA's X-33 industry partners to continue their investments. In particular, technical problems in developing the X-33's composite fuel tanks, aerospike engines, heat shield, and avionics system resulted in significant schedule delays and cost overruns. After two program reviews in 1998 and 1999, the industry partners added a total of $145.6 million to the cooperative agreement to pay for cost overruns and establish a reserve to deal with future technical problems and schedule delays. However, NASA officials stated that they did not independently develop their own cost estimates for these program events to determine whether the additional funds provided by industry would be sufficient to complete the program. Also, these technical problems resulted in the planned first flight being delayed until October 2003, about 4.5 years after the original March 1999 first flight date. After the composite fuel tank failed during testing in November 1999, according to NASA officials, Lockheed Martin opted not to go forward with the X-33 Program without additional NASA financial support. Lockheed Martin initially proposed adding $95 million of its own funds to develop a new aluminum tank for the hydrogen fuel, but also requested about $200 million from NASA to help complete the program. Such contributions would have increased the value of the cooperative agreement to about $1.6 billion, or about 45 percent (about $500 million) more than the $1.1 billion initial cooperative agreement funding. NASA did not have the reserves available to cover such an increase. The agency did, however, allow Lockheed Martin to compete, in its 2nd Generation Program solicitation, for the additional funds it believed it needed to complete the program. Similarly, NASA started the X-34 Project, and the related NASA engine development project, with limited government funding, an accelerated development schedule, and insufficient reserves to reduce development risks and ensure a successful test program. Based on a NASA X-34 restructure plan in June 2000, we estimate that NASA's total funding requirements for the X-34 would have increased to about $348 million—a 307-percent ($263 million) increase from the estimated $86 million budgeted for the vehicle and engine development projects in 1996. Also, since 1996, the projected first powered flight had slipped about 4 years from September 1998 to October 2002 due to the cumulative effect of added risk mitigation tasks, vehicle and engine development problems, and testing delays. Most of the cost increase (about $213 million) was for NASA-directed risk mitigation tasks initiated after both projects started. For example, in response to several project technical reviews and internal assessments of other NASA programs, the agency developed a restructure plan for the X-34 project in June 2000.
This plan included consolidating the vehicle and engine projects under one NASA manager. The project would be managed with the NASA project manager having the final decision-making authority; Orbital would be relegated to a more traditional subordinate contractor role. Under the plan, the contract with Orbital would also be rescoped to include only unpowered flights; Orbital would have to compete for 2nd Generation Program funding for all the powered flight tests. The plan's risk mitigation activities would have increased the X-34 project's funding requirements by an additional $139 million, which included about $45 million for additional engine testing and hardware; $33 million for an avionics redesign; $42 million for additional project management support and personnel; and $18 million to create a contingency reserve for future risk mitigation efforts. NASA is revising its acquisition and management approach for the 2nd Generation Program. Projects funded under the program will be NASA-led rather than industry-led. NASA also plans to increase the level of insight into the program's projects, for example, by providing more formal reviews and varying levels of project documentation from contractors depending on the risk involved and the contract value. NASA also required that all proposals submitted in response to its research announcement be accompanied by certifiable cost and pricing data. Finally, NASA discouraged the use of cooperative agreements since these agreements did not prove to be effective contract vehicles for research and development efforts where large investments are required. While it is too early to tell if the agency's measures aimed at avoiding the problems experienced in the X-33 and X-34 programs will be sufficient, these experiences show that three critical areas need to be addressed. These relate to (1) adequate project funding and cost risk provisions, (2) the effective and efficient coordination and communication required by many individual but related efforts, and (3) periodically revalidating underlying assumptions by measuring progress toward achieving a new safe, affordable space transportation system that meets NASA's requirements. First, the technical complexity of the 2nd Generation Program requires that NASA develop realistic cost estimates and risk mitigation plans, and accordingly set aside enough funds to cover the program's many projects. NASA plans to invest substantially more funds in the 2nd Generation Program than it did in the previous Reusable Launch Vehicle Program, and plans to provide reserves for mitigating program risk. For example, the agency plans to spend about $3.3 billion over 6 years to define system requirements for competing space transportation systems and related risk reduction activities. Most of this amount, about $3.1 billion, is for risk-reduction activities, such as the development of new lightweight composite structures, durable thermal protection systems, and new high-performance engine components. NASA officials told us that an important way they plan to mitigate risk is by ensuring adequate management reserves in the 15- to 20-percent range, or higher if needed. They also acknowledged the need for adequate program cost estimates on which to base reserve requirements. However, we are still concerned about the timely preparation of cost estimates.
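The X-34 cost figures cited above are easy to sanity-check. The short Python sketch below recomputes the growth percentage and sums the restructure plan's components; it is illustrative only, and the small differences from the quoted 307 percent, $263 million, and $139 million figures reflect rounding in the source.

# Figures as reported in the testimony, in millions of dollars. The source
# rounds its numbers, so recomputed values differ slightly from the quoted
# 307 percent, $263 million increase, and $139 million component total.
budgeted_1996 = 86
restructured_2000 = 348

increase = restructured_2000 - budgeted_1996   # 262 (quoted as about 263)
percent = 100 * increase / budgeted_1996       # about 305 (quoted as 307)
print(f"increase: ${increase} million ({percent:.0f} percent)")

# Components of the additional risk mitigation funding under the June 2000 plan:
components = {
    "engine testing and hardware": 45,
    "avionics redesign": 33,
    "project management support": 42,
    "contingency reserve": 18,
}
print(f"component total: ${sum(components.values())} million (quoted as $139 million)")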
The 2nd Generation deputy program manager stated that, based on the scope of the first contracts awarded, the program office planned to update its cost estimate this summer before NASA conducted a separate, independent technical review and cost estimate in September 2001. Thus, neither of these important analyses was completed prior to the first contract awards. We believe that until the program office completes its own updated cost estimate and NASA conducts an independent cost and technical review, a credible estimate of total program costs and the adequacy of planned reserves will not be available. Also, NASA is still in the process of developing the documentation required for the program, including a risk mitigation plan. NASA policy requires that key program documentation be finalized and approved prior to implementing a program. Second, NASA will face coordination and communication challenges in managing the 2nd Generation Program. As noted earlier, NASA recently awarded initial contracts for systems engineering and various risk reduction efforts to 22 different contractors. Yet to successfully carry out the program, NASA must, early on, have coordinated insight into all of the space transportation architectures being proposed by these contractors and their related risk reduction activities. Clearly, this will be a significant challenge. The contractors proposing overall architecture designs must be aware of all the related risk reduction development activities affecting their respective designs. It may also prove difficult for contractors proposing space transportation system designs to coordinate work with other contractors without a prime contractor-subcontractor relationship. NASA's own Aerospace Technology Advisory Committee, made up of outside experts, has also expressed serious concerns about the difficulty of integrating these efforts effectively. The need for improvement in coordination and communications in all NASA programs has been noted in the past and is not unique to the X-33 and X-34 programs. We and NASA's own investigative teams have found and noted similar problems with other NASA programs, such as the Propulsion Module for the International Space Station and several other projects, including the two failed Mars missions. NASA's Space Launch Initiative Program would benefit from lessons learned from past mishaps. At the request of the House Science Committee, we are undertaking a review of NASA's lessons learned process and procedures. The principal objectives of this review are to determine (1) how NASA captures and disseminates lessons learned and (2) if NASA is effectively applying lessons learned toward current programs and projects. We will report the results of our evaluation in December of this year. The third challenge is establishing performance measures that can accurately gauge the progress being made by NASA and its contractors. NASA officials told us that they plan to periodically reassess the assumptions underlying key program objectives to ensure that the rationale for developing specific technology applications merits continued support. They also told us that they were in the process of establishing such metrics to measure performance. Ensuring that the results from the 2nd Generation Program will support a future decision to develop reusable launch vehicles also deserves attention in NASA's annual Performance Plan.
The plan would be strengthened by recognizing the importance of clearly defined indicators that demonstrate that NASA is (1) on a path leading to an operational reusable launch vehicle and (2) making progress toward its objective of significantly reducing launch costs, and increasing safety and reliability compared to existing systems. Affected NASA Enterprise and Center performance plans would also be strengthened with the development of related metrics. Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other Members of the Subcommittee may have. We interviewed officials at NASA headquarters in Washington, D.C., NASA's Marshall Space Flight Center, Huntsville, Alabama, and at the NASA X-33 program office in Palmdale, California, to (1) determine the primary program management factors that contributed to the difficulties experienced in the X-33 and X-34 programs, and (2) identify steps that need to be taken to avoid repeating those problems within the Space Launch Initiative framework. We also talked to representatives of NASA's Independent Program Assessment Office located at the Langley Research Center, Hampton, Virginia, and the OIG located at NASA headquarters and Marshall Space Flight Center. At these various locations we obtained and analyzed key program, contractual, and procurement documentation for the X-33, X-34, and 2nd Generation programs. Further, we reviewed reports issued by NASA's OIG and Independent Program Assessment Office pertaining to the management and execution of the X-33 and X-34 programs, and NASA Advisory Council minutes regarding NASA's efforts to develop reusable launch vehicles. In addition, we reviewed other NASA internal reports documenting management issues associated with program formulation and implementation of other NASA programs. We also reviewed applicable NASA policy regarding how NASA expects its programs and projects to be implemented and managed. We conducted our review from August 2000 to June 2001 in accordance with generally accepted government auditing standards. | This testimony discusses the National Aeronautics and Space Administration's (NASA) X-33 and X-34 reusable launch vehicle programs. The two programs experienced difficulties achieving their goals primarily because NASA did not develop realistic cost estimates, timely acquisition and risk management plans, and adequate and realistic performance goals. In particular, neither program fully (1) assessed the costs associated with developing new, unproven technologies, (2) provided for the financial reserves needed to deal with technical risks and accommodate normal development delays, (3) developed plans to quantify and mitigate the risks to NASA, or (4) established performance targets showing a clear path leading to an operational reusable launch vehicle. As a result, both programs were terminated. Currently, NASA is in the process of taking steps in the Second Generation Reusable Launch Vehicle Program to help avoid problems like those encountered in the X-33 and X-34 programs. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
ERM allows management to understand an organization's portfolio of top-risk exposures, which could affect the organization's success in meeting its goals. As such, ERM is a decision-making tool that allows leadership to view risks from across an organization's portfolio of responsibilities. ERM recognizes how risks interact (i.e., how one risk can magnify or offset another risk), and also examines the interaction of risk treatments (actions taken to address a risk), such as acceptance or avoidance. For example, treatment of one risk in one part of the organization can create a new risk elsewhere or can affect the effectiveness of the risk treatment applied to another risk. ERM is part of overall organizational governance and accountability functions and encompasses all areas where an organization is exposed to risk (financial, operational, reporting, compliance, governance, strategic, reputation, etc.). In July 2016, OMB updated its Circular No. A-123 guidance to establish management's responsibilities for ERM, as well as updates to internal control in accordance with Standards for Internal Control in the Federal Government. OMB also updated Circular No. A-11, Preparation, Submission, and Execution of the Budget, in 2016 and refers agencies to Circular No. A-123 for implementation requirements for ERM. Circular No. A-123 guides agencies on how to integrate organizational performance and ERM to yield an "enterprise-wide, strategically-aligned portfolio view of organizational challenges that provides better insight about how to most effectively prioritize resource allocations to ensure successful mission delivery." The updated requirements in Circulars A-123 and A-11 help modernize existing management efforts by requiring agencies to implement an ERM capability coordinated with the strategic planning and strategic review process established by the GPRA Modernization Act of 2010 (GPRAMA), and with the internal control processes required by the Federal Managers' Financial Integrity Act (FMFIA) and in our Standards for Internal Control in the Federal Government. This integrated governance structure is designed to improve mission delivery, reduce costs, and focus corrective actions towards key risks. More specifically, Circular No. A-123 discusses both internal control and ERM and how these fit together to manage agency risks. Our Standards for Internal Control in the Federal Government describes internal control as a process put in place by an entity's oversight body, management, and other personnel that provides reasonable assurance that objectives related to operations, compliance, and reporting will be achieved, and serves as the first line of defense in safeguarding assets. Internal control is also part of ERM and used to manage or reduce risks in an organization. Prior to implementing ERM, risk management focused on traditional internal control concepts to manage risk exposures. Beyond traditional internal controls, ERM promotes risk management by considering its effect across the entire organization and how it may interact with other identified risks. ERM also addresses other topics such as setting strategy, governance, communicating with stakeholders, and measuring performance, and its principles apply at all levels of the organization and across all functions. Implementation of OMB circulars is expected to engage all agency management, beyond the traditional ownership of A-123 by the Chief Financial Officer community.
Circular No. A-123 requires leadership from the agency Chief Operating Officer (COO) and Performance Improvement Officer (PIO) or other senior official with responsibility for the enterprise, and close collaboration across all agency mission and mission-support functions. The A-123 guidance also requires agencies to create a risk profile that helps them identify and assess risks arising from mission and mission-support operations, and consider those risks as part of the annual strategic review process. Circular A-123 requires that agencies' risk profiles include risks to strategic, operations, reporting, and compliance objectives. A federal interagency group of ERM practitioners developed a Playbook released through the Performance Improvement Council (PIC) and the Chief Financial Officers Council (CFOC) in July 2016 to provide federal agencies with a resource to support ERM. In particular, the Playbook assists them in implementing the required elements in the updated A-123 Circular. To assist agencies in better assessing challenges and opportunities from an enterprise-wide view, we have updated our risk management framework, first published in 2005, to more fully include recent experience and guidance, as well as specific enterprise-wide elements. As mentioned previously, our 2005 risk management framework was developed in the context of risks associated with homeland security and combating terrorism. However, increased attention to ERM concepts and their applicability to all federal agencies and missions led us to revise our risk framework to incorporate ERM concepts that can help leaders better address uncertainties in the federal environment, changing and more complex operating environments due to technology and other global factors, the passage of GPRAMA and its focus on overall performance improvement, and stakeholders seeking greater transparency and accountability. For many similar reasons, the Committee of Sponsoring Organizations of the Treadway Commission (COSO) initiated an effort to update its ERM framework for 2016, and the International Organization for Standardization (ISO) plans to update its ERM framework in 2017. Further, as noted, OMB has now incorporated ERM into Circulars A-11 and A-123 to help improve overall agency performance. We identified six essential elements to assist federal agencies as they move forward with ERM implementation. Figure 1 below shows how ERM's essential elements fit together to form a continuing process for managing enterprise risks. The absence of any one of the elements below would likely result in an agency incompletely identifying and managing enterprise risk. For example, if an agency did not monitor risks, then it would have no way to ensure that it had responded to risks successfully. There is no "one right" ERM framework that all organizations should adopt. However, agencies should include certain essential elements in their ERM program. Below we describe each essential element in more detail, why it is important, and some actions necessary to successfully build an ERM program. 1. Align the ERM process to agency goals and objectives. Ensure the ERM process maximizes the achievement of agency mission and results. Agency leaders examine strategic objectives by regularly considering how uncertainties, both risks and opportunities, could affect the agency's ability to achieve its mission.
ERM subject matter specialists confirmed that this element is critical because the ERM process should support the achievement of agency goals and objectives and provide value for the organization and its stakeholders. By aligning the ERM process to the agency mission, agency leaders can address risks via an enterprise-wide, strategically-aligned portfolio rather than addressing individual risks within silos. Thus, agency leaders can make better, more effective decisions when prioritizing risks and allocating resources to manage risks to mission delivery. While leadership is integral throughout the ERM process, it is an especially critical component of aligning ERM to agency goals and objectives because senior leaders have an active role in strategic planning and accountability for results. 2. Identify risks. Assemble a comprehensive list of risks, both threats and opportunities, that could affect the agency's ability to achieve its goals and objectives. This element of ERM systematically identifies the sources of risks as they relate to strategic objectives by examining internal and external factors that could affect their accomplishment. It is important to recognize that risks can be either opportunities for, or threats to, accomplishing strategic objectives. The literature we reviewed, as well as subject matter specialists, pointed out that identifying risks in any organization is challenging for employees, as they may be concerned about reprisals for highlighting "bad news." Risks to objectives can often be grouped by type or category. For example, a number of risks may be grouped together in categories such as strategic, program, operational, reporting, reputational, technological, etc. Categorizing risks can help agency leaders see how risks relate and to what extent the sources of the risks are similar. The risks are linked to relevant strategic objectives and documented in a risk register or some other comprehensive format that also identifies the relevant source and a risk owner to manage the treatment of the risk. Comprehensive risk identification is critical even if the agency does not control the source of the risk. The literature and subject matter specialists we consulted told us that it is important to build a culture where all employees can effectively raise risks. It is also important for the risk owner to be the person who is most knowledgeable about the risk, as this person is likely to have the most insight about appropriate ways to treat the risk. 3. Assess risks. Examine risks considering both the likelihood of the risk and the impact of the risk on the mission to help prioritize risk response. Agency leaders, risk owners, and subject matter experts assess each risk by assigning the likelihood of the risk's occurrence and the potential impact if the risk occurs. It is important to use the best information available to make the risk assessment as realistic as possible. Risk owners may be in the best position to assess risks. Risks are ranked based on organizational priorities in relation to strategic objectives. Agencies need to be familiar with the strengths of their internal control when assessing risks to determine whether the likelihood of a risk event is higher or lower based on the level of uncertainty within the existing control environment. Senior leaders determine if a risk requires treatment or not.
Some identified risks may not require treatment at all because they fall within the agency's risk appetite, defined as how much risk the organization is willing to accept relative to mission achievement. The literature we reviewed and subject matter specialists noted that integrating ERM efforts with strategic planning and organizational performance management would help an organization more effectively assess its risks with respect to the impact on the mission. 4. Select risk response. Select a risk treatment response (based on risk appetite) including acceptance, avoidance, reduction, sharing, or transfer. Agency leaders review the prioritized list of risks and select the most appropriate treatment strategy to manage the risk. When selecting the risk response, subject matter experts noted that it is important to involve stakeholders that may also be affected, not only by the risk, but also by the risk treatment. Subject matter specialists also told us that when agencies discuss proposed risk treatments, they should also consider treatment costs and benefits. Not all treatment strategies manage the risk entirely; there may be some residual risk after the risk treatment is applied. Senior leaders need to decide if the residual risk is within their risk appetite and if additional treatment will be required. The risk response should also fit into the management structure, culture, and processes of the agency, so that ERM becomes an integral part of regular management functions. One subject matter specialist suggested that "maximize opportunity" should also be included as a risk treatment response, so that leaders may capture the positive outcomes or opportunities associated with some risks. 5. Monitor risks. Monitor how risks are changing and if responses are successful. After implementing the risk response, agencies must monitor the risk to help ensure that the entire risk management process remains current and relevant. The literature we reviewed also suggests using a risk register or other comprehensive risk report to track the success of the treatment for managing the risk. Senior leaders and risk owners review the effectiveness of the selected risk treatment and change the risk response as necessary. Subject matter specialists noted that a good practice includes continuously monitoring and managing risks. Monitoring should be a planned part of the ERM process and can involve regular checking as part of management processes or part of a periodic risk review. Senior leaders also could use performance measures to help track the success of the treatment and whether it has had the desired effect on the mission. 6. Communicate and report on risks. Communicate risks to stakeholders and report on the status of addressing the risks. Communicating and reporting risk information informs agency stakeholders about the status of identified risks and their associated treatments, and assures them that agency leaders are managing risk effectively. In a federal setting, communicating risk is important because of the additional transparency expected by Congress, taxpayers, and other relevant stakeholders. Communicating risk information through a dedicated risk management report or integrating risk information into existing organizational performance management reports, such as the annual performance and accountability report, may be useful ways of sharing progress on the management of risk.
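To make elements 2 through 4 concrete, the following minimal Python sketch shows one way a risk register, likelihood-and-impact scoring, and an appetite-based treatment decision could fit together. It is an illustration only: the field names, the 1-to-5 scales, the multiplicative score, and the numeric appetite threshold are all assumptions made for the sketch, not values prescribed by Circular No. A-123 or the Playbook.

from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"
    AVOID = "avoid"
    REDUCE = "reduce"
    SHARE = "share"
    TRANSFER = "transfer"

RISK_APPETITE = 6  # hypothetical threshold on the 1-25 score scale below

@dataclass
class Risk:
    description: str
    objective: str    # element 1: link each risk to a strategic objective
    category: str     # e.g., strategic, operational, reporting, compliance
    owner: str        # element 2: the person most knowledgeable about the risk
    likelihood: int   # element 3: 1 (rare) to 5 (almost certain)
    impact: int       # element 3: 1 (negligible) to 5 (severe)

    @property
    def score(self) -> int:
        # Simple multiplicative scoring; an agency may weight factors differently.
        return self.likelihood * self.impact

def select_response(risk: Risk, proposed: Treatment) -> Treatment:
    # Element 4: risks within appetite can simply be accepted.
    return Treatment.ACCEPT if risk.score <= RISK_APPETITE else proposed

register = [
    Risk("Legacy system outage", "Timely service delivery", "operational",
         "CIO", likelihood=3, impact=5),
    Risk("Key-skill attrition", "Mission-ready workforce", "strategic",
         "CHCO", likelihood=2, impact=2),
]

# Rank risks by exposure for leadership review, then record a treatment decision.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.description}: "
          f"{select_response(risk, Treatment.REDUCE).value}")

Under this sketch, monitoring (element 5) would periodically re-score each risk and compare residual scores against the appetite, and reporting (element 6) would roll the register up for stakeholders.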
The literature we reviewed showed, and subject matter specialists confirmed, that sharing risk information is a good practice. However, concerns may arise about sharing overly specific information or risk responses that would rely on sensitive information. Safeguards should be put in place to help secure information that requires careful management, such as information that could jeopardize security, safety, health, or fraud prevention efforts. In this case, agencies can help alleviate concerns by establishing safeguards, such as communicating risk information only to appropriate parties, encrypting sensitive data, authorizing users' level of rights and privileges, and providing information on a need-to-know basis. We identified six good practices that nine agencies are implementing that illustrate ERM's essential elements. The selected good practices are not all inclusive, but represent steps that federal agencies can take to initiate and sustain an effective ERM process, as well as practices that can apply to more advanced agencies as their ERM processes mature. We expect that as federal experiences with ERM evolve, we will be able to refine these practices and identify additional ones. Below in table 1, we identify the essential elements of ERM and the good practices that support each particular element that agencies can use to support their ERM programs. The essential elements define what ERM is, and the good practices and case illustrations described in more detail later in this report provide ways that agencies can effectively implement ERM. The good practices may fit with more than one essential element, but are shown in the table next to the element to which they most closely relate. The following examples illustrate how selected agencies are guiding and sustaining ERM strategy through leadership engagement. These include how they have designated an ERM leader or leaders, committed organizational resources to support ERM, and set organizational risk appetite. This good practice relates most closely to Align ERM Process to Goals and Objectives, as shown in table 1. According to the CFOC and PIC Playbook, strong leadership at the top of the organization, including active participation in oversight, is extremely important for achieving success in an ERM program. To manage ERM activities, leadership may choose to designate a Chief Risk Officer (CRO) or other risk champion to demonstrate the importance of risk management to the agency and to implement and manage an effective ERM process across the agency. The CRO role includes leading the ERM process; involving those that need to participate and holding them accountable; ensuring that ERM reviews take place regularly; obtaining resources, such as data and staff support if needed; and ensuring that risks are communicated appropriately to internal and external stakeholders, among other things. For example, at the Transportation Security Administration (TSA), the CRO serves as the principal advisor on all risks that could affect TSA's ability to perform its mission, according to the August 2014 TSA ERM Policy Manual. The CRO reports directly to the TSA Administrator and the Deputy Administrator.
In conjunction with the Executive Risk Steering Committee (ERSC), composed of Assistant Administrators who lead TSA's program and management offices, the CRO leads TSA in conducting regular enterprise risk assessments of TSA business processes or programs, and overseeing processes that identify, assess, prioritize, respond to, and monitor enterprise risks. Specifically, the August 2014 TSA ERM Policy Manual describes the ERSC's role to "oversee the development and implementation of processes used to analyze, prioritize, and address risks across the agency including terrorism threats facing the transportation sector, along with non-operational risks that could impede its ability to achieve its strategic objectives." The TSA CRO told us that the ERSC provides an opportunity for all Assistant Administrators to get together to have risk conversations. For example, the CRO recently recommended that the ERSC add implementation of the agency's new financial management system to the risk register. According to the CRO, the system's implementation was viewed as the responsibility of the Chief Financial Officer (CFO) and Chief Information Officer (CIO). However, the implementation needed to be managed at the enterprise level because if it was not successfully implemented, the entire enterprise would be affected. The CRO proposed adding the implementation of the new financial management system to the TSA risk register to give the issue broader visibility. The ERSC unanimously concurred with the recommendation, and staff from the Office of Finance and Administration—the risk owner—will brief the ERSC periodically on the status of the effort. According to TSA's ERM Policy Manual, the CRO leads the overall ERM process, while ERSC members bring knowledge and expertise from their individual organizations to help identify and prioritize risks and opportunities in TSA's overall approach to operations. While the CRO and ERSC play critical roles in ERM oversight, the relevant program offices still own risks and execute risk management, according to the TSA ERM Policy Manual. To launch and sustain a successful ERM program, organizational resources are needed to help implement leadership's vision of ERM for the agency and ensure its ongoing effectiveness. For example, when the Department of Education's Office of Federal Student Aid (FSA) began its ERM program in 2004, the Chief Operating Officer (COO) decided to hire a CRO and give him full responsibility to establish the ERM organization and program and implement it across the organization. According to documents we reviewed, the CRO dedicated resources to define the goal and purpose of the ERM program and met with key leaders across the agency to socialize the program. Agency leadership hired staff to establish the ERM program and provided risk management training to business unit senior leaders and their respective staff. Our review of documents shows that FSA continues to provide ERM training to senior staff and all FSA employees and also participates in an annual FSA Day, so employees can learn more about all business units across FSA, including the Risk Management Office and its ERM implementation. In September 2016, the FSA CRO told us that the Risk Management Office had a staff of 19 full-time equivalent (FTE) employees. FSA continues to provide resources to its ERM program and has subsequently structured its leadership by involving two senior leaders and a risk management committee to manage ERM processes.
According to the CRO, its risk committee guides the ERM process, tracks the agency's progress in managing risks, and increases accountability for outcomes. The CRO, the Chairman of the Risk Management Committee, and the Senior Risk Advisor all report directly to the FSA COO. The CRO manages the day-to-day aspects of assessing risks for various internal FSA operations, programs, and initiatives, as well as targeting risk assessments on specific high-risk issues, such as the closing of a large for-profit school. The Chairman of the Risk Management Committee and the Senior Risk Advisor advise the COO by identifying and analyzing external risks that could affect the accomplishment of FSA's strategic objectives. The Senior Risk Advisor also gathers and disseminates information internally that relates to FSA risk issues, such as cybersecurity or financial issues. In addition, he serves as the Chair of the Risk Management Committee and leads its monthly meetings. Other senior leaders and members involved with the Risk Management Committee were drawn from across the agency and demonstrate the importance of ERM to FSA. Specifically, the committee is chaired by the independent senior risk advisor and composed of the CRO, COO, CFO, Chief Information Officer (CIO), General Manager of Acquisitions, Chief Business Operations Officer, Chief of Staff, Chief Compliance Officer, Deputy COO, and Chief Customer Experience Officer, and meets monthly. Agency officials said that the participation of the COO, along with that of the other functional chiefs, indicates ERM's importance and the commitment of staff, namely these executives, in the effort. Developing an agency risk appetite requires leadership involvement and discussion. The organization should develop a risk appetite statement and embed it in policies, procedures, decision limits, training, and communication, so that it is widely understood and used by the agency. Further, the risk appetite may vary for different activities depending on the expected value to the organization and its stakeholders. To that end, the National Institute of Standards and Technology (NIST) ERM Office surveyed its 33-member senior leadership team to measure risk appetite among its senior leaders. Without a clearly defined risk appetite, NIST could be taking risks well beyond management's comfort level, or passing up strategic opportunities by assuming its leaders were risk averse. The survey objectives were to "assess management familiarity and use of risk management principles in day-to-day operations and to solicit management perspectives and input on risk appetite, including their opinions on critical thresholds that will inform the NIST enterprise risk criteria." Survey questions focused on the respondent's self-reported understanding of a variety of risk management concepts and asked respondents to rate how they consider risk with respect to management, safety, and security. The survey assessed officials' risk appetite across five areas: NIST Goal Areas, Strategic Objectives, Core Products and Services, Mission Support Functions, and Core Values. See figure 2 for the rating scale that NIST used to assess officials' appetite for risk in these areas. The survey results revealed a disconnect between the existing and desired risk appetite for mission support functions. According to NIST officials, respondents believed the bureau needed to accept more risk to allow for innovation within mission support functions.
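A survey like NIST's can be summarized very simply. The sketch below aggregates hypothetical (existing, desired) appetite ratings by area and flags the largest gaps; the 1-to-5 scale, the sample responses, and the gap threshold are all invented for illustration, since NIST's actual rating scale appears in figure 2 and is not reproduced here.

from statistics import mean

# Hypothetical responses on a 1 (very risk averse) to 5 (risk seeking) scale,
# recorded as (existing, desired) pairs per senior leader.
survey = {
    "Mission Support Functions": [(2, 4), (1, 3), (2, 4)],
    "Core Products and Services": [(3, 3), (2, 3), (3, 3)],
}

for area, pairs in survey.items():
    existing = mean(p[0] for p in pairs)
    desired = mean(p[1] for p in pairs)
    gap = desired - existing
    flag = "  <- revisit appetite" if abs(gap) >= 1 else ""
    print(f"{area}: existing {existing:.1f}, desired {desired:.1f}{flag}")

Run on these sample responses, the sketch would flag Mission Support Functions, mirroring the disconnect the NIST survey surfaced.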
According to agency officials, to better align risk appetite with mission needs, the NIST Director tasked the leadership team with developing risk appetite levels for those areas with the greatest disagreement between the existing and desired risk appetite, while still remaining compliant with laws and regulations. Agency officials told us the NIST ERM Office plans to address this topic via further engagement with senior managers and subject matter experts. The following examples illustrate how selected agencies are developing a risk-informed culture, including how they have encouraged employees to discuss risks openly, trained employees on the ERM approach, engaged employees in ERM efforts, and customized ERM tools for organizational mission and culture. This good practice relates most closely to Identify Risks, one of the Essential Elements of Federal Government ERM shown in table 1. Successful ERM programs find ways to develop an organizational culture that allows employees to openly discuss and identify risks, as well as potential opportunities to enhance organizational goals or value. The CFOC and PIC Playbook also supports the notion that once ERM is built into the agency culture, the agency can learn from managed risks, or near misses, using them to improve how it identifies and analyzes risk. For example, officials at the Department of Commerce (Commerce) sought to embed a culture of risk awareness across the department by defining cascading roles of leadership and responsibility for ERM across the department and for its 12 bureaus. Additionally, an official noted that Commerce leveraged this structure to share bureau best practices; develop a common risk lexicon; and address cross-bureau risks, issues, and concerns regarding ERM practice and implementation. According to the updated ERM program policy, these roles should support the ERM program and promote a risk management culture. They also help promote transparency, oversight, and accountability for a successful ERM program. Table 2 shows the ERM roles and set of responsibilities within Commerce and how they support a culture of risk awareness at each level. To successfully implement and sustain ERM, it is critical that staff, at all levels, understand how the organization defines ERM, its subsequent ERM approach, and organizational expectations on their involvement in the ERM process. As previously stated, the CFOC and PIC Playbook also supports risk awareness: once ERM is built into the agency culture, agencies can learn from managed risks and near misses when risks materialize, and then use those lessons to improve the process of identifying and analyzing risk in the future. Further, the Playbook suggests that this culture change can only occur if top agency leaders champion ERM and encourage the flow of information needed for effective decision making. For example, to promote cultural change and encourage employees to raise risks, the Department of Housing and Urban Development's (HUD) Office of Public and Indian Housing (PIH) trained about half of its 1,500 employees in 2015. Agency officials told us that they plan to expand on the 2015 training and provide training to all PIH employees after 2016. The in-person PIH training includes several features of our identified ERM good practices, such as leadership support and the importance of developing a risk-informed culture. For example, the Principal Deputy Assistant Secretary for PIH was visibly involved in the training and kicked off the first of the five training modules using a video emphasizing ERM.
The training contained discussions and specific exercises dedicated to the importance of raising and assessing risks and understanding the leadership and employee roles in ERM. The first training module emphasized the factors that can support ERM by highlighting the following cultural characteristics: ERM requires a culture that supports the reporting of risks; ERM requires a culture of open feedback; a risk-aware culture enables all HUD staff to speak up and then be listened to by decision-makers; and leadership encourages the sharing of risks. By focusing on the importance of developing a risk-aware culture in the first ERM training module, PIH officials emphasized that ERM requires a cultural transformation for its success. To enable all employees to participate and benefit from the training, PIH officials recorded the modules and made them available on YouTube. Our literature review found that building a risk-aware culture supports the ERM process by encouraging staff across the organization to feel comfortable raising risks. Involving employees in identifying risks allows the agency to increase risk awareness and generate solutions to those identified risks. Some ways to strengthen this culture include the presence of risk management communities of practice, the development and dissemination of a risk lexicon agencywide, and conducting forums that enable frontline staff to raise risk-related strategic or operational concerns with leadership and senior management. For example, TSA's Office of the Chief Risk Officer (OCRO) has sponsored a number of activities related to raising risk awareness. First, TSA has established a risk community of interest open to any employee in the organization, and has hosted speakers on ERM topics. These meetings have provided an opportunity for employees across the administration to learn and discuss risks and become more knowledgeable about the types of issues that should be raised to management. Second, TSA created a risk lexicon, so that all staff involved with ERM would use and understand risk terminology similarly. The lexicon describes core concepts and terms that form the basis for the TSA ERM framework. TSA incorporated the ERM lexicon into the TSA ERM Policy Manual. Third, in January 2016, TSA started a vulnerability management process for offices and functions with responsibility for identifying or addressing security vulnerabilities. Officials told us that this new process is intended to help raise risks from the bottom up so that they receive top-level monitoring. According to the December 2015 TSA memo we reviewed, the process centralizes tracking of vulnerability mitigation efforts with the CRO, creates a central repository for vulnerability information and tracking, provides executive engagement and oversight of enterprise vulnerabilities by the ERSC, promotes cross-functional collaboration across TSA offices, and requires the collaboration of Assistant Administrators and their respective staff across the agency. See figure 3 below for an overview of how TSA's vulnerability management process is intended to work. The CRO told us that employees from all levels can report risks with broader, enterprise-level application to the OCRO. Once the OCRO decides the risks are at an enterprise level, the office assembles a working group and submits ideas to the ERSC to decide at what level they should be addressed. The risk is then assigned to an executive who will be required to provide a status update.
Fourth, officials in the TSA OCRO told us that TSA has established points of contact in every program office, referred to as ERM Liaisons. Each ERM Liaison is a senior-level official who represents his or her program office in all ERM-related activities. TSA also provided risk management awareness training to headquarters and field supervisors that covered topics such as risk-based decision-making, risk assessment, and situational awareness. Officials told us they are also embedding ERM principles into existing training, so that employees will understand how ERM fits into TSA operations. Customizing ERM tools and templates can help ensure risk management efforts fit agency culture and operations. For example, NIST tailored certain elements of the Commerce ERM framework to better reflect the bureau's risk thresholds. Commerce has developed a set of standard risk assessment criteria to help identify and rate risks, referred to as the Commerce ERM Reference Card. NIST officials reported that some of the safety and security terms used at Commerce differed from the terms used at NIST and required tailoring to map to NIST's existing safety risk framework, which is a heavily embedded component of NIST operations and culture. To better align to NIST, the NIST ERM Program split safety and security risks into distinct categories when establishing a tailored ERM framework for the bureau (see table 3). According to agency officials, the NIST ERM Reference Card also leverages American National Standards Institute guidelines, so it does not introduce another separate and potentially conflicting set of terms. Officials told us that these adaptations to the NIST ERM framework help maintain continuity with the Commerce framework, but reflect the particular mission, needs, and culture of NIST. The following examples illustrate how selected agencies are integrating ERM capability to support strategic planning and organizational performance management. These include how they have incorporated ERM into strategic planning processes and used ERM to improve information for agency decisions. This good practice most closely relates to Assess Risks, one of the Essential Elements of Federal Government ERM, shown in table 1. Through ERM, an agency looks for opportunities that may arise out of specific situations, assesses their risk, and develops strategies to achieve positive outcomes. In the federal environment, agencies can leverage the GPRAMA performance planning and reporting framework to help better manage risks and improve decision making. For example, the Department of the Treasury (Treasury) has integrated ERM into its existing strategic planning and management processes. According to our review of the literature and the subject matter specialists we interviewed, using existing processes helps to avoid creating overlapping processes. Further, by incorporating ERM this way, risk management becomes an integral part of setting goals, including agency priority goals (APGs), and ultimately of achieving an organization's desired outcomes. Agencies can use regular performance reviews, such as the quarterly performance reviews of APGs and the annual leadership-driven strategic objective review, to help increase attention on progress towards the outcomes agencies are trying to achieve. According to OMB Circular No. A-11, agencies are expected to manage risks and challenges related to delivering the organization's mission.
The agency’s strategic review is a process by which the agency should coordinate its analysis of risk using ERM to make risk-aware decisions, including the development of risk profiles as a component of the annual strategic review, identifying risks arising from mission and mission-support operations, and providing a thoughtful analysis of the risks an agency faces towards achieving its strategic objectives. Instituting ERM can help agency leaders make risk- aware decisions that affect prioritization, performance, and resource allocation. Treasury officials stated they integrated ERM into their quarterly performance or data-driven reviews and strategic reviews, both of which already existed. Officials stated this action has helped elevate and focus risk discussions. Staff from the management office and individual bureaus work together to complete the template slide, which is used to include a risk element in their performance reviews. As part of this process, they are assessing risk. See figure 4 for how risk is incorporated into Treasury’s quarterly performance review (QPR) template. Officials stated that they believe this approach to prepare for the data-driven review has helped improve outcomes at Treasury. For example, according to agency officials, Treasury used its QPR process to increase cybersecurity. Treasury officials also told us that during the fall and the spring, each Treasury bureau completes the data-driven review templates. Agency officials are to use the summer data-driven review as an opportunity to discuss budget formulation. In winter, they are to use the annual data- driven review to show progress towards achieving strategic objectives. According to agency officials, the strategic review examines and assesses risks identified as part of the data-driven reviews and aggregates and analyzes these results at the cross cutting strategic objective level, which helps improve agency performance. Integrating ERM into this existing data-driven review process avoids creating a duplicative process and increases the focus on risk. In another example, Treasury officials identified implementation of the Digital Accountability and Transparency Act of 2014 (DATA Act) both at Treasury and government-wide as a risk and established “Financial Transparency” as one of its two APGs for fiscal years 2016 and 2017. According to agency officials, incorporating risk management into the data-driven review process sends a signal about the importance of the DATA Act and brings additional leadership focus and scrutiny needed to successfully implement the law. The literature we reviewed notes that ERM contributes to leaders’ ability to identify risks and adjust organizational priorities to enhance decision- making efforts. For example, OPM has a Risk Management Council (RMC) that builds risk-review reporting and management strategies into existing decision making and performance management structures. This includes Performance Dashboards, APG reviews, and regular meetings of the senior management team, as is recommended by the CFOC and PIC Playbook. The RMC also uses an existing performance dashboard for strategic goal reviews as part of its ERM process and to help inform decisions as a result of these reviews. Officials told us they present their dashboards every 6 or 7 weeks to the Chief Management Officer (CMO) and RMC, as part of preparing for their data-driven reviews. Each project and its risks are mapped against the strategic plan. 
When officials responsible for a goal identify risks, they must also provide action plan strategies, timelines, and milestones for mitigating those risks. Figure 5 shows an OPM dashboard to illustrate how OPM tracks progress on a goal of preparing the federal workforce for retirement, for such a risk as an unexpected retirement surge, and documents mitigation strategies to address such events. According to agency officials, the CMO and RMC monitor high-level and high-visibility risks on a weekly basis. In August 2016, OPM officials told us they were monitoring five to seven major projects, such as information technology (IT) security implementation and retirement services processes. Each quarterly data-driven review includes an in-depth look into a specific goal and the examination of risks as part of the review. Officials told us that in the past 3 years, they have covered each of the strategic goals using the dashboard. According to officials, during one of these reviews, OPM identified a new risk related to having sufficient qualified contracting staff to meet the goal of effective and efficient IT systems. Since OPM considers contracting a significant component of that goal, it decided to create the Office of Procurement Operations to help increase attention to contracting staff. OPM officials told us they believe that, using ERM, they could better prioritize funding requests across the agency, ultimately balance limited resources, and make better-informed decisions. The following examples illustrate how selected agencies are establishing a customized ERM program within existing agency processes. These include how they have designed an ERM program that allows for customized agency fit, developed a consistent, routinized ERM program, and used a maturity model approach to build an ERM program. This good practice relates primarily to Identify Risk and Select Risk Response, two of the Essential Elements of Federal Government ERM shown in table 1. Effective ERM implementation starts with agencies establishing a customized ERM program that fits their specific organizational mission, culture, operating environment, and business processes but also contains the essential elements of an ERM framework. The CFOC and PIC Playbook focuses on the importance of a customized ERM program to meet agency needs. This involves taking into account policy concerns, mission needs, stakeholder interests and priorities, agency culture, and the acceptable level for each risk, both for the agency as a whole and for specific programs. For example, in 2004, the Department of Education’s (Education) Office of Federal Student Aid (FSA) began establishing a formal ERM program, based on the Committee of Sponsoring Organizations of the Treadway Commission (COSO) ERM Framework, to help address longstanding risks using customized implementation plans. More specifically, FSA’s framework and materials were customized to ensure that they would work within a government setting and capture the nuances of FSA’s business model. Agency officials told us that one reason they adopted a COSO-based model for ERM is that it was geared toward achieving an entity’s objectives and could be customized to meet FSA’s organizational needs as a performance-based organization. Thus, FSA adopted a three-phase approach that allowed for increased maturity over time, and customized it to help the organization adapt to the new program using a COSO-based methodology for risk management.
According to FSA documents, the first phase involved creating the ERM organization, designing a high-level implementation plan, and forming its enterprise risk committee to help support its first ERM efforts. The second phase involved creating a strategic plan and detailed project plan to implement ERM. For example, the original FSA ERM Strategic Plan contained an ERM vision statement (see textbox below) for aligning strategic risks with goals and objectives. The FSA Plan also provided its approach for identifying risks that could affect FSA’s ability to achieve these objectives.

Federal Student Aid Enterprise Risk Management Original Vision Statement: "Our vision is to create the premier Enterprise Risk Management Program in the Federal government. One that provides for an integrated view of risk across the entire Federal Student Aid organization; aligns strategic risks with the organization’s goals and objectives; ensures that risk issues are integrated into the strategic decision making process; and manages risk to further the achievement of performance goals."

During the initial implementation of FSA’s ERM program, the ERM strategic goals were to:
1. provide for an integrated view of risks across the organization,
2. ensure that strategic risks are aligned with strategic goals and objectives,
3. develop a progressive risk culture that fosters an increased focus on risk and awareness of related issues throughout the organization, and
4. improve the quality and availability of risk information across all levels of the organization, especially for executive management.

Finally, according to documents we reviewed, the third phase of FSA’s ERM implementation included developing enterprise-level risk reports, and advanced methods, tools, and techniques to monitor and manage risk. For example, the documents we reviewed showed that some of the key tools that supported FSA’s ERM implementation included ERM terminology, risk categories, risk ratings, and a risk-tracking system. These tools help FSA select an appropriate risk response that works with existing agency processes and culture. A consistent process for risk review that systematically categorizes risk helps leaders to ensure that the consideration of potential risk takes place. The CFOC and PIC Playbook suggests that organizations define risk categories to support their business processes and use these categories consistently. For example, to identify and review risks, the TSA Risk Taxonomy organizes risks into categories so the agency can consistently identify, assess, measure, and monitor risks across the organization, as discussed in the TSA Policy Manual. The TSA Risk Taxonomy captures the risks in all aspects of mission operations, business operations, governance, and information. Figure 6 lists each risk category that is reviewed. The taxonomy helps TSA both collect risks and identify the most critical, and helps ensure that the same vocabulary and categorization system are used across TSA. Officials report that they chose these categories to help break down organizational silos and help identify all types of risks. For example, they did not want "mission risk" to consider only the Federal Air Marshal Service and airport checkpoint screening. Rather, they wanted a broad understanding of risks across the various TSA components. TSA officials stated that they believe the taxonomy will be even more useful when TSA has an automated computer application to help analyze all similar and related risks across the enterprise.
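The kind of automated roll-up TSA hopes for could start from something as simple as a categorized risk register. A minimal sketch follows, using the four broad taxonomy areas named above; the individual risk entries and the 1-5 ratings are hypothetical assumptions, not TSA data.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    category: str  # one of the four broad taxonomy areas
    rating: int    # hypothetical criticality score, 1 (low) to 5 (high)

# Hypothetical register entries tagged with the taxonomy areas named above.
register = [
    Risk("Checkpoint screening equipment failure", "mission operations", 4),
    Risk("Screener hiring shortfall", "business operations", 3),
    Risk("Unclear risk ownership between offices", "governance", 2),
    Risk("Sensitive data exposure", "information", 5),
]

# Group by category and surface the most critical risk in each, the kind
# of cross-silo view the taxonomy is meant to enable.
for category in ["mission operations", "business operations", "governance", "information"]:
    in_category = [r for r in register if r.category == category]
    worst = max(in_category, key=lambda r: r.rating)
    print(f"{category}: most critical = {worst.name} (rating {worst.rating})")
```

Tagging every risk with a shared category vocabulary is what allows comparisons across components that would otherwise track risks in incompatible silos.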
In the ERM guidance provided in Circular No. A-123, OMB encourages agencies to use a maturity model approach. Results from our literature review and OMB suggested that a maturity model allows the organization to plan for continued agency improvement as its efforts mature. For example, to assist in implementing a department-wide ERM process, Commerce developed an ERM Maturity Assessment Tool (EMAT), as well as a comprehensive guidebook and other tools, to share with its 12 bureaus. The EMAT consists of 83 questions to help bureaus determine their ERM maturity (see figure 7 for a sample of EMAT questions). According to agency officials, bureaus are required to conduct EMAT assessments annually, and while the EMAT lays out the basic components of ERM, the bureaus may customize the tool to fit their respective organizations. Commerce expects the bureaus to demonstrate increased levels of maturity over time. Agency officials reported that overall, the level of maturity has increased since the program began. Discussions of the EMAT have allowed bureaus to learn from each other and identify strategies for addressing common challenges. According to officials, these challenges include documenting risk treatment plans and providing the rationale to support management’s risk mitigation choices. The following example illustrates how a selected agency is continuously managing risks, including how it has tracked and monitored current and emerging risks. This good practice most closely relates to Monitor Risks, one of the Essential Elements of Federal Government ERM shown in table 1. Continuously managing risk requires a systematic or routine risk review function to help senior leaders and other stakeholders accomplish the organizational mission. The CFOC and PIC Playbook recommends that risks be identified and assessed throughout the year as part of a regular process, including surveillance of leading risk indicators both internally and externally. For example, PIH has two risk management dashboards, which it uses to monitor and review risks. The Risk and Mitigation Strategies Dashboard shown in figure 8, according to PIH officials, helps them monitor risks and mitigation actions that PIH is actively pursuing. Officials told us that the Risk Division prepares and presents this dashboard to the Risk Committee quarterly. The dashboard provides a snapshot view for the current period, analysis of mitigation action to date, and trends for the projected risk. It tracks the highest-level risks to PIH as determined by the Risk Committee, along with the corresponding mitigation plans. Currently, officials told us, PIH is managing the top risks using the dashboard. Risk Division staff continually update the dashboard to concisely display the status of both risk and mitigation efforts. The second dashboard, the Key Risk Indicators Dashboard in figure 9, monitors external, future risks to PIH’s mission. Agency officials told us that the dashboard is used as an early-warning system for emerging risks, which the Risk Committee must address before the next annual risk assessment cycle begins. The dashboard includes a risk-level column that documents the residual risk, measured on a five-point scale with one being the lowest and five being the highest, assigned by the relevant Deputy Assistant Secretary and Risk Division staff. A trending column indicates whether the risk is projected to increase, decrease, or remain the same.
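A minimal sketch of how rows like these could be screened, using the five-point residual-risk scale and trend values just described; the risk names and the threshold of 4 are hypothetical assumptions, not PIH data.

```python
# Hypothetical key risk indicator rows modeled on the columns described
# above: a 1-5 residual risk level and a projected trend.
kri_rows = [
    {"risk": "Key system contract lapse", "level": 4, "trend": "increase"},
    {"risk": "Field office staff turnover", "level": 2, "trend": "decrease"},
    {"risk": "Aging program infrastructure", "level": 5, "trend": "same"},
]

def flag_for_committee(row, threshold=4):
    """Flag a row for early Risk Committee attention: high residual risk,
    or any risk projected to increase before the next annual cycle."""
    return row["level"] >= threshold or row["trend"] == "increase"

for row in kri_rows:
    if flag_for_committee(row):
        print(f"Review early: {row['risk']} (level {row['level']}, trend {row['trend']})")
```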
The dashboard also links to a document that summarizes the risk assessment, including the risks, the measures to address them, and the anticipated impact. The Risk Committee reviews the dashboard as needed, but not less than quarterly. These two dashboards show how an agency uses the continuous risk review cycle. The cycle allows leaders to treat risks until they are satisfied that each risk is under control or successfully managed. The following examples illustrate how selected agencies are sharing information with internal and external stakeholders to identify and communicate risks. These include how they have incorporated feedback on risks from internal and external stakeholders to better manage risks, and shared risk information across the enterprise. This good practice most closely relates to Communicate and Report on Risks in the Essential Elements of Federal Government ERM shown in table 1. Effective information and communication are vital for an agency to achieve its objectives, and this often involves multiple stakeholders, inside and outside the organization. ERM programs should incorporate feedback from internal and external stakeholders because their respective insights can help organizations identify and better manage risks. For example, the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) are creating and sharing inter-agency risk information as part of their joint management of the Joint Polar Satellite System (JPSS) program. JPSS is a collaborative effort between NOAA and NASA; the program was created with the President’s Fiscal Year 2011 Budget Request to acquire, develop, launch, operate, and sustain three polar-orbiting satellites. The purpose of the JPSS program is to replace aging polar satellites and provide critical environmental data used in forecasting such weather events as the path and intensity of a hurricane and in measuring climate variations. The two agencies have a signed memorandum of understanding to share ownership of risk that details the responsibilities for delivering the satellite and for overall cost and schedule performance. In particular, NOAA has overall responsibility for the cost and schedule of the program, as well as the entire JPSS program. NOAA manages the ground segment elements needed for data collection and distribution, while NASA manages the system acquisition, engineering, and integration of the satellite, as well as the JPSS Common Ground System. Because of this management arrangement, the JPSS program also required "joint" risk tracking and management. Other program documentation also points to the agencies’ close collaboration on risk management. The March 2014 JPSS Risk Management Plan describes how risk management practices are planned for consistency with NASA’s risk management requirements and outlines roles and responsibilities. NOAA officials stated that they share programmatic and technical information across the two agencies, and that certain high-level risks are elevated through Commerce quarterly. Our review of meeting agendas and presentations shows that NASA and NOAA officials met monthly as part of a NOAA-held Agency Program Management Council (APMC) to track JPSS’s progress and that of other satellite programs. These meetings also allowed participants to discuss and approve courses of action for top program risks.
During the APMC meetings, the JPSS program director presented status updates and other information, including risks. Participants discussed risks, cost, performance, schedule, and other relevant issues for each program. Sharing information helps promote trust within and outside of the organization, increases accountability for managing risks, and helps stakeholders understand the basis for identified risks and resulting treatment plans. Further, internal and external stakeholders may be able to provide new expertise and insight that can help organizations identify and better manage risks. Both the NASA Program Managers and the NOAA Program Director or their representatives attend meetings to discuss potential issues, according to NOAA officials. Each major satellite program also has an independent Standing Review Board. At defined program/project milestones, the Standing Review Board reviews relevant data, writes up its conclusions, presents an independent review of the program/project, and highlights key risks to the convening authorities. NOAA officials said that having a joint risk-sharing process established for JPSS and other joint programs allows them to elevate risks both internally up through the agency, and externally, more quickly and efficiently. For example, for another satellite program, NOAA had to reschedule its launch date due to a problem that arose with the launch service provider. After it became clear that the program was going to miss its schedule baseline, the issue was elevated up through NOAA. According to NOAA, NASA officials then explained to the APMC the steps they were taking to address the risk. As a result of having a process to elevate the risk, NOAA was able to discuss risks associated with the launch vehicle and how it planned to proceed with a new launch date range. According to NOAA officials, because the APMC discussion produced jointly developed information, that information could be passed on to Congress more quickly. When discussing potential risks, gathering input from across an enterprise helps to ensure decisions work for all agency groups affected. It also gives groups an opportunity to share any concerns or ideas that can improve outcomes. Appropriate and timely sharing of information within an organization ensures that risk information remains relevant, useful, and current. The CFOC and PIC Playbook also notes that informed decision making requires the flow of information regarding risks and clarity about uncertainties or ambiguities—up and down the hierarchy and across silos—to the relevant decision makers so they can make informed decisions. For example, IRS uses the Risk Acceptance Form and Tool (RAFT), as shown in figure 10, to document business decisions within a consistent framework. As part of the RAFT development process, IRS considers the views of internal and external stakeholders. According to agency officials, the RAFT assists IRS business units in making better risk-based decisions and elevating risks to the appropriate level. IRS officials said the RAFT also encourages units to consider how decisions may affect other units, as well as external stakeholders. As a result, business units often collaborate on key decisions by completing the RAFT, including considering and documenting risks associated with those decisions. According to IRS officials, the RAFT is used as a guide to articulate rationales behind decisions within the context of risk appetite and serves as a documentation trail to support these business decisions.
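Conceptually, a RAFT entry is a structured record of a decision, its risks, and the parties that must be coordinated with. The sketch below illustrates that idea only; every field name and value is hypothetical and does not reproduce IRS’s actual form.

```python
# Hypothetical structure for a risk-acceptance record in the spirit of the
# RAFT; none of these field names come from the actual IRS form.
raft_entry = {
    "decision": "Consolidate taxpayer correspondence printing",
    "rationale": "Savings fall within the accepted service-delivery risk appetite",
    "risks": [{"description": "Delayed notices during transition",
               "response": "Phased rollout with fallback capacity"}],
    "business_units_involved": ["Wage & Investment", "IT Services"],
    "external_stakeholders": ["Tax practitioners"],
}

def fully_coordinated(entry, signed_off_units):
    """True only if every involved business unit has signed off, echoing
    the coordination instruction quoted in the next paragraph."""
    return set(entry["business_units_involved"]) <= set(signed_off_units)

print(fully_coordinated(raft_entry, ["Wage & Investment"]))                 # False
print(fully_coordinated(raft_entry, ["Wage & Investment", "IT Services"]))  # True
```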
IRS officials told us that one goal of its ERM program is to look at risk across the enterprise rather than taking a narrow approach to risk management. This also applies when making risk-informed decisions, such as those that would be documented on a RAFT. As such, the RAFT includes the following instructions: "If the decision impacts or involves multiple organizations, coordinate with the respective points-of-contact to ensure all relevant information regarding the risk(s) are addressed in each section." The form also allows users to identify other business units involved in the decision and external stakeholders affected by the decision. We provided a draft of this report to the Office of Management and Budget (OMB) and the 24 Chief Financial Officer (CFO) Act agencies for review and comment. OMB staff provided us with oral comments and stated they generally agreed with the essential elements and good practices as identified in this report. They also provided technical comments that we incorporated as appropriate. We received written responses from the Social Security Administration (SSA) and the Department of Veterans Affairs (VA), reprinted in appendices II and III. SSA and VA neither agreed nor disagreed with our findings. However, VA suggested, among other things, that because of the high-level audience led by the Deputy Secretary, enterprise risk management should be monitored at a minimum as part of the quarterly reviews of Agency Priority Goals, and that monitoring risks more frequently should be emphasized as a practice that most agencies should follow. SSA stated that it is adopting the good practices identified in the report. Of the remaining 22 CFO Act agencies, we received technical comments from 10 agencies, which we incorporated as appropriate; 10 had no comments; and 2 did not respond. We are sending copies of this report to the Director of OMB as well as appropriate congressional committees and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In addition to the individual named above, William M. Reinsberg, Assistant Director, Carole J. Cimitile, Analyst-in-Charge, Shea Bader, Crystal Bernard, Amy Bowser, Alexandra Edwards, Ellen Grady, Erin E. Kennedy, Erik Kjeldgaard, Robert Gebhart, Sharon Miller, Anthony Patterson, Laurel Plume, Robert Robinson, Cynthia Saunders, Stewart W. Small, Katherine Wulff, and Jessica L. Yutzy made major contributions to this report. | Federal leaders are responsible for managing complex and risky missions. ERM is a way to assist agencies with managing risk across the organization. In July 2016, the Office of Management and Budget (OMB) issued an updated circular requiring federal agencies to implement ERM to ensure federal managers are effectively managing risks that could affect the achievement of agency strategic objectives. GAO's objectives were to (1) update its risk management framework to more fully include evolving requirements and essential elements for federal enterprise risk management, and (2) identify good practices that selected agencies have taken that illustrate those essential elements.
GAO reviewed literature to identify good ERM practices that generally aligned with the essential elements and validated these with subject matter specialists. GAO also interviewed officials representing the 24 Chief Financial Officer (CFO) Act agencies about ERM activities and reviewed documentation where available to corroborate officials' statements. GAO studied agencies' practices using ERM and selected examples that best illustrated the essential elements and good practices of ERM. GAO provided a draft of this report to OMB and the 24 CFO Act agencies for review and comment. OMB generally agreed with the report. Of the CFO Act agencies, 12 provided technical comments, which GAO included as appropriate; the others did not provide any comments. Enterprise Risk Management (ERM) is a forward-looking management approach that allows agencies to assess threats and opportunities that could affect the achievement of their goals. While there are a number of different frameworks for ERM, the figure below lists essential elements for an agency to carry out ERM effectively. GAO reviewed its risk management framework and incorporated changes to better address recent and emerging federal experience with ERM and identify the essential elements of ERM as shown below. GAO has identified six good practices to use when implementing ERM.
You are an expert at summarizing long articles. Proceed to summarize the following text:
In fiscal year 1997, Congress allocated $90 million for MAP to support the overseas promotion of agricultural goods such as grains, cotton, forest products, fruits, nuts, seafood, meat, alcoholic beverages, and processed goods. (See fig. I.1 in app. I for MAP appropriations since fiscal year 1986.) During fiscal year 1997, FAS provided MAP funds directly to 65 participating organizations consisting of 54 nonprofit agricultural trade associations, 5 nonprofit state regional groups, 2 state agencies, and 4 agricultural cooperatives. (See app. I, table I.1, for a list of fiscal year 1997 MAP participating organizations and their budgets). MAP funds can be used to support both generic promotions and brand-name promotions. In fiscal year 1997, about 76 percent of MAP’s budget supported generic promotions, with the remaining funds supporting brand-name promotions. Generic promotions are undertaken by nonprofit trade associations, state regional groups, and state agencies to increase demand for a specific commodity with no emphasis on a particular brand, for example, U.S. peas and lentils, catfish, and cotton. Brand-name promotions, on the other hand, are conducted by companies and cooperatives to establish consumer loyalty for their brand-name products. Trade associations and others using MAP funds to support generic promotions must contribute at least 10 percent of the promotion cost; entities using MAP funds to support brand-name promotions must make a minimum 50 percent contribution. In order to receive MAP funds, participating organizations must submit, and FAS must approve, marketing plans specifically describing the manner in which MAP assistance will be expended. Under these plans, the MAP funds may be spent by participating organizations themselves (direct) and/or redistributed to entities that have applied to participating organizations for MAP assistance (indirect). In fiscal year 1997, there were 453 individual companies and 20 cooperatives that indirectly received assistance for brand-name promotions. These companies and cooperatives applied for MAP funds through 19 participating organizations. Eligible MAP expenses include production and distribution of advertising and promotional materials (for example, posters, recipes, and brochures); in-store and food service promotions; product demonstrations; and fees for participation in exhibits. Funds used to support generic promotions may only be spent on the generic aspects of a campaign rather than on any promotional material or advertising that specifies a single company or brand. MAP supported generic and brand-name promotions in 100 countries during fiscal year 1997; 10 country markets accounted for 65 percent of the funds (see app. I, fig. I.2, for top country markets). With regard to the MAP brand-name program, a total of 475 companies and cooperatives received assistance in fiscal year 1997. The amount of MAP funds awarded to each ranged from $1,500 to $2.6 million; however, almost half of the awards were in amounts less than $25,000 (see app. I, table I.2, for size of fiscal year 1997 MAP awards). FAS’ Strategic Plan 1997-2002 contains estimates of the economic impact of FAS foreign market promotion programs, including MAP. This plan fulfills the requirement established under the Government Performance and Results Act of 1993 (Results Act) (P.L. 103-62) whereby federal agencies must prepare strategic and annual performance plans covering the program activities set out in the agencies’ budgets. 
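Stepping back to the cost-share rules described earlier (a minimum 10 percent participant contribution for generic promotions and 50 percent for brand-name promotions), the eligibility arithmetic reduces to a simple check. The sketch below is illustrative only, with hypothetical dollar amounts; it does not describe FAS’s actual review procedures.

```python
# Statutory minimum participant contributions, as described above.
MIN_SHARE = {"generic": 0.10, "brand": 0.50}

def meets_cost_share(promotion_type, total_cost, contribution):
    """Return True if the participant's own contribution meets the
    minimum share of total promotion cost for this promotion type."""
    return contribution >= MIN_SHARE[promotion_type] * total_cost

# Hypothetical promotions: a $200,000 generic campaign with a $25,000
# contribution qualifies; a $100,000 brand-name campaign with $40,000 does not.
print(meets_cost_share("generic", 200_000, 25_000))  # True
print(meets_cost_share("brand", 100_000, 40_000))    # False
```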
For purposes of conducting cost/benefit analyses of government programs, OMB has established guidelines. Since fiscal year 1994, FAS has significantly increased the number of small businesses participating in MAP’s brand-name program as well as their share of MAP funds. In fiscal year 1996, as required by statute, FAS discontinued providing direct assistance to large businesses other than cooperatives and certain associations, which by law are eligible to receive this assistance for brand-name promotions regardless of size. In fiscal year 1998, FAS eliminated MAP funding for brand-name promotions by large companies entirely, by prohibiting their indirect participation in the brand-name program. Congress enacted legislation in 1993 directing FAS to give priority to small businesses when allocating MAP funds for brand-name promotions (P.L. 103-66). FAS requires businesses that are applying for brand-name assistance to certify that they are a small-sized entity based on their own assessment using the Small Business Administration’s (SBA) criteria. Since fiscal year 1994, FAS has increased the number of small businesses participating in MAP’s brand-name program and raised the total amount allocated to small businesses while decreasing the total amount allocated to large companies. The number of small businesses participating in the MAP brand-name program increased from 312 to 370 between fiscal years 1994 and 1997. Also, the share of MAP brand-name program funds allocated to small businesses increased from 41 percent to 61 percent, and the share allocated to large companies decreased from 35 percent to 16 percent during that period (see fig. 1). During the same period, the share allocated to cooperatives remained about the same, around 23 percent. According to FAS officials, these results have been achieved by conducting presentations throughout the United States encouraging small companies to promote their products overseas. In addition, FAS has provided state regional groups with additional funds to expand their outreach activities. We estimate there were 145 first-time recipients of MAP funds for brand-name promotions in fiscal year 1997. Our analysis of FAS data shows that these first-time recipients included 2 cooperatives, 125 small businesses, and 18 large companies. Legislation enacted in 1996 prohibited FAS from providing direct assistance for brand-name promotions to companies that are not recognized as small business concerns under the Small Business Act. Nonprofit trade associations, Capper-Volstead associations, and cooperatives were specifically exempted from this prohibition. As a result, FAS ended direct assistance to six large companies that had received direct assistance in fiscal year 1995. One of these large companies continued to receive MAP funds indirectly for brand-name promotions in fiscal years 1996 and 1997 by applying through two state regional groups. While the 1996 legislation prohibits only direct assistance to large companies, FAS recently decided to prohibit large companies (excluding cooperatives and certain associations) from receiving MAP brand-name funds indirectly through the trade associations, state regional groups, and state agencies. Fiscal year 1997 was the last year that FAS allowed large companies to participate, either directly or indirectly, in the MAP brand-name program. Consequently, 83 large companies that had received MAP brand-name assistance in fiscal year 1997 were expected to be eliminated from the brand-name program in fiscal year 1998.
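Share figures like those cited above (for example, small businesses’ share rising from 41 to 61 percent) come from aggregating award records by recipient size class. A minimal sketch of that computation follows, using hypothetical award records rather than FAS data.

```python
# Hypothetical brand-name award records: (recipient size class, dollars).
awards = [
    ("small", 20_000), ("small", 45_000), ("cooperative", 150_000),
    ("large", 60_000), ("small", 15_000),
]

# Aggregate award dollars by size class and express each as a share of
# total brand-name funds, the computation behind figures like 61 percent.
total = sum(amount for _, amount in awards)
for size in ("small", "cooperative", "large"):
    share = sum(amount for s, amount in awards if s == size) / total
    print(f"{size}: {share:.0%} of brand-name funds")
```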
Large businesses can still take part in MAP’s generic promotions. According to FAS officials, their decision to entirely eliminate large companies from the allocation of MAP funds for brand-name promotions responded to criticisms that MAP represented "corporate welfare" and recognized that small businesses need greater assistance in exporting. FAS first issued regulations in fiscal year 1995 to implement the statutory direction to establish a graduation requirement for MAP participants. These regulations limited assistance to 5 years per specific branded product per single market. They were later revised to limit assistance to 5 years per company per country market. Our projection, based on FAS data, suggests that the graduation requirement could affect half of the cooperatives and about a quarter of the small businesses that used MAP funds in fiscal year 1997 to promote their brand-name products. These entities face the prospect of losing MAP assistance for approximately 40 percent of their current promotions, totaling $9.2 million in fiscal year 1999. However, FAS used its statutory authority in December 1998 and waived the graduation requirement for all cooperatives. The effect of this decision reduced the impact of the graduation requirement on program participation to $4.3 million, affecting only brand-name promotions conducted by small businesses. To implement the graduation requirement, FAS established regulations in February 1995 limiting a company or a cooperative to 5 years of MAP assistance per "single market" per "specific brand product." FAS first applied the graduation requirement to companies receiving assistance for brand-name promotions in fiscal year 1994. While FAS officials recognize that many market segments can exist within a single country (for example, a particular geographic region, target audience, or demographic group), the rule defines "single market" as a "single country" to reduce the administrative burden on both the participant and FAS as well as to eliminate the need for interpretation. Under the 1995 regulations, FAS had discretion to determine whether two or more brand-name products were substantially the same product or different products. Some participants requested that FAS use its discretion to more narrowly define the term "single product." For example, representatives from an almond producers’ cooperative told us that they thought they should be able to follow a 5-year MAP promotion of their brand-name almonds in frozen yogurt in a particular country with a MAP promotion of their almonds in ice cream because they would be promoting a different type of brand-name product. FAS revised the regulations in June 1998 to limit each company to no more than 5 years of MAP funding for brand-name promotions per country. According to FAS officials, the new regulation simplifies program administration and allows FAS to share its resources more effectively with a wider variety of U.S. exporters and markets. After 5 years of assistance in a country, FAS officials told us, a company should have established itself in that market and be able to finance 100 percent of its market development costs. Upon graduating from these markets, companies are not excluded from the MAP program, because they can receive funds for brand-name promotions in other countries or take part in MAP’s generic program. FAS officials hope that the graduation requirement might encourage companies to enter new and promising markets that have been previously ignored.
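The revised rule (no more than 5 years of brand-name funding per company per country) can be applied mechanically to a funding history, which is essentially how the projections in the next paragraphs were built. A minimal sketch follows, with a hypothetical company and funding history.

```python
from collections import defaultdict

YEAR_LIMIT = 5  # maximum funded years per company per country

# Hypothetical funding history: (company, country, fiscal year) records.
history = (
    [("Acme Foods", "Japan", fy) for fy in range(1993, 1998)]   # 5 years
    + [("Acme Foods", "Korea", 1996), ("Acme Foods", "Korea", 1997)]
)

years_funded = defaultdict(set)
for company, country, fy in history:
    years_funded[(company, country)].add(fy)

def graduates(company, country):
    """True once a company has used its 5 funded years in a country,
    absent a waiver from the Secretary of Agriculture."""
    return len(years_funded[(company, country)]) >= YEAR_LIMIT

print(graduates("Acme Foods", "Japan"))  # True: 1993-1997 exhausts the limit
print(graduates("Acme Foods", "Korea"))  # False: only 2 funded years
```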
Our projection of FAS data suggests that 11 of the 22 cooperatives (50 percent) and 87 of the 370 small businesses (24 percent) that received MAP funds for brand-name promotions in fiscal year 1997 could be affected by the 5-year graduation requirement in fiscal year 1999. These 11 cooperatives and 87 small businesses conducted a total of 445 brand-name promotions in fiscal year 1997, of which an estimated 183 of these promotions (or 41 percent) would not qualify for MAP funding in fiscal year 1999 if there were no waivers to the graduation requirement. The graduation requirement could impact MAP brand-name promotions in some country markets more than in others (see table 1). Almost two-thirds of MAP’s $29 million budget for brand-name promotions supported company and cooperative promotions in nine countries in fiscal year 1997. Our analysis estimates that 7 percent of the companies and cooperatives with brand-name promotions in Korea and Taiwan could graduate from MAP assistance in fiscal year 1999 compared to the approximately 25 percent share of companies and cooperatives that face graduation in Japan, the United Kingdom, and Canada. However, some country markets will not be as significantly affected by the graduation requirement; for example, 65 percent of the companies and cooperatives conducting MAP-assisted brand-name promotions in the People’s Republic of China were using the program for the first time in that country in fiscal year 1997. To study the long-term impact of the graduation requirement on MAP participation, we analyzed the top 10 cooperatives and small businesses that received MAP brand-name funds in fiscal year 1997. Our analysis estimates that 9 of the 10 recipients would graduate from at least one country market in fiscal year 1998. In addition, four recipients would face the prospect of graduating from at least half of their country markets in fiscal year 1998 (see table 2). For example, International American Supermarkets is expected to graduate in fiscal year 1998 from seven of its nine Middle Eastern markets. In addition, this company is expected to graduate from its remaining two country markets in fiscal year 2000. International American Supermarkets has received over $3.1 million (in 1997 dollars) since 1989 to promote grocery products in these markets. The impact of the graduation requirement was reduced when FAS decided in December 1998 to waive the graduation requirement for all cooperatives. While the legislation encourages graduation, it also gives the Secretary of Agriculture authority to waive the graduation requirement and extend MAP brand-name assistance beyond 5 years for a particular company if it is determined that further assistance is necessary to meet the objectives of the program. According to FAS’ Deputy Administrator for Commodity and Market Programs, FAS extended MAP assistance to all cooperatives for brand-name promotions beyond the 5 year limit for two reasons: (1) some cooperatives represent the interests of thousands of individual growers and (2) some cooperatives represent a large share of U.S. production and could be viewed as trade associations that promote a generic product. We estimated that absent a waiver, small companies and cooperatives with promotions totaling $9.2 million would have graduated in fiscal year 1998. However, the potential impact of the graduation requirement was reduced to $4.3 million when FAS waived the requirement for all cooperatives. 
The lower figure represents 15 percent of the $29 million MAP budget for brand-name promotions, or about 4 percent of MAP’s total budget of $118.8 million in fiscal year 1997. Of the 11 cooperatives that could have been impacted by the graduation requirement in fiscal year 1999, 4 have been in some country markets since the program’s inception. For example, our projections indicate that Sunkist and Blue Diamond Growers would graduate from 9 of their 14 country markets in 1998 if FAS had not waived the graduation requirement. Sunkist has received a total of $70.6 million in program funds to promote fruit in five countries, and Blue Diamond has received $27.4 million to promote almonds in four countries between 1986 and 1997 (in 1997 dollars). Beginning with the fiscal year 1994 budget allocations, participants that receive MAP funds directly from FAS must certify that the assistance supplements, not supplants, their own funding for foreign market development (the concept of "additionality"). Furthermore, trade associations, state regional groups, and state agencies must assure that applications for indirect MAP assistance include completed and accurate certification statements. The certification requirement is meant to ensure that MAP funds do not substitute for promotional expenditures recipients would have otherwise undertaken with their own funds. According to FAS officials, no recipients have been disqualified from the program because they failed to meet the certification requirement. FAS’ Compliance Review Staff (CRS) regularly audits the participants that receive direct MAP funding and verifies that these participants and the recipients they fund have completed their certification statements. To determine whether MAP assistance (generic or brand-name) has not supplanted a participant’s foreign market development expenditures, the Director of CRS told us that CRS reviews the participant’s foreign market development budget and verifies that it is spending at least as much as it spent the previous year. CRS also considers variations in a recipient’s promotional strategies within a country and in new markets. According to the Director, CRS reviews supporting documentation each year for about 5 percent of all indirect recipients (15-20 companies and cooperatives). The Director reported that it is difficult to verify whether MAP funds supplement a participant’s own funds for foreign market development activities because it is hard to determine what a participant would have spent in the absence of MAP funds. According to FAS officials, they have no evidence based on the CRS audits that any participant has falsely certified regarding additionality. Nonetheless, a private consulting firm has been hired to review the effectiveness of MAP, and the work plan includes a section addressing whether MAP funds supplement or supplant the funds of MAP participants. FAS officials expect this project to provide the best analysis to date on the topic of additionality. FAS officials continue to attribute substantial macroeconomic and market-level benefits, including increased income and employment, to MAP. Specifically, FAS estimates that the cumulative effect of MAP expenditures since 1986 is $5 billion of additional agricultural exports in 1997 which, in turn, FAS says generate 86,500 jobs and $12 billion in additional economic activity.
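These headline figures imply fixed ratios, which a back-of-envelope computation makes explicit. The ratios below are derived from the numbers in the text; they are not FAS’s actual input-output model.

```python
# Figures reported in the text (FAS estimates for 1997).
additional_exports = 5_000_000_000    # additional agricultural exports, dollars
jobs_generated = 86_500               # jobs FAS attributes to those exports
economic_activity = 12_000_000_000    # additional economic activity, dollars

# Implied ratios: jobs per $1 million of additional exports, and
# dollars of activity per dollar of additional exports.
print(jobs_generated / (additional_exports / 1_000_000))  # 17.3
print(economic_activity / additional_exports)             # 2.4
```

As the discussion that follows notes, the jobs figure rests on the assumption that the resources involved would otherwise be unemployed.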
This estimate is based on the projected impact of $1.25 billion (1997 dollars) of spending between 1986 and 1997 on consumer food export promotion through MAP (including an estimated $5 million per year in Foreign Market Development Program expenditures). Our review of the recent estimates of MAP’s impact on the macroeconomy and the methodology used to derive them suggests that the benefits attributed to MAP by FAS are overstated. The model FAS used to generate these estimates assumes that all of the resources (land, labor, and other inputs) associated with additional agricultural exports would be unemployed in the absence of government market promotion efforts. As we previously reported, this approach is inconsistent with OMB cost-benefit guidelines, which instruct agencies to assume that resources would be fully employed, and leads to an overstatement of benefits of the program. In addition, FAS continues to assume that all of the market development efforts subsidized through MAP funding are in addition to what the private sector would do in the absence of the government program efforts. This position differs from the view of the Trade Promotion Coordinating Committee (TPCC). In its 1998 annual report, the TPCC concluded that government agencies currently do not have the means to measure whether exports would have taken place without government intervention and that the results of studies of net economic effects of export promotion are speculative. FAS officials directed us to academic studies that they identified as demonstrating the positive effect of MAP on agricultural exports. We examined the relevant studies of MAP’s impact in specific markets and found that they reveal mixed results. Of the studies that estimate MAP’s impact on agricultural exports in specific foreign markets, all report positive benefits in one or more of the targeted markets, but most of these studies also report that MAP funding failed to influence exports in other targeted markets. Moreover, caution should be used in interpreting the benefits ascribed to MAP in these studies, since the studies that report positive effects from MAP funding employ a methodology that results in an upward bias on the estimated benefits (see app. II for a more detailed review of these studies). Thus, it is difficult to generalize about the impact of MAP based on the results of these market-level studies. FAS officials responsible for developing agency strategic and performance plans in accordance with Results Act requirements are undertaking steps to redesign performance measures as a basis for developing market-level strategies. FAS recently requested the National Association of State Departments of Agriculture to develop performance measures in order to improve the system for evaluating MAP’s effectiveness in selected markets and for assessing the overall impact of the program. The goal of this initiative is to develop a more effective mechanism for allocating MAP program resources through new market-level studies. This initiative provides an opportunity for FAS to overcome the limitations of existing studies by carrying out a more rigorous analysis of the impact of the program. This new approach is reinforced by a direction in a recent Appropriations Committee conference report that the Secretary of Agriculture produce a comprehensive analysis of the economic impact of MAP. We obtained oral comments from FAS on a draft of this report. 
FAS said that it agreed with the report’s presentation of the operational changes to MAP that FAS has implemented in response to legislative direction. However, FAS officials disagreed with the report’s conclusion that their economic analyses tended to overstate MAP’s macroeconomic benefits. They said that FAS uses a standard USDA methodology to convert MAP’s estimated export impacts to "supported employment" effects. These multipliers are taken from the input-output model of the U.S. economy developed and updated each year by USDA’s Economic Research Service. They also said that they recognize that their methodology is not consistent with OMB Circular A-94 guidance that "generally, analyses should treat resources as if they were likely to be fully employed." FAS officials said they believe that OMB’s guidance is unrealistic and unduly restrictive. FAS analysis assumes slack (less than fully employed) resources, especially labor. FAS officials cite evidence of labor unemployment as proof of slack resources in the U.S. economy. FAS officials state that their estimate of the number of jobs supported by MAP is small compared to the total number of new jobs created each month in the U.S. economy, and this reinforces their belief that OMB’s full employment assumption is unrealistic for a small program like MAP. Furthermore, FAS officials note that USDA is not the only government agency that uses employment multipliers to estimate the macroeconomic benefits of exports. We note that the guidelines in OMB Circular A-94 apply to all agencies of the executive branch and to any analysis used to support government decisions to renew programs such as MAP. We believe that the guidelines provide a sound basis on which to evaluate programs such as MAP and their contributions to the national economy. FAS also provided some technical comments and, where appropriate, they have been incorporated. To report on actions FAS took to implement legislative reforms enacted by Congress in the mid-1990s, we reviewed MAP legislation and regulations. We also interviewed and collected documents from FAS officials from the Commodity and Marketing Programs Division who are responsible for the management and oversight of MAP, as well as officials from FAS’ Compliance Review Staff and USDA’s Office of Inspector General. In addition, we interviewed and gathered documents from five MAP participants to understand how different types of program participants (that is, trade associations, state regional groups, and cooperatives) participated in the program. Our review of the program relied on data from fiscal years 1986 to 1997. At the time of our review, fiscal year 1998 data on company participation in the MAP brand-name program was not available. A fiscal year represents the year for which the MAP funds were authorized and allocated; however, these funds may have been expended the following fiscal year depending on the recipient’s marketing year. For the years of available data, we analyzed actual expenditure data, with the exception of fiscal year 1997, because only budget data was available at the time of our review. We did not verify the accuracy or completeness of the electronic data. To determine the impact of FAS’ implementation of legislative reforms to give priority to small-sized businesses when funding the MAP brand-name program in fiscal years 1994 and 1997, we analyzed changes in the number and shares of small businesses participating in MAP’s brand-name program.
We also examined the size of the 22 cooperatives participating in the brand-name program for fiscal year 1997 by comparing the SBA criteria—the same criteria used by companies to qualify themselves as small-sized businesses for MAP brand-name funds—to data obtained from business references and other sources on the total number of employees and annual sales for each cooperative. To determine the impact of the graduation requirement on MAP participation, we projected the number of companies and their promotions that might be affected. Fiscal year 1998 data was not available at the time of our review, so we estimated the number of companies and cooperatives expected to graduate from certain country markets in fiscal year 1998 based on their funding history for each country. To estimate the amount of funds expected to be released due to the graduation requirement, we assumed the amount of MAP funds these graduating companies and cooperatives would have received in fiscal year 1998 would be the same as the amount they received for the country promotion in fiscal year 1997. Our review of graduation did not include any consideration of the number of years that trade associations, cooperatives, and companies had received MAP funds to support their country-specific generic promotions; this was outside the scope of our review. To determine the impact of the legislative requirement that MAP participants certify that MAP funds supplement, not supplant, their expenditures for promotions in foreign markets on MAP participation, we interviewed FAS officials responsible for the management and oversight of MAP, including representatives from FAS’ Commodity and Marketing Programs Division and Compliance Review Staff. We also reviewed compliance reports and other documents provided by the Compliance Review Staff. In order to provide a review of the economic impact of MAP, we focused our analysis on those studies that estimated or analyzed the economic impact of MAP and its predecessors (the Market Promotion Program and the Targeted Export Assistance program). We revisited some of the studies that were analyzed in a prior review of all FAS export promotion programs as well as more recent estimates by FAS of the program’s economic impact. In our review of studies of MAP’s impact on U.S. agricultural exports and related effects on employment and gross national product, we performed two tasks. First, we relied on our previous analysis of FAS’ methodology for estimating effects from MAP funding on agricultural exports, employment generation, and income effects and compared this methodology with OMB guidelines for cost-benefit analysis. We spoke with FAS officials charged with the development and implementation of the 1993 Government Performance and Results Act-mandated strategic and annual performance plans to gather their opinion of the applicability and reliability of FAS estimates and methodology. Also, we considered the methodology FAS used to derive its macroeconomic estimates from the perspective of standard economic analysis of the effects of subsidies on the target sector and related sectors. In addition, we also reviewed how the TPCC reported benefits of MAP and other export promotional spending in its annual National Export Strategy. Second, to obtain evidence on the impact of MAP on sectoral exports, we reviewed analyses provided to us by FAS as well as other applicable research analyses from academic publications of the impact of the program on particular markets. 
When reviewing these studies for the current analysis, we focused on both the findings of economic impact and the methodology used to derive results. The available studies focused on MAP-funded generic promotions. We synthesized this information to present an overview of the impact of MAP funding on exports and the U.S. economy. We spoke to officials at FAS and the National Association of State Departments of Agriculture, which is collaborating with FAS in developing performance indicators for the MAP program, and we reviewed the National Association of State Departments of Agriculture’s Request for Proposal for an evaluation project for MAP. We conducted our work at FAS in Washington, D.C., and completed telephone interviews with representatives from three trade associations, one cooperative, and one state regional group located throughout the United States. We performed our review from January 1998 to December 1998 in accordance with generally accepted government auditing standards. As agreed with your offices, we will send copies of this report to Senator Richard G. Lugar, Chairman, and Senator Tom Harkin, Ranking Minority Member, Senate Committee on Agriculture, Nutrition, and Forestry; Representative Larry Combest, Chairman, and Representative Charles W. Stenholm, Ranking Minority Member, House Committee on Agriculture. We are also sending copies of this report to the Honorable Daniel Glickman, Secretary of Agriculture. We will also make copies available to others on request. This review was done under the direction of JayEtta Z. Hecker, Associate Director. If you or your staff have any questions concerning this report, please contact Phillip Thomas, Assistant Director, at (202) 512-9892. Major contributors to this report are listed in appendix III. Since its inception in 1986, the Market Access Program (MAP) and its predecessors, the Targeted Export Assistance program (TEA) and the Market Promotion Program (MPP), have provided funds to commercial firms and nonprofit organizations to support the promotion of U.S. agricultural goods in foreign markets. TEA was first authorized in 1985 to reverse a decline in U.S. agricultural exports and to counter the unfair trade practices of foreign competitors. Only those commodities adversely affected by unfair foreign competitor practices were eligible for assistance. When Congress reauthorized the program in 1990, it was renamed the Market Promotion Program, and assistance was no longer restricted to commodities adversely affected by unfair competitor practices. In 1993 Congress initiated three major program changes. The first directed that the Foreign Agricultural Service (FAS) give small businesses priority in the allocation of MAP funds for brand-name promotions. The second change established a graduation requirement with a 5-year limit on the use of MAP funds to promote a "specific branded product" in a "single market" unless FAS determines that further assistance is deemed necessary to meet program objectives. The third change was a requirement that each participant certify that MAP funds supplement its foreign market development expenditures. With the Market Promotion Program’s 1996 reauthorization, Congress changed the program name to MAP, and, among other things, prohibited direct assistance to companies that are not recognized as small business concerns under the Small Business Act, except for cooperatives and certain associations.
The 1996 reauthorizing legislation also capped annual funding for MAP at $90 million for fiscal years 1996-2002 (see fig. I.1 for annual MAP appropriations, fiscal years 1986-97). Table I.1 presents a list of all participants who received MAP funds directly during fiscal year 1997 along with the amount of MAP funding they were allocated and the percent they spent on generic and brand-name promotions. [Table I.1, "MAP Participants and Budgets—Generic and Brand-name—Fiscal Year 1997," is not reproduced here; entries carried over from the source include the American Indian Trade and Development Council, the American Seafood Institute/Rhode Island Seafood Council, the California Cling Peach Growers Advisory Board, the National Potato Research & Promotion Board, the Eastern US Agricultural & Food Export Council (USEAFEC), the Mid-America International Agri-Trade Council (MIATCO), the National Association of State Departments of Agriculture (NASDA), the Southern United States Trade Association (SUSTA), the Western US Agricultural Trade Association (WUSATA), and Welch Foods Inc. (National Grape Cooperative).] The 10 country markets with the largest MAP budgets in fiscal year 1997 represent all countries that had MAP generic and brand-name promotions totaling $2 million or more (see fig. I.2). Approximately 65 percent (or $77 million) of the total $118.8 million in MAP funds was budgeted for promotions in these markets in fiscal year 1997. The remaining 35 percent of the MAP funds was budgeted for generic and brand-name promotions in 90 other country markets. Approximately $2.2 million of the MAP budget in fiscal year 1997 supported efforts conducted in the United States that underpinned foreign market development activities. About 32 percent of the budget covered administrative costs expected to be incurred by four state regional groups for such items as rent, salaries, and supplies. Approximately 17 percent of the funds were budgeted for anticipated travel expenses by staff from seven trade associations and three state regional groups. Another 28 percent supported activities such as demonstrations, media, public relations, promotions, and trade shows. The majority of these funds supported preparations at the largest food export trade show in the United States. [From fig. I.2: Hong Kong - $5.1 million; 18 trade organizations received $3.2 million in generic funds and 71 companies received $1.9 million in brand-name funds. A footnote states that, due to rounding, this amount does not reflect the $8,000 of MAP funds supporting one brand-name promotion in the United States.] A total of 475 recipients participated in the MAP brand-name program in fiscal year 1997. Four cooperatives (Sunkist, Welch Foods Inc., Ocean Spray, and Blue Diamond) received MAP funds for brand-name promotions directly from FAS. All other companies and cooperatives applied indirectly to FAS for MAP funds for brand-name promotions through trade associations, state regional groups, and state agencies. The amount of brand-name assistance awarded to each recipient ranged from $1,500 to $2.6 million; however, almost half of the awards were in amounts less than $25,000 (see table I.2). The studies that analyzed the effect of MAP and its predecessor programs were for the most part carried out under the auspices of university-affiliated institutes and organizations. Nine universities are affiliated with the National Institute for Commodity Promotion Research and Evaluation (NICPRE). NICPRE is an offshoot of the Research Committee on Commodity Promotion (NEC-63), which is a component of the land grant committee structure to coordinate research in agriculture and related fields.
FAS officials identified a number of market-level studies published by NICPRE and NEC-63 that they said showed MAP's economic benefits to agriculture through increased exports and market shares for specific commodities. Our review found that the studies provide mixed evidence of a positive impact of MAP-funded promotions at the market-level unit of analysis. The studies also vary in terms of their functional forms, assumptions, and independent variables. Some models are more completely specified in that they include variables measuring income, the prices of substitute and complementary goods, exchange rates, and long-term trends. However, others lack one or more of these important variables, raising the possibility of biased estimators due to model misspecification (a stylized sketch illustrating this point and the gross-versus-net returns issue discussed below appears at the end of this discussion). The presentation of the econometric estimation of the models also varies. Some studies are rigorous, while others fail to present complete diagnostics of the model performance. Few studies show an unambiguously positive effect of government promotional activities on exports. For example, a study of the effects of FAS-funded promotions for U.S. red meat (pork, veal, and beef) in the Pacific Rim countries showed a positive result in the case of South Korea and insignificant results for the other three countries included in the analysis. Also, an analysis of the effects of government-funded promotions of meat in Japan showed a positive influence on the demand for U.S. beef but found no evidence that advertising and promotion expenditures had an expansionary effect on the demand for U.S. pork and poultry products. Additionally, a number of the market-level studies that find positive effects associated with government-subsidized programs are incomplete in their analysis, and this incompleteness imparts an upward bias to the estimated effects of MAP-funded promotions. Because these studies exclude cost factors, program administrators cannot be assured that the impact remains positive once increased costs are accounted for. Most studies only calculate the expansion of exports associated with a dollar input of MAP advertising. For example, one study finds that "$1,000 spent in Japan yields an increased revenue of approximately $5,850" (the cumulative effect after 40 years) for U.S. walnut producers. This and similar types of estimates that report "gross returns" do not consider the production and transportation costs of these additional exports and thus fail to determine whether the promotion has positive net economic returns. Also, as one study notes, it is not always possible to take into account the potentially large advertising and promotion expenditures made by private firms, which would reduce the computed increase in exports attributed to Market Access Program efforts. It should be added that only a few of these studies take into account the effects of promotional activities on other agricultural exports or on market shares of competitor countries. Advertising and promotion of U.S. brand-name and generic products can have considerable spinoff effects (sometimes called "halo effects"), both positive and negative, for related products and competitor firms and/or countries.
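Before turning to an example of these spinoff effects, a stylized sketch may help fix the two methodological points above. The variables are those named by the studies; the log-linear form, the one-period lag on promotion spending, and the cost threshold worked out below are assumptions made here for exposition, not a reconstruction of any particular study's model. A more completely specified export-demand equation would take a form such as

\ln Q_t = \beta_0 + \beta_1 \ln P_t + \beta_2 \ln P_t^{s} + \beta_3 \ln Y_t + \beta_4 \ln E_t + \beta_5 \ln A_{t-1} + \beta_6 t + \varepsilon_t ,

where Q_t is the volume of U.S. exports, P_t the product's own price, P_t^{s} the prices of substitute and complementary goods, Y_t importer income, E_t the exchange rate, A_{t-1} lagged promotion expenditure, and t a long-term trend. Omitting one of these regressors, say Y_t or E_t, pushes its influence into the error term and can bias the estimate of the promotion effect \beta_5. The gross-versus-net returns problem can be stated just as simply:

\text{net return} = \Delta R - c\,\Delta Q - A ,

where \Delta R is the added export revenue attributed to a promotion outlay A, and c\,\Delta Q is the cost of producing and transporting the additional volume \Delta Q. In the walnut example, \Delta R = \$5{,}850 and A = \$1{,}000 (cumulated over 40 years and ignoring discounting), so the net return is positive only if producing and shipping the extra exports costs less than \$4{,}850, that is, less than about 83 percent of the added revenue. Because the studies do not report c or \Delta Q, this calculation cannot be completed from their results.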
A study of U.S. apple exports to Singapore and the United Kingdom found that while U.S. government-subsidized marketing and advertising had a positive impact on the U.S. market share and value of exports to the United Kingdom, U.S.-funded promotions in Singapore mainly benefited the foreign competitors in the market. According to that study, which FAS officials cite as evidence of successful MAP funding, Chilean and French apple producers would be the main beneficiaries of the MAP promotions in Singapore, experiencing increases in export shares 3 to 10 times greater than those of U.S. producers. This result shows the importance of taking into account both direct and indirect effects, including concomitant advertising by other U.S. firms and sectors and by major competitors. In summary, the market-level studies that we reviewed revealed mixed results and do not allow generalization about MAP's impact on agricultural exports. Estimations revealed both positive and insignificant effects associated with MAP promotional spending. In some cases, the methodology employed results in an upward bias on the estimated effect of MAP. Also, the effects on other U.S. agricultural markets or on the agricultural exports of competitor nations are unclear. Major contributors to this report: Christine M. Broderick and May M. Lee. | Pursuant to a congressional request, GAO reviewed the Department of Agriculture's implementation of legislative reforms to the Market Access Program (MAP) and their impact on program participation.
GAO noted that: (1) as directed by Congress, the Foreign Agricultural Service (FAS) implemented operational changes to MAP; these changes have affected program participation and distribution of funds; (2) since fiscal year (FY) 1994, FAS has increased the number of small businesses participating in MAP to promote brand-name products as well as small businesses' share of program funds; (3) as required by statute, FAS prohibited direct assistance for brand-name promotions to large companies beginning in FY 1996; (4) this prohibition does not apply to cooperatives and certain associations; (5) also, beginning in FY 1998, FAS prohibited indirect assistance to large companies; (6) FAS implemented a graduation requirement that will affect about a quarter of the small businesses with brand-name promotions totaling $4.3 million in FY 1999, as well as the number of MAP brand-name promotions conducted in individual country markets; (7) this graduation requirement also could have affected about half of the cooperatives; however, in December 1998, FAS chose to use its statutory authority and waive the graduation requirement for all cooperatives, citing special considerations; (8) since FY 1995, FAS has required all participants to self-certify that MAP funds supplement, not supplant, their activities to develop new foreign markets for their products; (9) while FAS regularly verifies that the participants and the companies they fund have completed their certification statements, FAS' Director of Compliance Review Staff reports that it is difficult to ensure that these funds are additional because it is hard to determine what would have been spent in the absence of MAP funds; (10) also, this requirement has had no apparent impact on program participation; (11) questions remain about the overall economic benefits derived from MAP funding; (12) FAS estimates of MAP's macroeconomic impact are overstated because they rely on a methodology that assumes that the resources used were not employed prior to the funding; (13) GAO noted that this is inconsistent with Office of Management and Budget cost/benefit guidelines; and (14) in addition, the evidence from market-level studies is inconclusive regarding MAP's impact on specific commodities in specific markets. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
Reading First, which was enacted as part of NCLBA, aims to assist states and local school districts in establishing reading programs for students in kindergarten through third grade by providing funding through 6-year formula grants. The goal of the program is to ensure that every student can read at grade level or above by the end of third grade. To that end, Reading First provides funds and technical assistance to states and school districts to implement programs supported by scientifically-based reading research (SBRR), increase teacher professional development based on this research, and select and administer reading assessments to screen, diagnose, and monitor the progress of all students. NCLBA defines SBRR as research that (1) uses systematic, empirical methods that draw on observation or experiment; (2) involves rigorous data analyses that test stated hypotheses and justify general conclusions; (3) relies on measurements or observational methods that are valid; and (4) has been accepted by a peer-reviewed journal or approved by a panel of independent experts. Further, NCLBA requires states to adopt reading programs that contain the five essential components of reading: (1) phonemic awareness; (2) phonics; (3) vocabulary development; (4) reading fluency, including oral reading skills; and (5) reading comprehension strategies. While Education has responsibility for overseeing the Reading First program and states' implementation and compliance with statutory and program requirements, NCLBA places restrictions on what Education officials can require states to do. Specifically, Education is not authorized to mandate, direct, control, or endorse any curriculum designed to be used in elementary or secondary schools. Further, when Education was formed in 1979, Congress was concerned about protecting state and local responsibility for education and, therefore, placed limits in Education's authorizing statute on the ability of Education officials to exercise any direction, supervision, or control over the curriculum or program of instruction, the selection of textbooks, or the personnel of any school or school system. Every state could apply for Reading First funds, and states were required to submit a state plan for approval demonstrating how they would ensure that statutory requirements would be met by districts. Education, working in consultation with the National Institute for Literacy (NIFL), as required in NCLBA, established an expert review panel composed of a variety of reading experts to evaluate state plans and recommend which plans should be approved. In these plans, states were required to describe how they would assist districts in selecting reading curricula supported by SBRR, valid and reliable reading assessments, and professional development programs for K-3rd grade teachers based on SBRR. The law does not call for Education to approve or disapprove particular reading programs or curricula identified in state plans. When appropriate, the peer review panel was also to recommend clarifications or identify changes it deemed necessary to improve the likelihood of a state plan's success. NCLBA requires that Education approve each state's application only if it meets the requirements set forth in the law. Reading First allows states to reserve up to 20 percent of their funds for professional development; technical assistance; and planning, administrative, and reporting activities.
For example, states can use their funds to develop and implement a professional development program to prepare K-3rd grade teachers in all essential components of reading instruction. One model for supporting teachers' reading instruction involves hiring a Reading Coach who works with teachers to implement reading activities aligned with SBRR. Almost all states require Reading First schools to have a Reading Coach tasked with supporting teachers and principals with instruction, administering assessments, and interpreting assessment data. States that receive Reading First grants are required to conduct a competitive sub-grant process for eligible school districts and must distribute at least 80 percent of the federal Reading First grants they receive to districts. NCLBA and Education guidance provide states with flexibility to set eligibility criteria for school districts so that eligible districts are among those in the state that have the highest number or percentage of K-3rd grade students reading below grade level and (1) have jurisdiction over an empowerment zone or enterprise community, (2) have a significant number or percentage of schools identified as in need of improvement, or (3) are among the districts in the state that have the highest number or percentage of children counted as poor and school-aged for the purposes of Title I. NCLBA establishes priorities that states must consider when awarding a Reading First sub-grant, while also allowing states to establish other priority areas. For instance, NCLBA requires that the state sub-grant process give priority to districts with at least 15 percent of students or 6,500 children from families with incomes below the poverty line, but states also have some flexibility to establish additional priorities, such as a demonstrated commitment to improving reading achievement. The sub-grant process, along with the criteria at each stage, is summarized in figure 1. Districts are required to use their sub-grant funds to carry out certain activities identified in NCLBA. For example, districts must use these funds to select and implement reading programs based on SBRR that include the essential components of reading instruction, to select and implement diagnostic reading assessment tools, and to provide professional development opportunities for teachers. Additionally, districts are permitted to use Reading First funds in support of other activities, such as training parents and tutors in the essential components of reading instruction. States are required to report to Education annually on the implementation of Reading First, including their progress in reducing the number of students who are reading below grade level. Additionally, states are required to submit a mid-point progress report to Education at the end of the third year of the grant period. These mid-point progress reports are subject to review by the same expert peer review panel that evaluated state applications. If Education determines, after submission and panel review of a state's mid-point progress report and on the basis of ongoing Education monitoring, that a state is not making significant progress, Education has the discretion to withhold further Reading First grant payments from that state.
While these state reports to Education are intended to provide information on the effectiveness of Reading First, Education is also required to contract with an independent organization outside Education for a rigorous and scientifically-valid, 5-year, national evaluation of the program, with a final report scheduled to be issued in 2007. The Reading First program has relied on several key contractors to perform a number of program functions. For example, Education officials hired RMC Research Corporation, a company that provides research, evaluation, and related services to educational and human services clients, to provide technical assistance to states and districts that have received Reading First funding. According to Education officials, RMC contractors were tasked initially with providing specific, individualized guidance on the application process to state officials who requested it. RMC later became the national coordinator for the contract overseeing the National Center for Reading First Technical Assistance and its three regional subsidiaries: the Eastern Regional Reading First Technical Assistance Center (ERRFTAC) in Tallahassee, Florida; the Central Regional Reading First Technical Assistance Center (CRRFTAC) in Austin, Texas; and the Western Regional Reading First Technical Assistance Center (WRRFTAC) in Eugene, Oregon. In this role, RMC staff provide support to the TACs and their employees, coordinate weekly among the TACs, and conduct regular training seminars. Operated out of universities recognized by Education officials for their expertise in SBRR and related areas, the centers began operations in 2003 and are responsible for providing an array of technical assistance activities to states, including national and regional conferences, training and professional development, products and materials, and liaisons to national reading experts. Education officials also contracted with Learning Point Associates to provide technical assistance to states as they launched their sub-grant competitions. Once Reading First sub-grants had been awarded to local districts, Education contracted with the American Institutes for Research (AIR), a behavioral and social science research organization, to conduct annual monitoring visits to each state. These visits incorporate sessions with state officials, as well as visits to a few districts in each state, and are designed to assess states' and districts' compliance with their approved plans. After each monitoring visit, AIR representatives submit a report, including any findings of non-compliance, to Reading First officials. Reading First officials are to forward these reports to the cognizant state officials. States reported that there have been a number of changes and improvements in reading instruction since the implementation of Reading First. There has been an increased emphasis on the five key components of reading, assessments, and professional development, with more classroom time being devoted to reading activities. However, according to publishers we interviewed, there have been limited changes to instructional material. Similarly, many states that approved reading programs for districts to choose from report few changes to their lists of approved programs. In responding to our survey, 69 percent of all states reported great or very great improvement in reading instruction since the inception of Reading First.
One area in which states reported a change that may have contributed to the improvement of reading was the degree to which classroom instruction explicitly incorporated the five key components. In our survey, at least 39 states reported that Reading First schools had incorporated each of the five required components of reading into the curriculum to a great or very great degree as a result of Reading First. State and local officials we talked to during some of our site visits reinforced this opinion and in particular noted that Reading First teachers had awareness of and were more focused on the five components. In addition, the increased time devoted to reading activities under Reading First may have contributed to improvement. Several district officials we met with told us they were including a protected, uninterrupted block of time for reading instruction of 90 minutes or more per day, which the department's Guidance for the Reading First Program lists as a key element of an effective reading program. Education's Reading First Implementation Evaluation: Interim Report (The Interim Report) also found that Reading First teachers reported allocating over 90 minutes per day, on average, for a designated reading block. State officials reported improvement in reading instruction resulting from the use of assessments. In responding to our survey, one state official said, "One of the strengths of the Reading First program has been its strong adherence to SBRR and to the use of valid and reliable assessments in guiding instruction and program evaluation." A number of state and local officials we interviewed reported that the use of assessments changed after Reading First, especially in the way that teachers use data from these assessments to better inform reading instruction. Specifically, district officials we talked to during our site visits reported that teachers review students' assessment results to determine the areas in which they need more targeted instruction. One official also reported that assessment data can sometimes be used to identify successful teachers from whom other teachers can learn teaching techniques, with another official asserting that "Reading First has and is making a great impact on teachers' instructional practices, techniques, and strategies." Also, according to Education's Interim Report, researchers estimated that 83 percent of Reading First teachers cited assessment results as essential to organizing instructional groups, 85 percent cited the results as essential to determining progress on skills, and 75 percent cited the results as essential to identifying students who need reading intervention. According to our survey, most states also reported that the assessments they used differed greatly or very greatly from the ones they used prior to Reading First. States reported a wide variety of reading assessments on their state-approved lists, with over 40 different assessments listed. By far, the most frequently approved assessment was Dynamic Indicators of Basic Early Literacy Skills (DIBELS), approved by 45 states. Also, a few states reported to us that they were moving toward a more uniform systematic assessment system for the first time, whereas previously each school could choose which assessment it would use. Some state and district officials told us that having a more uniform and systematic assessment was beneficial, because, for instance, it allowed the officials to track and compare reading scores more easily.
Professional development is another area in which state officials noted improvement. All states reported improvement in professional development as a result of Reading First, with at least 41 states reporting that professional development for reading teachers improved greatly or very greatly in each of five key instructional areas. Further, a considerable majority of states reported great or very great increases in the frequency of professional development and the resources devoted to it (45 and 39 states, respectively). One state reported, "The provision of funding to be used to support statewide professional development efforts for K-3 reading has been an important aspect of the program." The Interim Report on the Reading First program highlights that a vast majority of Reading First teachers had received training on the five key components of reading. In our site visits, district officials confirmed that, for the most part, teachers in their Reading First classrooms had received training. However, in responding to our survey, 19 states did report some challenges in training 100 percent of Reading First teachers, with teacher turnover cited by 12 states as the reason some Reading First teachers might not have taken any type of Reading First training. Figure 2 summarizes reported improvements in professional development for teachers. Professional development was provided by a variety of federal, state, and private sources. Staff from the TACs and officials from at least one state reported providing professional development to districts customized to the individual district's needs and perceived future needs. Education's Interim Report on Reading First implementation noted that state Reading First coordinators in 33 states reported that state staff chose and organized all statewide professional development efforts and played a key role in selecting professional development topics for districts and schools. In addition, publishers we spoke with told us they often provide training to acclimate teachers to their products. Certain publishers of major commercial reading programs and assessments told us that since the implementation of Reading First, districts demand much more training. Specifically, according to some of the publishers and TAC staff we spoke with, districts have been interested in more in-depth workshops on particular topics such as teaching techniques and using and interpreting assessments. Finally, another aspect of professional development pertinent to Reading First is the presence of a Reading Coach. State and district officials reported that Reading Coaches receive training that better enables them to assist schools. Education's Interim Report found that each Reading Coach worked with an average of 1.2 schools and with 21 teachers to help implement activities aligned with SBRR. Three of the four major publishers of reading programs we spoke with reported that they had not made significant changes to the content of their reading programs as a result of Reading First. Two publishers stated that they made minor changes to their reading materials to make more explicit how the content of the existing programs aligns with the five components emphasized in Reading First. Two of them reported that they made changes to their programs based on the National Reading Panel's findings, which were issued prior to the enactment of Reading First.
For example, representatives of one company stated that they launched a new reading program based on the findings of the National Reading Panel that takes into account the requirements of Reading First. Despite limited changes to the actual instructional material, all the publishers noted a greater emphasis on assessing the efficacy of their reading programs as a result of Reading First. In an effort to measure the effectiveness of their programs, the publishers reported devoting more effort to researching and evaluating how effective their reading programs were at raising reading assessment scores. States followed two main approaches in selecting reading programs for districts: 22 identified a state-approved list of programs for districts to select from, while the other 29 did not have a state-approved list, thereby requiring districts in those states to self-select reading programs and determine, with some state oversight and subject to state approval, whether they satisfy the requirements of SBRR. Of the 22 states with approved lists, the reading program publishers most frequently represented on the lists were Houghton Mifflin, McGraw-Hill, and Harcourt (see table 1). At the school level, Education found in its Interim Report that these three reading program publishers were also the most frequently used, estimating that between 11 and 23 percent of schools used programs from one of them. Additionally, of the 22 states that identified a list of approved core reading programs for Reading First, 8 already had a list of approved core reading programs for adoption by all schools in their state prior to Reading First. Only two of these states reported removing reading programs from their lists (a total of six) because the programs did not meet Reading First requirements. According to Education's Interim Report, an estimated 39 percent of Reading First schools reported adopting a new core reading program at the beginning of the 2004-2005 school year, the year in which they received their Reading First grant, in contrast with an estimated 16 percent of non-Reading First Title I schools. States used a variety of sources to help them identify and select reading programs that met Reading First's criteria. For example, 15 of the 22 states with state-approved lists reported using the Consumer's Guide to Evaluating A Core Reading Program Grades K-3: A Critical Elements Analysis to make this decision. Other frequently used resources include criteria in the state's application for Reading First, information obtained at Reading First Leadership Academies provided by Education, and other states' approved lists. Based on responses to our survey, the table below summarizes approaches states used to develop their approved lists (see table 2). Our survey results also show that 25 of the 29 states reporting that they did not have a list of approved core reading programs said they provided guidance for districts and schools to identify core reading programs. Fifteen of these states reported directing districts and schools to conduct a review of reading programs using A Consumer's Guide to Evaluating a Core Reading Program. Other states reported providing a variety of guidance to districts to help them select reading programs supported by SBRR, including referring them to the approved lists of other states and reviews conducted by academic experts.
States varied in how they exercised their flexibility to set additional eligibility and award criteria as allowed by the Reading First program, and some states reported difficulty implementing key aspects of the program while others did not. In the areas in which they were given flexibility, states used a variety of criteria in determining eligibility and awarding sub-grants to eligible districts, such as awarding grants to districts that had previously received federal reading dollars. Education reported that over 3,400 school districts were eligible to apply for Reading First sub-grants in the states' first school year of funding. Of these districts, nearly 2,100 applied for and nearly 1,200 received Reading First sub-grants in the states' first school year of funding. In addition, 22 states reported that it was difficult or very difficult to help districts with reading scores that had not improved sufficiently. On the other hand, 28 states reported that it was easy or very easy to determine whether districts' applications met criteria for awarding sub-grants. States varied in how they exercised their flexibility to set school district eligibility criteria for sub-grants. The Reading First program provides states with some flexibility to define eligibility criteria within the statutory guidelines. For instance, while Reading First requires that states target districts with students in kindergarten through third grade reading below grade level, states have flexibility to set eligibility criteria based on the percentage and/or number of these students within districts. While 34 states reported electing to base eligibility on a percentage of schools with students reading below grade level, 18 states reported electing to base eligibility on a number of students reading below grade level. After applying eligibility criteria, Education reported that states determined that over 3,400 school districts were eligible to apply for Reading First sub-grants for states' first school year of funding, or about 20 percent of all school districts nationwide. However, the percentage of eligible districts varied greatly across the states, ranging from about 3 to 93 percent. Of those districts eligible to apply, 62 percent, or nearly 2,100 districts, did so, as summarized in figure 3 below. States reported a variety of reasons why eligible school districts did not apply, such as the prescriptive nature of the program, differences in educational philosophy, and inadequate resources for the application process. For example, officials from a few states reported that some districts did not have the capacity to write the grant application. An official from one state reported that some districts did not have the time and the staff to complete the sub-grant process. Furthermore, an official from another state reported that the application process was too lengthy and time-consuming to complete. Nineteen states reported in our survey that they exercised flexibility in establishing priorities when awarding Reading First sub-grants. States set a variety of additional priorities for awarding grants to school districts. For instance, six states reported that they gave priority to districts that already had other grants, such as Early Reading First grants, or indicated that they could somehow use their Reading First funds in combination with other resources to maximize the number of students reading at grade level.
In contrast, two states gave priority to districts that had not received other grant funding. In addition, two states gave priority to districts based on the population of Native Americans or students with limited English proficiency. After applying selection criteria, states awarded Reading First sub-grants to about 34 percent of eligible districts, or nearly 1,200 school districts, for states' first school year of funding. This represented about 56 percent of the 2,100 eligible districts that applied and nearly 7 percent of all school districts nationwide (see fig. 3; this funnel arithmetic is recapped in the sketch at the end of this section). Some states reported difficulty in implementing key aspects of the Reading First program. Twenty-two states reported that it was either difficult or very difficult to help districts with reading scores that had not improved sufficiently. Officials from one state said that this was difficult because it requires close examination of students' reading deficiencies and the commitment of school leadership. Officials from another state reported some difficulty in improving selected reading skills of students with limited English proficiency, who are concentrated in pockets around the state. Seventeen states reported that it was either difficult or very difficult to assess how districts applied SBRR in choosing their reading program. Finally, seven states reported difficulty implementing four or more of six key program aspects listed in our survey and shown in figure 4. Officials from one of these states told us that the difficulty with implementation was due to the newness of the program, for which everything had to be developed from scratch. On the other hand, states reported ease in implementing other key aspects. In particular, 28 states reported that it was easy or very easy to determine whether districts' applications met criteria for awarding sub-grants. For example, states are required to determine whether districts will adhere to the key components of the program, such as developing a professional development program or using reading assessments to gauge performance. Several states we interviewed suggested that it was easy to make this determination because some of the Reading First requirements were already in place in their states before Reading First was implemented. For example, some state officials we interviewed mentioned using reading assessments prior to Reading First. In addition, officials in one state told us that they already had a professional development program in place to train teachers on the state's reading program. Twenty-four states reported that it was easy or very easy to identify reading programs based on SBRR.
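As a rough consistency check on the sub-grant funnel described above, the reported counts can be combined directly. This is a sketch using Education's rounded figures ("over 3,400" eligible districts, "nearly 2,100" applicants, "nearly 1,200" awards), so the computed shares land within a point or two of the percentages reported in the text:

\frac{2{,}100}{3{,}400} \approx 0.62 \quad \text{(share of eligible districts that applied)},

\frac{1{,}200}{2{,}100} \approx 0.57 \quad \text{(share of applicants receiving awards)},

\frac{1{,}200}{3{,}400} \approx 0.35 \quad \text{(share of eligible districts receiving awards)}.

Because the roughly 3,400 eligible districts were about 20 percent of all districts nationwide, the implied national base is about 3{,}400 / 0.20 \approx 17{,}000 districts, and 1{,}200 / 17{,}000 \approx 0.07, consistent with the statement that awards reached nearly 7 percent of all school districts nationwide.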
Education officials provided states a wide variety of guidance, assistance, and oversight, but Education lacked written procedures to guide its interactions with the states and provided limited information on its monitoring procedures. Education's guidance and assistance included written guidance, preparatory workshops, feedback during the application process, and feedback from monitoring visits. Additionally, guidance and assistance were provided by Education's contractors, including the regional technical assistance centers. For the most part, state officials characterized the guidance and assistance they received from Education officials and contractors, especially the regional technical assistance centers, as being helpful or very helpful, and many also reported relying on the expertise of Reading First officials in other states. However, Education lacked controls to ensure that its officials did not endorse or otherwise mandate or direct states to adopt particular reading curricula. For example, according to state officials, Education officials and contractors made suggestions to some states to adopt or eliminate certain reading programs, assessments, or professional development providers. In addition, some state officials reported a lack of clarity about key aspects of the annual monitoring process, including time frames and expectations of states in responding to monitoring findings. Education provided a variety of written and informal guidance and assistance to states to help them prepare their applications. For example, three months after the enactment of NCLBA in January 2002, Education issued two key pieces of written guidance to states pertaining to the Reading First program and grant application process: the Guidance for the Reading First Program and Criteria for Review of State Applications. Education officials also sponsored three Reading Leadership Academies in the early part of 2002. The Academies were forums for state education officials to obtain information and build their capacity to implement key aspects of the Reading First program, including professional development and the application of SBRR. Education contracted with RMC Research Corporation to provide technical assistance to states related to the grant application process. States reported seeking guidance from RMC on various aspects of the Reading First application, in particular the use of instructional assessments (17 states) and instructional strategies and programs (14 states). Throughout the application process, both Education and RMC officials were available to address states' questions. In particular, Education officials provided feedback to states on the results of expert review panel evaluations of their applications. Consequently, a large number of states reported that Education required them to address issues in their applications, most commonly related to the use of instructional assessments (33 states) and instructional strategies and programs (25 states). See figure 5 for issues raised about state applications. Forty-eight states reported that they needed to modify their applications at least once, and 27 reported modifying them three or more times. Once grants were awarded, Education continued to provide assistance and contracted with RMC Research to oversee three regional TACs to help states implement Reading First. RMC established three TACs affiliated with state university centers in Florida, Texas, and Oregon, which RMC and TAC officials told us were selected based on their expertise in one or more areas central to the success of the Reading First program, such as professional development or reading assessment. Each technical assistance center was responsible for providing comprehensive support to each of the states in its geographic region (see fig. 6). States reported that they looked to these centers for guidance on a variety of issues, especially creating professional development criteria, using reading assessments, and helping districts with reading scores that had not improved sufficiently. According to TAC staff, some of the most common requests they receive pertain to the use and interpretation of assessment data and the use of Reading Coaches. TAC staff also told us that they catalog recurring issues or problems.
In addition, according to one RMC official and some state officials, the TACs provided support to states during implementation to help them supplement their capacity and expertise in evaluating whether reading programs proposed by districts were based on SBRR. For instance, staff from the TAC in Florida explained that some states in their region had asked for assistance in evaluating reading programs that had been in use prior to Reading First to gauge their compliance with the requirements of Reading First. Staff from the TAC emphasized that in reviewing these reading programs, they used the criteria in each state's approved plan as the basis for determining compliance with Reading First requirements. Officials in one state explained that while the staff at their state educational agency (SEA) possessed the knowledge necessary to conduct reviews of reading programs, scarce state staff resources would have made it difficult to conclude the reviews in the short time frame available. Though Education officials were aware of and initially condoned the TAC review process, they later advised all TACs to discontinue reviews of programs, to avoid the appearance of impropriety, after allegations were raised that Reading First officials had expressed preferences for specific reading programs. (Table 3 provides a summary of the types of guidance and assistance provided by Education and its contractors.) During the application and implementation phases of the Reading First program, many states came to rely on other unofficial sources of guidance, including other states' Reading First officials, in addition to the written guidance provided by Education. For example, as noted earlier, among the 22 states that had an approved list of reading programs for Reading First districts, 15 reported using A Consumer's Guide to Evaluating a Core Reading Program to assist them in reviewing potential reading programs. In addition, officials from 21 states reported that other states' Reading First Coordinators provided great or very great help during the Reading First state grant application process. Further, a number of state officials reported using the information from other states' websites, such as approved reading programs, to help inform their own decisions pertaining to the selection of reading programs. One state official explained, "With our limited infrastructure and dollars, we were never able to muster the resources needed to run an in-house programs review," and further that, "It worked well for us to use the programs and materials review results from larger states that ran rigorous review processes." Another state official reported that the state did not feel equipped to apply the principles of SBRR in evaluating reading programs and responded by comparing one state's review and subsequent list of reading programs to those of a few other states to make judgments about allowable programs. Most states reported making use of and being satisfied with the primary sources of guidance available to them over the course of the Reading First application and implementation processes. For example, 46 states reported making use of the two key pieces of Education's written guidance in preparing their Reading First applications. A majority of states also reported that these pieces of guidance provided them with the information needed to adequately address each of the key application components.
For example, over 40 states reported that the guidance related to the definition of sub-grant eligibility and selection criteria for awarding sub-grants helped them adequately address these areas in their applications. However, officials in eight states reported that the guidance on the use of instructional assessments did not provide them with the information needed to adequately address this area. (See fig. 7.) Overall, most state officials were also satisfied with the level of assistance they received from Education staff and their contractors in addressing issues related to the Reading First application and implementation processes. For example, officials in 39 states reported that Education staff were of great or very great help during the application or implementation process. Additionally, officials from 48 states reported that Education officials were helpful or very helpful in addressing states' implementation-related questions, which frequently dealt with using reading assessments and helping districts with reading scores that had not improved sufficiently. A number of state officials reported to us that they appreciated the guidance and attention they received from Reading First officials at Education. For example, one state Reading First Coordinator reported, "the U.S. Department of Education personnel have been wonderful through the process of implementing Reading First. I can't say enough about how accessible and supportive their assistance has been." Another state official remarked that the state's efforts to make reading improvements "would have been impossible without their [Education officials and contractors] guidance and support." Even officials from one state who had a disagreement with Education over its suggestion to eliminate a certain reading program characterized most of the guidance they received from Reading First officials as "excellent." However, one state official reported feeling that the technical assistance workshops have served as conduits for Education officials to send messages about the specific reading programs and assessments they prefer. Another state official reported that, "core programs and significant progress have not been defined" and that "SBRR programs are not clearly designated." According to responses to our survey, the three TACs also provided a resource for states seeking advice on issues pertaining to the implementation of their Reading First programs. Specifically, 41 states cited the Centers as helpful or very helpful in addressing states' inquiries related to the implementation of Reading First. In addition, on a variety of key implementation components, more state officials reported seeking information from their regional TACs than they did from Education officials (see table 4). We found that Education developed no written guidance, policies, or procedures to direct or train Education officials or contractors regarding their interactions with the states. Federal agencies are required under the Federal Managers' Financial Integrity Act of 1982 to establish and maintain internal controls to provide a reasonable assurance that agencies achieve objectives of effective and efficient operations, reliable financial reporting, and compliance with applicable laws and regulations. When executed effectively, internal controls work to ensure compliance with applicable laws and regulations by putting in place an effective set of policies, procedures, and related training.
We found that Education had not developed written guidance or training to guide managers on how to implement and comply with statutory provisions prohibiting Education officials from directing or endorsing state and local curricular decisions. Department officials told us that their practice was for program managers to consult the Office of General Counsel if they had questions regarding interactions with grantees. Reading First officials told us that it was their approach to approve each state's method and rationale for reviewing or selecting reading programs as outlined in each state's plan and that state compliance with program requirements, including adherence to the principles of SBRR, would then be assessed using the provisions of these plans as the criteria. Similarly, officials from Education's contractors responsible for conducting monitoring visits told us that they were instructed by Education to use state plans as the criteria for gauging states' compliance with Reading First reading program requirements, but that they were provided no formal written guidance or training. A senior Education attorney who is currently working with Reading First program officials told us that he was not aware that they had used this approach and that he felt that the statutory requirements should also play an important role in the monitoring process. Following the publication of the IG's report in September 2006, Education's Office of General Counsel has provided training to senior management on internal control requirements and has begun working with the Reading First office to develop procedures to guide the department's activities. Despite the statutory prohibition against mandating or endorsing curricula, and despite the department's stated approach of relying on state plans, and the processes articulated in them, to assess compliance, states reported to us several instances in which Reading First officials or contractors appeared to intervene to influence their selection of reading programs and assessments. For example, officials from four states reported receiving suggestions from Education or its contractors to adopt specific reading programs or assessments. Specifically, two states reported that it was suggested that they adopt a particular reading assessment. Similarly, Education's first IG report also documented one instance in which Reading First officials at Education worked in concert with state consultants to ensure that a particular reading program was included on that state's list of approved reading programs. In addition, states reported that Education officials or contractors suggested that they eliminate specific reading programs or assessments related to Reading First. Specifically, according to our survey results, officials from 10 states reported receiving suggestions that they eliminate specific programs or assessments. In some cases, the same program was cited by officials from more than one state. In one instance, state officials reported that Education officials alerted them that expert reviewers objected to a reading program that was under consideration but not named explicitly in the state's application. An official from a different state reported receiving suggestions from Education officials to eliminate a certain reading program, adding that Education's justification was that it was not aligned with SBRR.
In another instance, state officials pointed out that they had adopted a program that was approved by other states, according to the procedures in their approved state plan, but were told by Education officials that it should be removed from their list and that Education would subsequently take a similar course of action with regard to those other states as well. Also, Education officials did not always rely on the criteria found in state plans as the basis for assessing compliance. We found, for example, one summary letter of findings from a monitoring report in which Education officials wrote that "Two of the monitored districts were implementing reading programs that did not appear to be aligned with scientifically based reading research." Officials we spoke to in that state told us that they did not feel that they had been assessed on the basis of the procedures outlined in the state's plan, but rather that the reading program itself was being called into question. The IG also found that Reading First officials communicated with several states to discourage the use of certain reading programs or assessments, including Rigby and Reading Recovery. Officials from a few states also reported being contacted by Education regarding district Reading First applications or reading programs. For example, officials from four states reported being contacted by an Education official about a district application under consideration, and one of those states also reported being approached by staff from one of the regional technical assistance centers or another contractor for the same reason. Officials from each of these states indicated that the reason they were contacted stemmed from the reading programs being used by the districts in question. In a few cases, state officials reported being contacted by Education officials regarding the state's acceptance of a reading program or assessment that was not in compliance with Reading First. In one instance, state officials reported that Education contacted them outside of the normal monitoring process after they had obtained information from a national Reading First database maintained by a nonprofit research organization that districts in the state were using a specific reading program. Five states also reported receiving recommendations from Reading First officials or contractors to change some of the professional development providers proposed in their original grant applications. When asked about the specific providers identified for elimination, three of the states indicated that those providers were in-state experts. In one case, a state was told that the review panel cited a lack of detail about the qualifications of the state's proposed professional development consultants. We also found that while Education officials laid out an ambitious plan to monitor every state annually, the department failed to develop written procedures to guide its monitoring visits. For example, Education did not establish timelines for submitting final reports to states following monitoring visits, or specify how and when state officials were expected to follow up with Education officials regarding findings from the visits. As a result, states did not always understand monitoring response procedures, timelines, and expectations.
While we found that most state officials we spoke with understood that they were to be monitored with the use of their state plans as the criteria, they did not always understand what was required of them when responding to monitoring findings. For example, one state official reported being unaware that the state was supposed to respond to Education officials about findings from its monitoring report. An official from another state maintained that he/she was unclear about the process the state was to follow to respond to findings, and that no timeline for responding was provided to him/her. Furthermore, one state reported that findings were not delivered in a timely manner, and another state reported that Education did not address the state's responses to the monitoring findings. Key aspects of an effective monitoring program include communicating to individuals responsible for the function any deficiencies found during the monitoring. The Reading First program, according to state coordinators, has brought about changes and improvements to the way teachers, administrators, and other education professionals approach reading instruction for children in at-risk, low-performing schools during the critical years between kindergarten and third grade. To assist states in implementing this large, new federal reading initiative, Education has provided a wide range of guidance, assistance, and oversight that, for the most part, states have found helpful. However, Education failed to develop comprehensive written guidance and procedures to ensure that its interactions with states complied with statutory provisions. Specifically, Education lacked an adequate set of controls to ensure that Reading First's requirements were followed, while at the same time ensuring that it did not intervene in state and local curricular decisions. We concur with the Education IG's recommendations that the Department develop a set of internal procedures to ensure that federal statutes and regulations are followed, and we feel it is important for the Secretary to follow up on these recommendations to ensure that they are properly implemented. Additionally, we feel it is important for the department to have clear procedures in place to guide departmental officials in their dealings with state and local officials. While Education's stated approach was to rely on state plans as its criteria for enforcing Reading First's requirements, states reported several instances in which it appears that Education officials did attempt to direct or endorse state and local curricular decisions. Such actions would prevent states from exercising their full authority under the law and would violate current statutory restrictions. Balancing Reading First's requirements and the limits placed on the department requires Education to have clear, explicit, and well-documented procedures to guide its interactions with the states. Failure to do so places the department at risk of violating the law and leaves it vulnerable to allegations of favoritism. Additionally, while Education's annual monitoring effort for Reading First is ambitious, the department did not provide clear guidelines and procedures to states. As a result, states were not always aware of their roles and responsibilities in responding to findings of non-compliance, and Education was not always consistent in its procedures to follow up with states to resolve findings and let states know if they had taken proper actions.
Key aspects of an effective monitoring program include transparency and consistency. Letting all states know in a timely manner whether their plans to address deficiencies are adequate is important to ensure that findings are dealt with in an appropriate, timely, and clear manner. In addition to addressing the IG's recommendations to develop (1) internal policies and procedures to guide program managers on when to solicit advice from the Office of General Counsel and (2) guidance on the prohibitions imposed by section 103(b) of the DEOA, we recommend that, in order to ensure that the department complies with statutory prohibitions against directing, mandating, or endorsing state and local curricular decisions, the Secretary of Education also establish control procedures to guide departmental officials and contractors in their interactions with states, districts, and schools. In addition, to help the department conduct effective monitoring of the Reading First program, we recommend that the Secretary of Education establish and disseminate clear procedures governing the Reading First monitoring process. In particular, Education should delineate states' rights and responsibilities and establish timelines and procedures for addressing findings. We provided a draft of this report to the Department of Education and received written comments from the agency. In its comments, included as appendix III of this report, Education agreed with our recommendations and indicated that it will take actions to address them. Specifically, Education said it will provide written guidance to all departmental staff to remind them of the importance of impartiality in carrying out their duties and not construing program statutes to authorize the department to mandate, direct, or control curriculum and instruction, except to the extent authorized by law. On February 7, 2007, the Secretary of Education issued a memorandum to senior officers reminding them that it is important to maintain objectivity, fairness, and professionalism when carrying out their duties. The Secretary's memorandum also emphasizes the importance of adhering to the statutory prohibitions against mandating, directing, and controlling curriculum and instruction, and strongly encourages managers to consult with Education's Office of General Counsel early on to identify and resolve potential legal issues. Also, according to Education's written comments on our draft report and the Secretary's February 7, 2007, memorandum to senior officers, annual training will be required on internal controls, and this training will address statutory prohibitions against mandating, directing, or controlling local curriculum and instruction decisions. Regarding its monitoring process for Reading First, in its comments, Education said that it will develop and disseminate guidelines to states outlining the goals and purposes of its monitoring efforts, revise the monitoring protocols, and develop timelines and procedures on states' rights and responsibilities for addressing monitoring findings. Education also included in its response a summary of its actions and planned actions to address recommendations from the department's Office of Inspector General's recent report on the implementation of the Reading First program. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date.
At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Education, and other interested parties. Copies will also be made available upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about the report, please contact me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objective was to answer the following questions: (1) What changes have occurred to reading instruction since the inception of Reading First? (2) What criteria have states used to award Reading First sub-grants to districts, and what, if any, difficulty did states face in implementing the program? (3) What guidance, assistance, and oversight did Education provide states related to the Reading First program? To answer these questions, we collected both qualitative and quantitative information about the Reading First program from a variety of sources. We conducted a Web-based survey of the Reading First Directors in all 50 states and the District of Columbia. We also obtained and analyzed data from the Department of Education for each state on Reading First districts’ eligibility, applications, and awards for states’ first school year of funding. The first school year of funding varied across states: twenty-five states received their first year of funding in the 2002-2003 school year, and twenty-five states received their first year of funding in the 2003-2004 school year. To assess the reliability of these data, we talked to agency officials about data quality control procedures and reviewed relevant documentation. We excluded two states because of reporting inconsistencies, but determined that the data for the other states were sufficiently reliable for the purposes of this report. We also conducted semi-structured follow-up interviews with Reading First Directors in 12 states, mostly over the telephone. We conducted site visits to 4 of the 12 states. During the site visits, we met with state officials, local program administrators, and state-level technical assistance providers, as well as school officials from individual schools, including teachers, principals, and Reading First coaches. In identifying local sub-grant recipients to meet with in each state, we sought to incorporate the perspectives of urban, rural, and suburban school districts. We selected the 12 states to have diversity in a variety of factors, including geographic distribution, grant size, poverty rate, percentage of students reading at or below grade level, urban and rural distinctions, the presence of a statewide list of approved reading programs, and whether states had reported that they received guidance from Education officials advocating for or against particular reading programs or assessments. For both the survey and follow-up interviews, to encourage candid responses, we promised confidentiality. As a result, state survey responses are provided primarily in summary form or credited to unnamed states, and the states selected for follow-up interviews are not specifically identified. Furthermore, in order to adequately protect state identities, we are unable to provide the names of particular reading programs or assessments Education officials or contractors suggested a state use or not use.
We did not attempt to verify allegations made by state or local officials in their survey responses or during interviews or otherwise make any factual findings about Education’s conduct. We also visited or talked with administrators from each of the three regional Reading First Technical Assistance Centers, located in Florida, Texas, and Oregon, as well as RMC Research, the federal contractor tasked with administering the contract with the technical assistance centers. We also interviewed several publishers and other providers of reading curricula and assessments to obtain their views about changes Reading First has prompted in states, districts, and schools. We chose these providers to reflect the perspectives of large, commercial reading textbook programs that are widely represented nationwide on states’ lists of approved programs, as well as some other selected providers of reading curricula, including some that have filed complaints related to Reading First. We also interviewed Education officials about the implementation of the Reading First program. To obtain a better understanding of state program structure, as well as the nature of interactions between Education officials and state grantees, we reviewed state grant files, monitoring reports, and related correspondence for the 12 states where we conducted follow-up interviews. In addition, we reviewed NCLBA language authorizing Reading First, as well as statements of work articulating the responsibilities of the regional technical assistance centers and the contractor tasked with providing assistance to states in conducting local sub-grant competitions. We conducted our work from December 2005 through January 2007 in accordance with generally accepted government auditing standards. To better understand state implementation of the Reading First program, we designed and administered a Web-based survey of the Reading First Directors in all 50 states and the District of Columbia. The survey was conducted between June and July 2006 with 100 percent of state Reading First Directors responding. The survey included questions about curriculum; professional development; and state Reading First grant eligibility, application, award, and implementation processes. The survey contained both closed- and open-ended questions. For the open-ended questions, we used content analysis to classify and code the responses from the states, such as the publishers on states’ approved lists. We had two people independently code the material, then reconciled any differences in coding. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pre-testing draft instruments and using a Web-based administration system. Specifically, during survey development, we pre-tested draft instruments with one expert reviewer and Reading First Directors in four states during April and May 2006. In the pre-tests, we were generally interested in the clarity of the questions and the flow and layout of the survey. For example, we wanted to ensure definitions used in the survey were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section was appropriate.
On the basis of the pre-tests, the Web instrument underwent some slight revisions. A second step we took to minimize nonsampling errors was using a Web-based survey. By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the need for manual data entry, along with its associated errors and costs. To further minimize errors, programs used to analyze the survey data were independently verified to ensure the accuracy of this work. Reading programs under Reading First must include rigorous assessments with proven validity and reliability. Assessments must measure progress in the five essential components of reading instruction and identify students who may be at risk for reading failure or who are already experiencing reading difficulty. Reading programs under Reading First must include screening assessments, diagnostic assessments, and classroom-based instructional assessments of progress. Bryon Gordon, Assistant Director, and Tiffany Boiman, Analyst-in-Charge, managed this engagement and made significant contributions to all aspects of this report. Sonya Phillips, Sheranda Campbell, Janice Ceperich, and Andrew Huddleston also made significant contributions. Jean McSween provided methodological expertise and assistance. Sheila McCoy and Richard Burkard delivered legal counsel and analysis. Susannah Compton, Charlie Willson, and Scott Heacock assisted with message and report development. | The Reading First program was designed to help students in kindergarten through third grade develop stronger reading skills. This report examines the implementation of the Reading First program, including (1) changes that have occurred to reading instruction; (2) criteria states have used to award sub-grants to districts, and the difficulties, if any, states faced during implementation; and (3) the guidance, assistance, and oversight the Department of Education (Education) provides states. GAO's study is designed to complement several studies by Education's Inspector General (IG) in order to provide a national perspective on some of the specific issues being studied by the IG. For this report, GAO administered a Web-based survey to 50 states and the District of Columbia, and conducted site visits and interviews with federal, state, and local education officials and providers of reading programs and assessments. States reported that there have been a number of changes to, as well as improvements in, reading instruction since the implementation of Reading First. These included an increased emphasis on the five key components of reading (awareness of individual sounds, phonics, vocabulary development, reading fluency, and reading comprehension), assessments, and professional development, with more classroom time being devoted to reading activities. However, according to publishers we interviewed, there have been limited changes to instructional material. Similarly, states report that few changes occurred with regard to their approved reading lists. States awarded Reading First sub-grants using a variety of different eligibility and award criteria, and some states reported difficulties with implementing key aspects of the program. After applying federal and state eligibility and award criteria, Education reported that over 3,400 districts were eligible to apply for sub-grants in the states' first school year of funding.
Of these districts, nearly 2,100 applied for Reading First funding and nearly 1,200 received it. Education officials made a variety of resources available to states during the application and implementation processes, and states were generally satisfied with the guidance and assistance they received. However, Education developed no written policies and procedures to guide Education officials and contractors in their interactions with state officials and guard against officials mandating or directing states' decisions about reading programs or assessments, which is prohibited by the No Child Left Behind Act (NCLBA) and other laws. Based on survey results, some state officials reported receiving suggestions from Education officials or contractors to adopt or eliminate certain reading programs or assessments. Similarly, the IG reported in September 2006 that the Department intervened to influence a state's and several school districts' selection of reading programs. In addition, while Education officials laid out an ambitious plan for annual monitoring of every state's implementation, they did not develop written procedures guiding monitoring visits and, as a result, states did not always understand monitoring procedures, timelines, and expectations for taking corrective actions. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
DHS and other federal agencies help administer access control efforts across a wide range of physical facilities and assets in critical infrastructure sectors for which they are responsible. These federal administrators help operators of critical infrastructure assets safeguard the assets against attacks, sabotage, theft, or misuse while facilitating legitimate access to help ensure the flow of business and operations. In efforts to serve operator needs, administrators must also ensure compliance with federal laws and regulations. Federal agencies play a variety of roles in helping to strike this balance, including but not limited to (1) owning and operating certain types of infrastructure, (2) wholly operating and managing credentialing programs for specific kinds of infrastructure, (3) partially operating and managing credentialing programs, and (4) providing regulations and guidance to help owners and operators implement effective access control. For example, DHS’s Transportation Security Administration (TSA) manages the entire Transportation Worker Identification Credential (TWIC) qualification process, including enrollment, background checks, and credential issuance. However, for the Secure Identification Display Area (SIDA) badge, which facilitates access at airports and is managed in part by TSA, airport operators use TSA’s background check information to ultimately make final decisions about airport access and badge issuance. Similarly, NRC issues regulations related to access control requirements, which are to be implemented by commercial nuclear power plants, and DOD owns and operates U.S. military installations and facilities and uses the Common Access Card (CAC) as one method to facilitate access to semi-restricted areas within the installations. Workers who need access to multiple types of critical infrastructure to realize their livelihoods—such as truck drivers and carpenters—often encounter different access control efforts. For example, carpenters and contractors working at seaports and airports may require both a TWIC credential for the seaports and SIDA badges for each specific airport. Similarly, industries that work across different critical infrastructure sectors may encounter multiple federal access control efforts. For example, a company producing or storing regulated chemicals on both land and at seaports may encounter different access control efforts depending on the location of the facility. See Table 1 for a list of selected federally-administered critical infrastructure access control efforts and a brief description of each effort. While the six selected federally-administered access control efforts we reviewed had varying purposes, standards, or agency responsibilities, they generally included the following process components or phases of DHS’s credentialing lifecycle as depicted in Figure 1. Although the six efforts we reviewed generally follow similar processes, certain characteristics within these efforts can vary. For instance, we found that roles and responsibilities of the federal administrators and the operator stakeholders in credentialing varied. As an example, TSA is responsible for implementing the entire TWIC credentialing process, including enrollment and background checks, while maritime port facility operators—public port authority or privately operated facilities—are responsible for physically verifying the credentials that TSA has issued at ports.
In contrast, under the SIDA program, TSA and airport operators each have certain responsibilities for several elements of the credentialing process, including the criminal history record check. Table 2 summarizes the credentialing processes along with the roles and responsibilities of government and private entities for the six selected efforts we reviewed. Appendix I provides more detailed and specific information about each of the six selected efforts we reviewed. As previously mentioned, federally-administered access control efforts generally involve two groups of stakeholders: users and operators. Users are individuals who require access to critical infrastructure as an essential function of their job. Users we interviewed who require access to multiple types of critical infrastructure said they recognize the need for security, but are interested in streamlined access control efforts to facilitate legitimate access in a manner that minimizes the related time and costs they incur. They also told us they desire the maximum possible uniformity across standards for background investigation and disqualifying offenses to enhance predictability. Operators are individuals or groups who own or are responsible for managing facilities, such as airports, seaports, and chemical facilities, which may be privately owned, but can also include other government-owned facilities such as military installations. Operators we spoke with, who are responsible for providing security for critical infrastructure, said they need to maintain control over who enters their facilities so they can manage their accepted level of risk along with the associated costs. Operators said they prefer to retain maximum decision-making authority for granting access as well as over the type of credential they use to verify proper vetting. Based on our interviews with stakeholder groups and associations, the issues that had an impact on users and operators included (1) operators adding access requirements to vetting and background checks already conducted for federally administered programs; (2) credentials that cannot be used within and across critical infrastructure sectors; and (3) enrollment information that must be entered multiple times for the same user for similar purposes. It is important to note that although these issues can present challenges for various users and operators, they do not necessarily reflect a deficiency on the part of any specific access control effort or stakeholder group. For the most part, these six selected efforts were created separately in response to different needs, are largely governed by different laws and regulations, and were not necessarily designed to work together. User groups we interviewed expressed a desire to be able to predict denial of access based on clear and standardized requirements, while operator groups described the need for some variability in requirements across sites so they can manage their context-specific risks. Part of the eligibility vetting process for all six selected access control efforts we reviewed includes determining if an applicant is on the known or suspected terrorist list or has a criminal history with certain disqualifying offenses that warrant denial of access. Specific disqualifying offenses can vary across these federal efforts because of differences in the statutes that established the federal efforts.
This variability can create some level of complexity for the users of multiple federally-administered efforts, which is compounded when individual on-site critical infrastructure operators impose additional requirements. For example, according to association members representing carpenters, the lack of consistency around whether individuals can qualify for access has led to difficulties aligning staff with critical project tasks. As a result, the time associated with identifying disqualifying offenses can lead to challenges with meeting scheduled project timelines and budgets. For all six selected federal access control efforts we reviewed, regardless of the way the access effort was structured (whether the infrastructure is government or privately owned and whether an effort is wholly managed by a federal agency or guided by regulation), we found that on-site operators can make the final decision about who can enter their facilities. During our interviews with users and operators, we found that on-site operators across multiple infrastructure types have considered additional disqualifying offenses beyond federal baseline requirements. For example, with SIDA, CFATS, and commercial nuclear power plants regulated by NRC, the individual operator examines an applicant’s criminal background information and makes his or her own determination regarding access based on the operator’s perception of acceptable risk. In addition, we found that site-specific decision making was taking place under federal efforts for which the government was the sole vetting authority, such as TWIC. For example, port authority representatives told us that ports often perform site-specific background checks even for individuals with an issued TWIC. Port authority representatives provided two key reasons for conducting their own site-specific background checks in addition to the federal government’s process: (1) the ability to view an individual’s comprehensive and recent criminal history and (2) the ability to consider factors that may not be covered in TWIC’s list of disqualifying offenses. Some operator groups that we spoke with noted that the disqualifying offenses covered by programs like TWIC, which is designed to limit terrorism risk, do not cover the full range of safety and security concerns that are ultimately their responsibility to control. For example, a representative from the American Association of Airport Executives told us that airports have the discretion to consider requirements beyond federal regulations, which can be used to disqualify applicants for a SIDA. These additional requirements may reside in state or local ordinance and can vary from airport to airport. Consequently, some operators perform additional vetting on site, which allows them to align vetting policies and procedures with their accepted types and level of risk. Even when the federal government has sole vetting authority, facility operators and military installation commanders can choose to add vetting procedures to ensure they are managing their facilities based on their own accepted level of risk. For example, in 2009, DOD issued a policy directive to accept, among others, the TWIC and the CAC as identification documents authorized to facilitate physical access to installations. However, according to DOD headquarters officials, the military services maintained that the TWIC was not intended to be used for access to military installations, and consequently this policy has not been implemented uniformly across DOD.
For example, truck drivers holding TWIC cards and serving military installations have at times been required to undergo additional background and security checks. According to trucking industry representatives, inconsistency across DOD installations is a source of concern because they do not know what might be required of drivers who are trying to gain access. In addition, delays in gaining access to installations can result in increased costs for the truck drivers and potentially create cascading delays for their subsequent deliveries. Installation commanders have been given the authority to supplement the DOD procedures and process for accessing their installations to help ensure appropriate response to risk with real-time information and decision making. Consequently, the requirements for access can vary by installation. A TSA official also noted that certain sex offenders may be able to get a TWIC, depending on the offense and when the individual was convicted or released from incarceration. The TSA official stated that this is because sexual offenses are not permanently disqualifying under the TWIC statute and may not point to a terrorism or security risk to a regulated maritime facility; however, a military commander may not want to allow such an individual onto his or her installation, where families with young children are housed, and so may consider such offenses disqualifying. User groups we interviewed generally expressed a desire for reciprocity across federally-administered access control efforts, in particular when such efforts have or appear to have the same or similar underlying vetting processes and associated risks. Operators, on the other hand, had mixed perspectives on this issue. While some operators emphasized finding solutions to enhance access control reciprocity, others cited barriers to or challenges with a more uniform approach. Among the six selected access control efforts we reviewed, we found limited mechanisms to use one credential for access to similar facilities within and across sectors (i.e., reciprocity). Two examples of reciprocity are DOD’s CAC, which generally allows access into the semi-restricted areas of most military installations, and the CFATS Personnel Surety Program, which allows regulated high-risk chemical facilities to accept previously issued TWICs if they are electronically verified, or other credentials issued through a federally administered screening program if they are visually verified and the issuing screening program periodically vets enrolled individuals against the Terrorist Screening Database. Across the chemical sector, chemical facility access is facilitated by two different access control efforts depending on where the chemical facility is located—land-based facilities are governed by CFATS and maritime-based facilities are governed by TWIC. Officials from NPPD, the DHS component that administers CFATS, stated that NPPD explored allowing land-based chemical facility users to enroll in the TWIC program, but DHS has interpreted the Maritime Transportation Security Act of 2002 to provide limited authority to do so. However, individuals in the field of transportation who are eligible for a TWIC may apply for and receive that credential to satisfy the CFATS requirement. NPPD officials stated that facilities may also use screening results from other agencies, such as the Bureau of Alcohol, Tobacco, Firearms, and Explosives, as long as the vetting process includes checking against the Terrorist Screening Database.
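The CFATS acceptance rule just described reduces to a small decision function. The sketch below is illustrative only, assuming a hypothetical credential record with made-up field names; it is not the actual CFATS Personnel Surety Program logic.

```python
# Minimal sketch of the CFATS credential-acceptance rule described above.
# The Credential type and its field names are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class Credential:
    kind: str                         # "TWIC" or another federally administered credential
    electronically_verified: bool     # card checked with an electronic reader
    visually_verified: bool           # card checked by visual inspection
    issuer_revets_against_tsdb: bool  # issuing program periodically re-vets enrollees
                                      # against the Terrorist Screening Database

def cfats_may_accept(c: Credential) -> bool:
    """Accept a TWIC if electronically verified; accept another federal
    credential if visually verified and its program re-vets against the TSDB."""
    if c.kind == "TWIC":
        return c.electronically_verified
    return c.visually_verified and c.issuer_revets_against_tsdb

# Example: a TWIC that was only visually inspected fails the electronic-check rule.
print(cfats_may_accept(Credential("TWIC", False, True, True)))  # False
```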
The user stakeholders we interviewed expressed a desire to be able to enter their biographic information once during the registration and enrollment phase and have that information reused for other access control efforts and, where possible, for background check processing. Some operator groups we interviewed indicated that it was costly and inefficient for operators and users to enter biographic data multiple times. However, federal administrators are limited in their ability to share biographic information across screening efforts because of information technology and privacy considerations. Among the six access control efforts we reviewed, there are some mechanisms to reuse biographic information; however, there are no set requirements to do so. For example, operators may collect complete biographic information each time a user applies for a SIDA badge for an airport facility. A user group said that it would like to be able to reuse biographic information across airports, but TSA officials we interviewed stated that any proposed solution to reuse biographic information would be affected by privacy considerations. Under federal law, personal information collected and maintained by an agency for a particular effort may not be disclosed to another agency, with certain exceptions. In contrast, operators of NRC-regulated commercial nuclear power plants use the Personnel Access Database System (PADS), in cooperation with NRC, which allows users to provide biographic information once to access multiple facilities because potential employees sign a release of information form to use the system. Users and operators agreed that they benefited from the ease of PADS because they do not have to submit biographic information for each facility. They told us that PADS allows employee data to be shared across NRC nuclear power plant facilities in part because it is an industry-operated system that is not constrained by the federal privacy requirements that would apply to federal systems. DHS has established roles and responsibilities for supporting collaboration efforts among key stakeholders across critical infrastructure sectors. The department also uses partnership structures to enhance information sharing efforts aimed at strengthening critical infrastructure security. According to Presidential Policy Directive/PPD-21 (PPD-21), DHS is responsible for coordinating the overall federal effort to promote the security and resilience of the nation’s critical infrastructure, provide strategic guidance, and promote a national unity of effort, among other responsibilities. Within DHS, NPPD’s Office of Infrastructure Protection (IP) leads the coordinated national effort to mitigate risk to the nation’s critical infrastructure and is responsible for working with public and private sector critical infrastructure partners to enhance security efforts. Using a partnership approach, NPPD IP’s Sector Outreach and Programs Division works with owners and operators of the nation’s critical infrastructure to develop, facilitate, and sustain strategic relationships and information sharing efforts, including the sharing of best practices. NPPD IP also oversees and supports various partnership councils intended to protect and provide essential functions to enhance response efforts.
As reported in the National Infrastructure Protection Plan (NIPP), DHS has created partnership structures to collaborate and engage federal and nonfederal stakeholders in critical infrastructure discussions and to enhance critical infrastructure resilience efforts. These voluntary partnership structures provide forums for critical infrastructure stakeholders—federal, state, local, tribal, territorial, and private sector officials—to come together, exchange ideas, and leverage resources. The Critical Infrastructure Partnership Advisory Council (CIPAC) serves as a forum among critical infrastructure stakeholders to facilitate interaction and coordination of critical infrastructure activities, including planning, coordinating, and exchanging information on cross-sector issues and implementing security and resilience program initiatives. CIPAC membership consists of representatives from the Government Coordinating Councils (GCC) and Sector Coordinating Councils (SCC)—federal, state, and local agency officials and private owners and operators, respectively—who work together to coordinate strategies, activities, and policies across governmental entities within each of the 16 critical infrastructure sectors. The NIPP also establishes voluntary cross-sector councils to develop national priorities related to strengthening critical infrastructure security. Specifically, the Critical Infrastructure Cross-Sector Council provides a forum for SCCs to address cross-sector issues and interdependencies. This council’s activities primarily focus on identifying and disseminating critical infrastructure security and resilience best practices across sectors, and identifying areas where cross-sector collaboration could advance national priorities. Additional cross-sector councils representing state, local, tribal, and territorial partners serve as forums for members to (1) facilitate enhanced communication and coordination across sectors, (2) evaluate and promote implementation of risk-informed critical security and resilience programs, and (3) promote resilience activities in the public and private sectors, mainly through awareness, education, and mentorship on a wide variety of subjects, among other activities. Within NPPD, the Interagency Security Committee (ISC) serves as a forum for chief security officers and other federal agency officials to develop federal security standards and policies to enhance physical security of non-DOD federal facilities and engage with industry stakeholders to advance best practices. Collectively, these voluntary DHS partnership structures are designed to provide federal agencies a better understanding of the risks associated with critical infrastructure security and an enhanced awareness to make informed decisions about critical infrastructure priorities. According to NPPD senior officials, DHS voluntary partnership structures exist to discuss a variety of issues that have an impact on critical infrastructure security, but DHS has not used these structures to identify opportunities to harmonize regulated screening and credentialing efforts. The issues discussed earlier in this report about users’ and operators’ experiences across different access control efforts illustrate that there are administrative burdens and costs both within and outside of government when the efforts are inconsistent or their administration appears to be less efficient.
However, those findings also highlight that there are few, if any, obvious solutions, as many of the issues involve tradeoffs across competing needs of different stakeholder groups and ongoing consideration of the appropriate balance to manage risk without unnecessarily impeding business and operations. In that regard, NPPD officials stated there are challenges, and developing a one-size-fits-all approach to harmonizing credentialing procedures is not a feasible solution because of the complexities within and across critical infrastructure sectors. Nonetheless, they acknowledged that finding opportunities to harmonize efforts is a worthwhile goal to pursue. Guidance from DHS partnership structures and our best practices call for entities to identify and share best practices and to collaborate by seeking means to address needs by leveraging resources and establishing compatible policies, procedures, and practices. Specifically, the CIPAC charter document calls for CIPAC to facilitate interaction among federal government, private sector, and state, local, territorial, and tribal entities to conduct deliberations and form consensus positions to assist the federal government in implementing security and resilience program initiatives, including conducting operational activities related to critical infrastructure security and sharing threat, vulnerability, and risk information, as well as best practices, with one another. Similarly, our work on enhancing collaboration across organizational boundaries calls for entities to, among other things, (1) identify and address needs by leveraging resources and (2) establish compatible policies, procedures, and other means to operate across agency boundaries. Given NPPD IP’s role as the DHS component responsible for leading the national effort to strengthen the security and resilience of the nation’s critical infrastructure, DHS is well positioned to facilitate collaboration across stakeholder groups—users, operators, and federal administrators—to identify opportunities to harmonize access control efforts across critical infrastructure sectors. According to NPPD officials, the CIPAC partnership structure would serve as an appropriate forum for critical infrastructure stakeholders to discuss potential harmonization efforts moving forward. However, NPPD IP officials responsible for overseeing CIPAC and ISC stated that their cross-sector partnership structures have engaged in limited efforts to explore harmonization of access control efforts because harmonization has not been raised as a key issue or urgent concern by their members. These officials also stated that issues raised when considering the user perspectives alongside the operator perspectives would not necessarily have emerged in these groups because, as of October 2016, none of the existing CIPAC partnership forums would be appropriate for users or user groups—such as contractors, workers, and others seeking access to multiple critical infrastructure facilities—to share their experiences or concerns. As of October 2016, DHS does not have a dedicated partnership structure that allows users to share their experiences in navigating federal access control efforts. Additionally, DHS officials stated that users are not specifically included in the NIPP’s Sector Partnership Model.
Moreover, NPPD IP officials from the Sector Outreach and Programs Division, who are responsible for coordinating DHS’s partnership structures, stated that government and industry stakeholders have begun initial discussions to enhance information sharing efforts, which could include leveraging information across access control efforts. Specifically, NPPD IP officials reported that during a biannual meeting in July 2016, CIPAC members discussed ways to improve information sharing efforts between government and industry stakeholders related to harmonizing access control efforts. Further, they reported that government and industry stakeholders agreed to create a CIPAC standing committee designed to identify key concerns and engage with members to propose recommendations aimed at enhancing information sharing efforts. Although this effort represents a step toward beginning the discussion of harmonizing access control efforts, DHS has not fully engaged all relevant stakeholders, specifically users, to explore whether additional opportunities exist to harmonize access control efforts across critical infrastructure sectors. Using existing partnership structures or creating new forums could help DHS more effectively fulfill its role as the facilitator of shared best practices and enhanced collaboration across critical infrastructure partners. In doing so, DHS may be better positioned to identify and implement opportunities to enhance efficiencies within and across related access control efforts. A role of SCO, according to DHS Office of Policy officials, is to serve as a department-wide policy advocate for coordination and harmonization of credentialing and screening efforts within DHS. SCO, which is located in the DHS Office of Policy, maintains roughly 30 full-time equivalent staff across different portfolio teams, such as Identity and Credentialing and Watchlisting and Vetting. SCO officials stated that while it is not the sole entity responsible for assessing and harmonizing screening processes across the department, the office provides subject matter expertise and guidance on screening and credentialing policies and practices with the aim of reducing duplicative, stand-alone DHS programs and processes. SCO works with DHS components that are responsible for overseeing screening and credentialing efforts, such as TSA and NPPD, to achieve DHS’s screening and credentialing harmonization objectives. These objectives include identifying and resolving policy issues and program challenges associated with screening and credentialing, supporting department-wide resources that service screening and credentialing efforts, integrating interdependent resources and processes across DHS programs, and representing DHS to external stakeholders. SCO officials reported that their primary activities fall into three general categories: consultant activities, investment-related decision activities, and working group participatory activities. Specifically, SCO officials stated that they assist DHS components in developing and improving credentialing and screening programs by participating in department-wide budget decisions and through departmental or component-specific working groups that help guide the development of new programs or the restructuring of existing programs.
According to officials, SCO relies on two foundational policy documents as the overarching strategic framework for promoting harmonization and instructing components on methods for improving access control programs and processes—the 2006 Credentialing Initiative Report (CIR) and the 2008 Credentialing Framework Initiative (CFI). The CIR identified common problems, challenges, and areas where DHS could improve screening and credentialing programs and processes. Examples of identified problem areas include inconsistent vetting processes for similar programs and the issuance of multiple credentials in cases where one would be sufficient. The report also identified four recommendations for addressing the aforementioned problems. As part of its efforts to address the recommendations outlined in the CIR, SCO published the CFI, an implementation strategy document designed to guide investments and improve the department’s ability to meet its mission by improving screening and credentialing processes. SCO officials stated that they have engaged with DHS components to advance screening and credentialing efficiencies over the past ten years of operation. Through internal annual accomplishment reports and in interviews, SCO provided several examples of activities it has undertaken to address each of the recommendations outlined in the CIR. Recommendation 1: Design credentials to support multiple licenses, privileges, or status. SCO led a Common Enrollment Coordinating Council (CECC) sub-team, which was tasked to identify opportunities to develop best practices in DHS’s screening and credentialing enrollment environment. Of the 18 recommendations produced by the CECC sub-team, three were approved by the Joint Requirements Council, which plans to escalate recommendations to DHS leadership for study and possible implementation. Recommendation 2: Vetting processes associated with like uses and like risks should not be duplicative. SCO partnered with NPPD and TSA to implement the CFATS Personnel Surety Program, which requires that individuals seeking access to restricted areas or critical assets within high-risk chemical facilities be vetted for ties to terrorism. According to SCO and NPPD officials, SCO worked with NPPD to ensure that CFATS vetting standards were aligned with existing DHS vetting efforts to allow the use of screening resources from TSA. Recommendation 3: Entitlement to a license, privilege, or status should be verified using electronic scanning technology. SCO officials stated they consulted with the U.S. Coast Guard to develop draft regulations pertaining to the implementation of electronic card readers at maritime facilities to more effectively validate the authenticity of TWIC cards. SCO officials stated that many maritime facilities are currently validating TWICs using visual inspection, and these regulations are designed to help reduce that practice. As we have previously reported, reliance on the visual inspection of TWICs is vulnerable to the use of counterfeit credentials to gain access. Recommendation 4: Establish a preference for ‘enroll once, use many’ environments. SCO officials stated that they consulted with TSA and the U.S. Coast Guard to ensure that certain biographic data elements collected by TSA from maritime workers, as well as the results of TSA’s terrorist screening check for the TWIC program, were available for individuals also applying for a U.S. Coast Guard-sponsored Merchant Mariner Credential (MMC).
According to SCO officials, the result of such efforts was partial reciprocity between the TWIC and MMC programs. In its early years, SCO operated under the direction of the strategic policy vision and implementation plans laid out in the 2006 CIR and the 2008 CFI; however, since then, SCO has not updated the goals and objectives outlined in the implementation plans. The 2008 CFI lists a number of structured tasks necessary to implement its recommendations, including the development of a communications timeline for stakeholder engagement and the development and periodic update of CFI implementation goals and objectives. SCO officials stated that the implementation plans are no longer relevant to SCO’s current role in the department. Moreover, in our discussions with SCO officials, they described several opportunities to harmonize screening and credentialing efforts that DHS had yet to achieve, such as the integration of information technology systems. Officials from the DHS Office of Policy, which oversees SCO operations, stated that Office of Policy goals and objectives for SCO come directly from the DHS Office of the Secretary. However, our review of office goals from fiscal years 2015 and 2016 showed that none of the Office of Policy’s goals specifically tasked SCO with actionable goals or objectives in support of the strategic policy vision outlined in the CIR and CFI. Additionally, no guidance from the Secretary’s office was issued to SCO from 2009 to 2014. SCO officials stated that their internal planning processes are largely informal rather than a systematic approach to identifying and documenting strategic goals and objectives that could help SCO management pursue the most promising opportunities to support DHS’s harmonization efforts and monitor how well its routine activities align with those goals and objectives. Standards for Internal Control in the Federal Government calls for agencies to define objectives clearly to meet their missions, strategic plans, goals, and the requirements of applicable laws and regulations. Further, the standards call for management to define objectives in specific and measurable terms so they are understood at all levels of the entity. This involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and the timeframes for achievement. Without updated goals and objectives, SCO is missing an important management control to help it ensure that it supports the best opportunities for DHS-wide screening and credentialing harmonization. Balancing the need to secure critical infrastructure while promoting a harmonized screening and credentialing process to access critical infrastructure continues to pose challenges for stakeholders—users and operators—because their interests vary and are not necessarily aligned with each other. DHS is responsible for leading the federal government’s effort to protect the nation’s critical infrastructure and has created partnership structures to support stakeholder collaboration. Therefore, it is well positioned to explore whether opportunities exist among all stakeholders, including users, to harmonize screening and credentialing processes to provide access in a timely manner. Although DHS does not have a specific partnership structure dedicated for users to share their experiences, DHS’s existing partnership structures or new forums could serve as platforms for all critical infrastructure stakeholders to learn from one another and discuss available options to leverage resources.
Using new or existing partnership structures to explore whether opportunities exist to harmonize screening and credentialing processes across critical infrastructure sectors could better position DHS to more effectively balance the need to secure critical infrastructure while promoting a harmonized screening and credentialing process. Within DHS’s Office of Policy, the Screening Coordination Office (SCO) is responsible for the coordination and harmonization of screening and credentialing efforts department-wide. Although SCO issued foundational policy documents in 2006 and 2008 outlining a strategic framework and implementation plans to harmonize DHS access control efforts, since that time SCO has not updated its goals and objectives to identify improvements needed. Goals and objectives in support of SCO’s strategic framework would better position it to pursue the highest priorities and best opportunities for DHS-wide screening and credentialing harmonization. To enhance its ability to fulfill its role as the facilitator of cross-sector collaboration and best-practices sharing, we recommend that the Secretary of Homeland Security direct the Assistant Secretary of Infrastructure Protection, Office of Infrastructure Protection, to take the following action: explore with key critical infrastructure partners whether and what opportunities exist to harmonize federally-administered screening and credentialing access control efforts across critical infrastructure sectors. In addition, to help ensure that SCO uses its time and resources to pursue the most efficient and effective screening and credentialing harmonization goals on behalf of the department, we recommend that the Secretary of Homeland Security direct the Deputy Assistant Secretary for Screening Coordination, Office of Policy, to take the following action: establish goals and objectives to support its broader strategic framework for harmonization. We provided a draft of this report to DHS, NRC, and DOD for their review and comment. DHS and NRC provided written comments, which are reproduced in appendixes II and III. In its comments, DHS concurred with each recommendation and described actions underway or planned to address them, including estimated timeframes for completion. If fully implemented, these actions should address the intent of the recommendations and better position DHS to balance the need to secure critical infrastructure while promoting a harmonized screening and credentialing process to access critical infrastructure. For example, regarding exploring whether and what opportunities exist to harmonize federally-administered screening and credentialing access control efforts across critical infrastructure sectors, DHS noted that it is working to harmonize access control efforts across critical infrastructure as much as practical and remains committed to working toward that end with interagency partners. Specific actions identified to be completed around April 2017 include considering drafting a plan that will include an analysis of how to further explore opportunities to harmonize federally-administered screening and credentialing access control efforts across critical infrastructure sectors. More specifically, the Interagency Security Committee will request that its Steering Subcommittee discuss potential avenues for addressing any gaps and areas of further collaboration related to screening and credentialing access control efforts of federal facilities.
Regarding establishing goals and objectives to support the Screening Coordination Office’s (SCO) broader strategic framework for harmonization, DHS identified actions to direct SCO to establish updated goals and objectives to support the broader strategic framework for more efficient and effective vetting. SCO will provide its goals and objectives to DHS components once they are finalized, which is expected by June 2017. DHS and DOD also provided technical comments, which were incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Homeland Security and Defense; and the Chairman of the Nuclear Regulatory Commission. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in Appendix IV. To address the research question related to describing key characteristics of selected federal access control efforts, we distributed a standard set of questions to three federal agencies—the Department of Homeland Security (DHS), the Nuclear Regulatory Commission (NRC), and the Department of Defense (DOD). Our questions reflected the screening and credentialing life cycle stages reported by DHS’s Screening Coordination Office, including Registration and Enrollment, Vetting, Issuance, Expiration and Revocation, Redress, and Waiver. Tables 3 through 8 below summarize the aggregated responses received from the three agencies to our questions. In addition to the contact named above, Kathryn Godfrey (Assistant Director), Amber Edwards (Analyst-in-Charge), Josh Diosomito, Adrian Pavia, Vijay Barnabas, Tracey King, Richard Hung, Lorraine Ettaro, Dominick Dale, Marc Schwartz, and Joseph Kirschbaum made key contributions to this report. | Critical infrastructure protection access controls limit access to those with a legitimate need. DHS is the lead federal agency for coordinating critical infrastructure protection efforts with other federal agencies and partnering with nonfederal stakeholders. The National Defense Authorization Act of 2016 included a provision for GAO to review critical infrastructure access control efforts. This report examines (1) key characteristics of selected federally-administered critical infrastructure access control efforts and factors that have an impact on stakeholders' use of them; (2) the extent to which DHS has taken actions to harmonize efforts across critical infrastructure sectors; and (3) the extent to which DHS's SCO has taken actions to harmonize access control efforts across DHS. GAO examined six federally-administered access control efforts across three federal departments. Efforts were selected, among other things, to represent a range of efforts that groups of users—such as truck drivers—may encounter while accessing multiple facilities. GAO interviewed DHS, NRC, and DOD officials and users and operators affected by the efforts and reviewed relevant documents. The six selected federally-administered critical infrastructure access control efforts GAO reviewed generally followed similar screening and credentialing processes. Each of these efforts applies to a different type of infrastructure.
For example, the Transportation Security Administration's Transportation Worker Identification Credential controls access to ports, the Department of Defense (DOD) Common Access Card controls access to military installations, and the Nuclear Regulatory Commission (NRC) regulates access to commercial nuclear power plants. GAO found that selected characteristics, such as whether a federal agency or another party has responsibility for vetting or what types of prior criminal offenses might disqualify applicants, varied across these access control efforts. In addition, these access control efforts generally affect two groups of stakeholders—users and operators—differently, depending on their specific roles and interests. Users are individuals who require access to critical infrastructure as an essential function of their job, while operators own or manage facilities, such as airports and chemical facilities. Regardless of infrastructure type, users and operators that GAO interviewed reported some common factors that can present challenges in their use of these access controls. For example, both users and operators reported that applicants requiring access to similar types of infrastructure or facilities may be required to submit the same background information multiple times, which can be costly and inefficient. The Department of Homeland Security (DHS) relies on partnership models to support collaboration efforts among federal and nonfederal critical infrastructure stakeholders, but has not taken actions to harmonize federally-administered access control efforts across critical infrastructure sectors. According to DHS officials, these partnerships have not explored harmonization of access control efforts across sectors, because this has not been raised as a key issue by the members and because DHS does not have a dedicated forum that would engage user groups in exploring these issues and identifying potential solutions. DHS's partnership models offer a mechanism by which DHS and its partners can explore the challenges users and operators may encounter and determine opportunities for harmonizing the screening and credentialing processes to address these challenges. DHS's Screening Coordination Office (SCO) has taken actions to support harmonization across DHS access control efforts, but it has not updated its goals and objectives to help guide progress toward the department's broader strategic framework for harmonization. SCO's strategic framework is based on two screening and credentialing policy documents—the 2006 Credentialing Initiative Report and 2008 Credentialing Framework Initiative. According to SCO officials, they continue to rely on these documents to provide their office with a high-level strategic approach, but GAO found that the goals and objectives outlined in the two documents are no longer current or relevant. In recent years, SCO has helped the department make progress toward its harmonization efforts by responding to and assisting with department-wide initiatives and DHS component needs, such as developing new programs or restructuring existing ones. However, without updated goals and objectives, SCO cannot ensure that it is best supporting DHS-wide screening and credentialing harmonization efforts. GAO recommends that (1) DHS work with partners to identify any opportunities to harmonize access control efforts across critical infrastructure sectors and (2) SCO establish goals and objectives to support its broader strategic framework for harmonization.
DHS concurred with both recommendations. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The federal Food Stamp Program is intended to help low-income individuals and families obtain a more nutritious diet by supplementing their income with benefits to purchase eligible foods (such as meat, dairy products, fruits, and vegetables, but not items such as soap, tobacco, or alcohol) at authorized food retailers. FNS pays the full cost of food stamp benefits and shares the states' administrative costs—with FNS usually paying slightly less than 50 percent—and is responsible for promulgating program regulations and ensuring that state officials administer the program in compliance with program rules. The states administer the program by determining whether households meet the program's eligibility requirements, calculating monthly benefits for qualified households, and issuing benefits to participants through an electronic benefits transfer system. In fiscal year 2005, the Food Stamp Program issued almost $28.6 billion in benefits to about 25.7 million individuals per month, and the maximum monthly food stamp benefit for a household of four living in the continental United States in fiscal year 2007 was $518. As shown in figure 1, program participation decreased during the late 1990s, partly due to an improved economy, but rose again from 2000 to 2005. The number of food stamp recipients follows the trend in the number of people living at or below the federal poverty level. In addition to the economic growth in the late 1990s, another factor contributing to the decrease in the number of participants from 1996 to 2001 was the passage of the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA), which added work requirements and time limits to cash assistance under the Temporary Assistance for Needy Families (TANF) program and made certain groups ineligible to receive food stamp benefits. In some cases, this caused participants to believe they were no longer eligible for food stamps when their TANF benefits ended. Since 2000, that downward trend has reversed, and experts believe that the downturn in the U.S. economy, coupled with changes in the Food Stamp Program's rules and administration, has led to an increase in the number of food stamp participants. Eligibility for participation in the Food Stamp Program is based primarily on a household's income and assets. To determine a household's eligibility, a caseworker must first determine the household's gross income, which cannot exceed 130 percent of the poverty level for that year as determined by the Department of Health and Human Services (about $1,799 per month for a family of three living in the continental United States in fiscal year 2007), and net income, which cannot exceed 100 percent of the poverty level. Net income is determined by deducting from gross income a portion of expenses such as dependent care costs, medical expenses for elderly individuals, utilities costs, and housing expenses (a simplified sketch of these income tests appears below). The application process for the Food Stamp Program requires households to complete and submit an application to a local assistance office, participate in an interview, and submit documentation to verify household circumstances (see table 1). Applicants may need to make more than one visit to the assistance office to complete the application process. After eligibility is established, households are certified eligible for food stamps for periods ranging from 1 to 24 months, depending on household circumstances and state policy. While households are receiving benefits, they must report changes in household circumstances that may affect eligibility or benefit amounts. 
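To make the two income tests flagged above concrete, here is a minimal sketch in Python. It is an illustration, not program code used by FNS or any state: the poverty-level figure of roughly $1,383 per month for a family of three is an approximation consistent with the roughly $1,799 gross-income limit cited above, and the deduction handling is simplified (actual rules involve standard, earned income, shelter, and other deductions not modeled here).

```python
# Illustrative sketch of the FY2007 income tests described above.
# Figures and the deduction model are simplified for illustration.

MONTHLY_POVERTY_LEVEL = 1383  # approx. poverty level, family of 3, FY2007 ($/month)
GROSS_LIMIT_PCT = 1.30        # gross income may not exceed 130 percent of poverty
NET_LIMIT_PCT = 1.00          # net income may not exceed 100 percent of poverty

def passes_income_tests(gross_income: float, deductions: float,
                        poverty_level: float = MONTHLY_POVERTY_LEVEL) -> bool:
    """Apply the gross and net monthly income tests to a household."""
    # Net income = gross income minus allowable deductions (dependent care,
    # medical expenses for elderly members, utilities, housing, etc.).
    net_income = gross_income - deductions
    return (gross_income <= GROSS_LIMIT_PCT * poverty_level and
            net_income <= NET_LIMIT_PCT * poverty_level)

# A household with $1,700 gross monthly income and $400 in allowable
# deductions passes both tests: 1,700 <= ~1,799 and 1,300 <= ~1,383.
print(passes_income_tests(gross_income=1700, deductions=400))  # True
```

In practice, a caseworker applies these tests only after verifying each income and expense figure against the documentation submitted during the application process described above.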
States may choose to require households to report changes within 10 days of occurrence (incident reporting) or at specified intervals (periodic reporting). States also have the option to adopt a simplified system, which further reduces the burden of periodic reporting by requiring households to report changes that happen during a certification period only when their income rises above 130 percent of the federal poverty level. Once the certification period ends, households must reapply for benefits, at which time eligibility and benefit levels are redetermined. The recertification process is similar to the application process. Households can be denied benefits or have their benefits end at any point during the process if they are determined ineligible under program rules or for procedural reasons, such as missing a scheduled interview or failing to provide the required documentation. While applying for and maintaining food stamp benefits has traditionally involved visiting a local assistance office, states have the flexibility to give households alternatives to visiting the office, such as using the mail, the telephone, and on-line services to complete the certification and recertification process. Alternative methods may be used to support other programs, such as Medicaid or TANF, since some food stamp participants receive benefits from multiple programs. Figure 2 illustrates a traditional office-based system and how states can use a number of alternative methods to determine applicants' eligibility without requiring them to visit an assistance office. FNS and the states share responsibility for implementing a quality control system used to measure the accuracy of caseworker decisions concerning the amount of food stamp benefits households are eligible to receive and decisions to deny or end benefits. The food stamp payment error rate is calculated by FNS for the entire program, as well as for every state, by adding overpayments (including payments higher than the amounts households are eligible for or payments to those who are not eligible for any benefit) and underpayments (payments lower than the amounts households are eligible for). The national payment error rate has declined by about 40 percent between 1999 and 2005, from 9.86 percent to a record low of 5.84 percent. Food Stamp Program (FSP) payment errors are caused primarily by caseworkers, usually when they fail to act promptly on new information, and by participants when they fail to report needed information. Another type of error measured by FNS is the negative error rate, defined as the rate of cases denied, suspended, or terminated incorrectly. An example of incorrectly denying a case would be if a caseworker denied a household participation in the program because of excess income, but there was a calculation error and the household was actually eligible for benefits (a simplified sketch of both error measures appears below). FNS also monitors individual fraud and retailer trafficking of food stamp benefits. According to our survey, almost all states allow households to submit applications, report changes, and submit recertifications through the mail, and 26 states have implemented or are developing systems to allow households to perform these tasks on-line. Almost half of the states are using or developing call centers, and states are also using flexibility authorized by FNS to increase use of the telephone as an alternative to visiting the local assistance office. 
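Returning to the two error measures flagged above, the sketch below is one simplified reading of them: a dollar-weighted payment error rate (overpayment plus underpayment dollars as a share of total benefits issued) and a case-weighted negative error rate. FNS's actual quality control methodology, based on sampled case reviews, is more involved, and the input figures here are illustrative only.

```python
# Simplified sketch of the two error measures described above; the inputs
# are illustrative, not actual FNS quality control data.

def payment_error_rate(overpaid: float, underpaid: float, issued: float) -> float:
    """Overpayment and underpayment dollars as a percentage of benefits issued."""
    return 100 * (overpaid + underpaid) / issued

def negative_error_rate(incorrect_actions: int, total_negative_actions: int) -> float:
    """Percentage of denials, suspensions, or terminations made incorrectly."""
    return 100 * incorrect_actions / total_negative_actions

# Dollar figures in millions, chosen so the result matches the record-low
# 5.84 percent national rate against the roughly $28.6 billion issued.
print(round(payment_error_rate(overpaid=1_200, underpaid=470, issued=28_600), 2))  # 5.84

# A hypothetical state that incorrectly denied 3 of 60 sampled negative actions.
print(negative_error_rate(incorrect_actions=3, total_negative_actions=60))  # 5.0
```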
States have taken a variety of actions to help households use on-line services and call centers, such as sending informational mailings, holding community meetings, and using community partners to assist households. Many states are allowing households to apply for food stamp benefits, report changes in household circumstances, and complete recertification through the mail and on-line. Mail-In Procedures. Results of our survey show that households can submit applications through the mail in all states, report changes through the mail in all but 1 state, and submit recertifications through the mail in 46 states. For example, Washington state officials told us that the recertification process involves mailing a recertification application package to households that they can mail back without visiting a local assistance office. On-line Services. All states we surveyed reported having a food stamp application available for households to download from a state website, as required by federal law, and 26 states (51 percent) have implemented or are developing Web-based systems in which households can submit initial applications, report changes, or submit recertifications on-line (see fig. 3). Most on-line applications were made available statewide and implemented within the last 3 years, and states developing on-line services plan to implement them within the next 2 years. All of the 14 states that reported currently providing on-line services allow households to submit initial food stamp applications on-line, but only 6 states allow households to report changes and 5 states allow households to complete recertification on-line. Of the 14 states that reported using on-line applications, 2 reported they were only available in certain areas of the state. Only two states (Florida and Kansas) reported in our survey that the state closed program offices or reduced staff as a result of implementing on-line services. Many states are using call centers, telephone interviews, or other technologies to help households access food stamp benefits or information without visiting a local assistance office. Call Centers. Nineteen states (37 percent) have made call centers available to households, and an additional 4 states (8 percent) have begun development of call centers that will be available to households in 2007 (see fig. 4). Households have been able to use call centers in seven states for more than 3 years. Of the 19 states using call centers, 10 reported that call centers were only available in certain areas of the state. Only two states (Texas and Idaho) reported using private contractors to operate the call centers, but Texas announced in March 2007 that it was terminating its agreement with the private contractor (see fig. 10 for more details). FNS officials told us that the Idaho private call center provides general food stamp program information to callers, while inquiries about specific cases are transferred to state caseworkers. Indiana reported in our survey that the state plans to pilot call centers in certain areas of the state in August 2007 using a private contractor and complete a statewide transition in March 2008. Only two states (Florida and Arizona) reported in our survey that the state closed offices or reduced staff as a result of implementing call centers. 
Most states with call centers reported that households can use them to report changes in household circumstances, request a food stamp application and receive assistance filling it out, receive information about their case, or receive referrals to other programs. Only four states reported using their call centers to conduct telephone interviews. For example, local officials in Washington told us that households use their call center primarily to request information, report changes in household circumstances, and request an interview. Telephone interviews are conducted by caseworkers in the local assistance office. Telephone Interviews. Many states are using the flexibility provided by FNS to increase the use of the telephone as an alternative to households visiting the local assistance office. For example, FNS has approved administrative waivers for 20 states that allow them to substitute a telephone interview for the face-to-face interview for all households at recertification without documenting that visiting the assistance office would be a hardship for the household. In addition to making it easier for households, this flexibility can reduce the administrative burden on the state to document hardship. FNS also allows certain states implementing demonstration projects to waive the interview requirement altogether for certain households. States we reviewed varied in terms of the proportion of interviews conducted over the phone. For example, Florida state and local officials estimated that about 90 percent of the interviews conducted in the state are completed over the telephone. Washington state officials estimated that 10 percent of application interviews and 30 percent of recertification interviews are conducted by phone. Table 2 describes the types of flexibility available to states and how many are taking advantage of each. Other Technologies. Some states reported implementing other technologies that support program access. Specifically, according to our survey, 11 states (21 percent) have implemented an Integrated Voice Response (IVR) system, a telephone system that provides automated information, such as case status or the benefit amount, to callers but does not direct the caller to a live person. In addition, 11 states (21 percent) are using document management/imaging systems that allow case records to be maintained electronically rather than in paper files. All five of the states we reviewed have implemented, at least in certain areas of the state, mail-in procedures, on-line services, call centers, a waiver of the face-to-face interview at recertification, and document management/imaging systems. Three of the five states (Florida, Texas, and Washington) have implemented an integrated voice response system, and two (Florida and Utah) have implemented a waiver of the face-to-face interview at initial application. States have taken a variety of actions to help households use on-line services and call centers, such as sending informational mailings, holding community meetings, and employing call center staff who speak languages other than English, as shown in figures 5 and 6. States are using community-based organizations, such as food banks, to help households use alternative methods. All states implementing on-line services (14) and about half of states with call centers (10 of 19) use community partners to provide direct assistance to households. 
Among the states we reviewed, four provide grants to community-based organizations to inform households about the program and help them complete the application process. For example, Florida closed a third of its local assistance offices and has developed a network of community partners across the state to help households access food stamps. Florida state officials said that 86 percent of the community partners offer at least telephone and on-line access for completing and submitting food stamp applications. Community partner representatives in Washington, Texas, and Pennsylvania said that they sometimes call the call center with the household or on their behalf to resolve issues. Pennsylvania provides grants to community partners to help clients use the state's on-line services. In addition to the assistance provided by community-based organizations, H&R Block, a private tax preparation firm, is piloting a project with the state of Kansas in which tax preparers who see that a household's financial situation may qualify it for food stamp benefits can electronically submit a food stamp application at no extra charge to the household. Insufficient information is available to determine the results of using alternative methods to access the Food Stamp Program, but state and federal officials report that alternative methods are helping some households. Few evaluations have been conducted that identify the effect of alternative methods on food stamp program access, decision accuracy, or administrative costs. Although states monitor the implementation of alternative methods, isolating the effects of specific methods is difficult, in part because states typically have implemented a combination of methods over time. Despite the limited information on the effectiveness of alternative methods, federal and state officials believe that these methods can help many households by making it easier for them to complete the application or recertification process. However, technology and staffing challenges can hinder the use of these methods. Few federal or state evaluations have been conducted to identify how using alternative methods, such as on-line applications or call centers, affects access to the Food Stamp Program, the accuracy of caseworker decisions about eligibility and benefit amounts, or administrative costs. Few evaluations have been conducted in part because evaluating the effectiveness of alternative methods is challenging, given that limited data are available, states are using a combination of methods, and studies can be costly to conduct. FNS and the Economic Research Service (ERS) have funded studies related to improving Food Stamp Program access, but none of these previous studies provide a conclusive assessment of the effectiveness of alternative methods and the factors that contribute to their success (see app. I for a list of the studies we selected and reviewed). Although these studies aimed to evaluate local office practices, grants, and demonstration projects, the methodological limitations of this research prevent assessments about the effectiveness of these efforts. An evaluation of the Elderly Nutrition Demonstration projects used a pre-post comparison group design to estimate the impact of the projects and found that food stamp participation among the elderly can be increased. Two of the projects evaluated focused on making the application process easier by providing application assistance and simplifying the process, in part by waiving the interview requirement. 
However, one drawback of this study is that its findings are based on a small number of demonstrations, which limits their generalizability. Two related FNS-funded evaluations are also under way, but it is unlikely these studies will identify the effects of using alternative methods. An implementation study of Florida's efforts to modernize its system using call centers and on-line services involves a descriptive case study to be published in late summer 2007, incorporating both qualitative and quantitative data. The objectives of the study are to describe changes to food stamp policies and procedures that have been made in support of modernization; identify how technology is used to support the range of food stamp eligibility determination and case management functions; and describe the experiences of food stamp participants, eligible non-participants, state food stamp staff, vendors, and community partners. This study will describe Florida's Food Stamp Program performance over time in comparison to the nation, other states in the region, and other large states. Performance data that will be reviewed include program participation in general and by subgroup, timeliness of application processing, payment error rates, and administrative costs. However, the study will not isolate the effect of the modernization efforts on program performance. A national study of state efforts to enhance food stamp certification and modernize the food stamp program involves a state survey and case studies of 14 states and will result in a site visit report in late summer 2007, a comprehensive report in March 2009, and a public-use database systematically describing modernization efforts across all the states in May 2009. The national study will focus on four types of modernization efforts: policy changes to modernize FSP application, case management, and recertification procedures; reengineering of administrative functions; increased or enhanced use of technology; and partnering arrangements with businesses and nonprofit organizations. The goals of the study include documenting outcomes associated with food stamp modernization and examining the effect of these modernization efforts on four types of outcomes: program access, administrative cost, program integrity, and customer service. This study will compare performance data from the case study states with data from similar states and the nation as a whole; however, this analysis will not determine whether certain modernization efforts caused changes in performance. USDA has also awarded $5 million in fiscal year 2006 to five grantees in Virginia, California, Georgia, and Alabama to help increase access to the program, but there is currently no plan to publish an evaluation of the outcomes of these projects. The participation grants focus on efforts to simplify the application process and eligibility systems, and each grantee plans to implement strategies to improve customer service by allowing Web-based applications and developing application sites outside the traditional social services office. Grantees are required to submit quarterly progress reports and final reports including a description of project activities and implementation issues. Although few evaluations have been conducted, FNS monitors state and local offices and tracks state implementation of alternative methods to improve program access. 
FNS also collects and monitors data from states, such as the number of participants, amount of benefits issued, participation rates overall and by subgroup, timeliness of application processing, payment errors, negative errors, and administrative costs. FNS regional offices conduct program access reviews of selected local offices in all states to determine whether state and/or local policies and procedures served to discourage households from applying for food stamps or whether local offices had adopted practices to improve customer service. FNS also monitors major changes to food stamp systems using a process in which FNS officials review and approve plans submitted by states related to system development and implementation, including major upgrades. States like Texas, Florida, and Indiana that have implemented major changes to their food stamp systems, such as moving from a local assistance office service delivery model to call centers and on-line services, have worked with FNS through this process. Figure 7 describes FNS's monitoring of Indiana's plan to implement alternative access methods. FNS has also encouraged states to share information with one another about their efforts to increase access, but states reported needing additional opportunities to do so. FNS has funded national and regional conferences and travel by state officials to visit other states to learn about their practices, and has provided states a guide to promising practices for improving program access. The guide contains information about the goal of each practice, the number of places where the practice is in use, and contact information for a person in these offices. However, this guide has not been updated since 2002 and, for the most part, does not include any evidence that these efforts were successful or any lessons that were learned from these or other efforts. In 2004, in response to recommendations from our prior report, FNS compiled and posted 19 practices from 11 states aimed at improving access. FNS also has a form available on its website where states can submit promising practices to improve access, but to date, practices from this effort have not been published. In our survey, 13 states (about 25 percent) reported needing additional conferences or meetings with other states to share information. States also report monitoring use of alternative methods in the Food Stamp Program, but have not conducted evaluations of their effectiveness. In our survey, states reported monitoring several aspects of the performance of on-line services. As shown in figure 8, states most commonly used the number of applications submitted, the number of applications terminated before completion, and customer satisfaction to monitor the performance of on-line services. For example, Pennsylvania state officials monitor the performance of their on-line system and meet regularly with community partners that help households submit applications for benefits to obtain feedback on how they can improve the system. Florida state officials told us they use responses to on-line feedback surveys submitted at the end of the on-line application to assess customer satisfaction with the state's on-line services. States also reported in our survey monitoring several aspects of the performance of their call centers. 
As shown in figure 9, most states with call centers reported monitoring the volume of transactions and calls to the center, customer satisfaction, the rate of abandoned calls, and the length of time callers are on hold before speaking with a caseworker. For example, Utah officials monitor several measures and added staff to the call center after observing increased hold times when they were implementing the call center to serve the Salt Lake City area. In addition, Washington state officials told us that they monitor call centers on an hourly basis, allowing call center managers to quickly increase the number of staff answering phones as call volumes increase. Despite these monitoring efforts, no states reported conducting an evaluation of the effectiveness of on-line services in our survey, and only one state reported conducting such an evaluation of its call centers. The report Illinois provided on its call center described customer and worker feedback on the performance of the call center, but did not provide a conclusive assessment of its effectiveness. Seven states implementing Combined Application Projects (CAP) have submitted reports to FNS that include data comparing the number of participants in the CAP project with the number when the project began, but these reports do not use methods to isolate the effect of the project or determine whether participation by Supplemental Security Income (SSI) recipients would have increased in the absence of the project. Two of the five states we reviewed said they planned to conduct reviews of their systems. For example, Washington is conducting an internal quality improvement review of its call centers. It will compare call center operations with industry best practices and promising new technologies, and will identify the costs, services offered, and best practices used by the call centers. Few evaluations have been conducted, in part because evaluating the effectiveness of alternative methods is challenging. For example, states are limited in their ability to determine whether certain groups of households are able to use alternative methods because few states collect demographic information on households that use their on-line services and call centers. Only six states reported in our survey that they collect demographic information on the households that use on-line services, and four states reported collecting demographic information on the households that use call centers. In addition, although FNS is requiring states with waivers of the face-to-face interview to track the payment accuracy of cases covered by these waivers, FNS has not yet assessed the effects of these methods on decision accuracy because it has not collected enough years of data to conduct reliable analyses of trends. Further, evaluations that isolate the effect of specific methods can be challenging because states implement methods at different times and are using a combination of methods. For example, Washington state implemented call centers in 2000, an on-line application and CAP in 2001, and document imaging and a waiver of the face-to-face interview at recertification in 2003. Sophisticated methodologies often are required to isolate the effects of certain practices or technologies. These studies can be costly to conduct because the data collection and analysis can take years to complete. For example, the two studies that we reviewed that aimed to isolate the effects of specific projects each cost over $1 million and were conducted over more than 3 years. 
Although evaluating the effects of alternative methods is challenging, FNS is collecting data from states through the waiver process that could be analyzed, and previous ERS-funded studies have used methodologies that enable researchers to identify the effect of certain projects or practices on program access. Despite the limited information on the effects of alternative methods, federal and state officials report that alternative methods, such as the availability of telephone interviews, can help many types of households by making it easier for them to complete the food stamp application or recertification process. Some state and local officials and community partners noted, however, that certain types of households may have difficulty using some methods. Moreover, some officials also described how technology and staffing challenges can hinder the use of these methods. According to federal and state officials we interviewed, alternative methods can help households in several ways, such as increasing flexibility, making it easier to receive case information or report changes to household circumstances, or increasing the efficiency of application processing. In addition, community partner representatives from some states we reviewed said that the availability of telephone interviews helps reduce the stigma of applying for food stamp benefits caused by visiting an assistance office. Increased flexibility. Federal officials from the seven FNS regional offices said that alternative methods help households by reducing the number of visits a household makes to an assistance office or by providing additional ways to comply with program requirements. Moreover, all of the states in our survey that currently have on-line services, and more than half of the states that currently operate call centers, cited reducing the number of visits an individual must make to an office as a reason for implementing the alternative methods. For example, in Florida a household may submit an application or recertification through any one of the following access points: on-line, mail, fax, a community partner site, or in person at the local assistance office. Additionally, in certain areas of Texas, it is possible for households to apply for food stamps without ever visiting a local assistance office because the state has made available phone interviews and on-line services. Reducing the number of required visits can be helpful for all households, according to state officials or community partner representatives in two of the states we reviewed. Easier access to case information and ability to report changes. According to officials in the five states we reviewed, alternative methods, such as call centers, automated voice response systems, or electronic case records, make it easier for households to access information about their benefits and report changes to household circumstances. For example, in Washington, a household may call the automated voice response system 24 hours a day, 7 days a week to immediately access case information, such as appointment times or whether their application has been received or is being processed. If the household has additional questions, they can call the call center, where a call center agent can view their electronic case record and provide information on the status of their application, make decisions based on changes in household circumstances reported to them, inform them of what verification documents are needed or have been received, or perform other services. 
Increased efficiency. State or local officials from four of the states we reviewed said that implementation of document management/imaging systems improves application processing times, while local officials in two of the states said that call centers help caseworkers complete tasks more quickly. Furthermore, about half of the states in our survey that have call centers reported that increasing the timeliness of application processing and reducing administrative costs were reasons for implementing them. State officials in Florida said that the document management/imaging system allows a caseworker to retrieve an electronic case record in seconds, compared with the up to 24 hours it previously took to retrieve paper case files, allowing caseworkers to make eligibility decisions on a case more quickly. Additionally, a call center agent can process a change in household circumstances instantly while on the phone. Caseworkers in Pennsylvania said that implementation of a change reporting call center has reduced the number of calls to caseworkers at the local assistance office, which allows them to focus on interviewing households and processing applications more quickly. Officials from four states we reviewed also said that use of a document management/imaging system has resulted in fewer lost documents, which can reduce the burden on households of having to resubmit information. According to some of the state officials and community partners we interviewed, the availability of alternative methods can be especially beneficial for working families or the elderly because it reduces barriers related to transportation, child care, or work responsibilities. For example, state officials in Florida explained that a working individual can complete a phone interview during a lunch break without taking time off of work to wait in line at the assistance office. In addition, state officials from three of the states we reviewed that have implemented CAP projects told us that they had experienced an increase in participation among SSI recipients, and FNS officials and officials from two states said that households benefited from the simplified application process. In addition, state officials in Florida said that on-line services help elderly households that have designated representatives to complete the application on their behalf. For example, an elderly individual's adult child who is the appointed designated representative but lives out of state can apply and recertify for food stamp benefits for the parent without traveling to Florida. However, some state and local officials and community partners we interviewed said certain types of households may have difficulty using certain alternative methods. For example, community partner representatives in two states that we reviewed said that those with limited English proficiency, the elderly, immigrants, or those with mental disabilities may have difficulty using on-line applications. Local officials from Philadelphia said that the elderly and households with very low incomes may have trouble accessing computers to use on-line services and may not have someone helping them. A community partner in Florida told us that sometimes the elderly, illiterate, or those with limited English proficiency need a staff person to help them complete the on-line application. 
In addition, those with limited English proficiency, the elderly, or those with mental disabilities may have difficulty navigating the call center phone system, according to officials from two states and community partners from another state that we reviewed. A community partner representative in Texas said that sometimes he calls the call center on behalf of the applicant because a household may have experienced difficulty or frustration in navigating the phone system. Although officials told us that alternative methods are helpful to many households, challenges from inadequate technology or staffing may limit their advantages. For example, state officials from Texas explained that on-line applications without electronic signature capability have limited benefit because households are still required to submit an actual signature through mail, fax, or in person after completing the on-line application. Texas state officials and community partner representatives told us that the lack of this capability limited the application's use and benefit to households. By contrast, Florida's application has electronic signature capability, and Florida officials reported that, as of December 2006, about 93 percent of their applications are submitted on-line. Call centers that do not have access to electronic records may not be as effective at answering callers' questions. Officials from Washington state and federal officials from an FNS regional office view the use of a document management/imaging system as a vital part of the call center system. Florida advocates said that households have received wrong information from call center agents and attribute the complaints in part to call center agents not having access to real-time electronic case records. Florida recently expanded its document imaging system statewide, which officials believe will help address these concerns. Further, while four of the five states we reviewed implemented alternative methods in part to better manage increasing numbers of participants with reduced numbers of staff, the staffing challenges certain states experienced also limited the advantages of alternative methods. For example, inadequate numbers of staff or unskilled call center staff may reduce the level of service provided and limit the advantages to households of having a call center available to them. Texas and Florida have experienced significant staff reductions at a time of increased participation, which has affected implementation of alternative methods (see figs. 10 and 11). While some states face challenges implementing alternative methods, Utah state officials said that they have successful call centers because they have implemented technology incrementally over time and because they use state caseworkers experienced in program rules. Utah state officials also reported having relatively low caseloads (180 per worker) compared with Texas (815 per worker in 2005). To maintain program integrity while implementing alternative methods for applying and recertifying for food stamps, officials from the states we reviewed reported using a variety of strategies, some of which were in place long before implementation of the alternative access methods. Some states used finger imaging, electronic signatures, and special verification techniques to validate the identity of households using call centers or on-line services. 
In addition, states use databases to verify information provided by households and to follow up on discrepancies between information reported by the household and information obtained from other sources. Officials in the five states we reviewed did not believe that the use of alternative methods had increased fraud in the program. Further, despite concern that a lack of face-to-face interaction with caseworkers would lead to more households being denied benefits for procedural reasons, such as missing a scheduled interview, our limited analysis indicated no considerable fluctuations in the rate of procedural denials, and officials from the states we reviewed reported taking actions to prevent them. Some states have taken several actions to prevent improper food stamp payments and fraud while implementing alternative methods. Nationally, states have systems in place to protect program integrity, and the states we reviewed described how they prevent improper payments and fraud as they implement alternative access methods. Finger imaging. Nationwide, four states currently use finger imaging of food stamp applicants to prevent households from applying more than once for benefits. FNS officials commented that the agency has not concluded that finger imaging enhances program integrity and that it may have a negative effect on program access by deterring certain households from applying. Electronic signatures. FNS reported in October 2006 that nine states use electronic signatures to validate the identity of on-line users of their systems. For example, Florida's on-line application asks applicants to click a button signifying that they are signing the application. Of the states we reviewed, Pennsylvania, Florida, and Washington have on-line services with electronic signatures. In-depth interview for high-risk cases. In Florida, a case that is considered to have a greater potential for error or fraud is flagged as a "red track" case, and it receives an in-depth interview to more fully explore eligibility factors. FNS officials commented that Florida uses an abbreviated interview with most households and that their in-depth interview for red track cases may be equivalent to the standard interview process used in other states. Special training for call center agents. Call center agents in the five states we reviewed are trained to verify callers' identities by asking for specific personal information available in the file or in the states' records. Pennsylvania has developed specialized interview training, including a video, for eligibility workers on conducting telephone interviews of households applying or recertifying for the Food Stamp Program. One element of the training is how to detect misinformation being provided by a household. For example, if records indicate that a household member is currently incarcerated and benefits are being claimed for that person, call center agents are trained to probe for additional information. Similarly, Utah trains telephone interviewers to request more information if needed to clarify discrepancies in the case, such as a household reporting rent payments too high to be covered by the household's reported income. Data matching. States have used data matching systems for many years, and all five states we reviewed used software either developed by the state or obtained through a third-party vendor to help with verification of household circumstances. For example, data matching software can match state food stamp caseloads against wage reporting systems and other databases to identify unreported household income and assets. 
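As a rough illustration of this kind of automated matching, and not a depiction of any particular state's system, the sketch below compares income reported on food stamp cases against a stand-in wage-record database and generates notices for caseworker follow-up; the case identifiers, field names, and tolerance threshold are all hypothetical.

```python
# Hypothetical sketch of automated data matching: flag cases whose reported
# income conflicts with wage records so a caseworker can follow up. All
# identifiers, field names, and the tolerance are invented for illustration.

from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    reported_monthly_income: float

# Stand-in for a state wage-reporting database keyed by case ID.
WAGE_RECORDS = {"A-101": 950.0, "A-102": 2400.0}

def discrepancy_notices(cases, tolerance=100.0):
    """Return a caseworker notice for each case whose reported income
    differs from wage-record income by more than the tolerance."""
    notices = []
    for case in cases:
        wages = WAGE_RECORDS.get(case.case_id)
        if wages is not None and abs(wages - case.reported_monthly_income) > tolerance:
            notices.append(
                f"Case {case.case_id}: reported ${case.reported_monthly_income:,.0f}/mo "
                f"but wage records show ${wages:,.0f}/mo; review required."
            )
    return notices

caseload = [Case("A-101", 900.0), Case("A-102", 1200.0)]
print(discrepancy_notices(caseload))  # flags A-102 only
```

A production system would of course match on richer identifiers, consult multiple state and federal databases, and feed notices into the case management workflow, as the Utah, Washington, and Texas examples that follow suggest.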
Utah and Washington have developed software that automatically compares information provided by applicants and recipients with information contained in state databases, such as income and employment information. State officials told us that using this software greatly reduces the burden on caseworkers, who would otherwise have to search multiple databases one at a time. In addition to requiring caseworkers to access state and federal data sources to verify information, Texas contracts with a private data vendor to obtain financial and other background information on food stamp applicants and recipients. After a household has started receiving benefits, states conduct additional data matching, and their systems generate a notice to the caseworker if there is a conflict between what the household reported and information obtained from another source. The information in these notices is investigated to ensure that recipients receive the proper level of benefits. Finally, about half of all states participate in the voluntary quarterly matching of their food stamp rolls with those of other states to detect individuals receiving food stamp benefits in more than one state at a time. Food stamp officials in four of the states we reviewed said that they did not believe the use of alternative methods has increased the frequency of fraud and abuse in the program; officials in one state were unsure and were collecting data to help determine whether the frequency of fraud had increased. Texas caseworkers, for example, told us they did not think telephone interviews increased fraud because they believed the verification conducted by caseworkers and the state's data matching system was sufficient. However, we have previously reported on the risk of improper payments and fraud in the food stamp program, and since there is always some risk of fraud and improper payments, particularly given the high volume of cases and the complexity of the program, it is important that states include additional controls when changing their processes and continually assess the adequacy of those controls for preventing fraud. Some program experts have expressed concern that households would be denied benefits for procedural reasons more frequently if they had less face-to-face interaction with caseworkers, although data have not borne out these concerns, and states are taking actions to limit procedural denials. During our site visits, some officials reported examples of procedural denials resulting from alternative methods. For example, community group representatives in Florida said that some households were denied benefits because they could not get through to a call center agent to provide required verification in time. However, they also acknowledged that procedural denials due to not providing verification were frequent before the state implemented these methods. In addition, Texas officials said that some households were denied benefits for missing scheduled interviews when the private contractor was late in mailing notices of the interview appointments. Our limited analysis of FNS data for the five states we reviewed found no considerable fluctuations in the rate of procedural denials between fiscal years 2000 and 2005. 
However, a household's failure to provide verification documents was the most common procedural reason for denial, suspension, or termination of benefits in the five states we reviewed. States we visited described their efforts to help households use alternative methods and prevent procedural denials for households that are not seen in person by caseworkers. Examples of actions the states we reviewed took to prevent procedural denials include reviewing actions taken for cases that are denied, training caseworkers on preventing improper denials, routinely correcting addresses from returned mail, and developing automated system changes to prevent caseworkers from prematurely denying a case. For example, Utah trains its caseworkers to inform households of all deadlines, and their application tracking software automatically generates a list of households that have not scheduled an interview. This list is used by caseworkers to send notices to the households. Washington uses its document imaging center staff to process case actions associated with returned mail, including quickly correcting addresses. Over the last several years and for a variety of reasons, many states have changed their food stamp certification and recertification processes to enable households to make fewer visits to the local assistance office. Given our findings, it is important for states to consider the needs of all types of households when developing alternative ways of accessing food stamp benefits. Despite making major changes in their systems, FNS and the states have little information on the effects of the alternative methods on the Food Stamp Program, including what factors contribute to successful implementation, whether these methods are improving access to benefits for target groups, and how best to ensure program integrity. Without up-to-date information about what methods states are using and the factors that contribute to successful implementation of alternative methods, states and the federal government most likely will continue to invest in large-scale changes to their certification and recertification processes without knowing what works and in what contexts. Although FNS is beginning to study state efforts in this regard, these studies are not designed to systematically evaluate whether specific methods contributed to achieving positive outcomes. In addition, FNS has not thoroughly analyzed the data received from states implementing waivers of the face-to-face interview to determine, for example, whether it should allow states to use telephone interviews in lieu of face-to-face interviews for all types of households without a waiver. Further, while FNS is using its website to disseminate information about promising practices, the information available is not up-to-date, making it difficult to easily locate current information about specific practices. Enhancing the research, collection, and dissemination of promising practices could be an important resource for states that want to provide households effective alternatives to visiting local assistance offices to receive food stamp benefits. 
To improve USDA's ability to assess the effectiveness of its funded efforts, we recommend that the Secretary of Agriculture take the following actions: direct FNS and the Economic Research Service to work together to enhance their research agendas to include projects that would complement ongoing research efforts and determine the effect of alternative methods on program access, decision accuracy, and administrative costs. Such projects would reliably identify the alternative methods that are effective and the factors that contribute to their success; and direct FNS to conduct analyses of data received from states implementing waivers or demonstration projects waiving the face-to-face interview and require states implementing waivers or demonstration projects to collect and report data that would facilitate such analyses. Such analyses would identify the effect of the waivers on outcomes such as payment accuracy and could help determine whether the use of the waiver should be further expanded or inform whether regulations should be changed to allow telephone interviews for all households without documenting hardship. In addition, we recommend that the Secretary of Agriculture help states implement alternative methods to provide access to the Food Stamp Program by directing FNS to disseminate and regularly update information on practices states are using to implement alternative access methods to the traditional application and recertification process. The information would not be merely a listing of practices attempted, but would include details on what factors or contexts seemed to make a particular practice successful and what factors may have reduced its effectiveness. We provided a draft of this report to the U.S. Department of Agriculture for review and comment. We met with FNS and ERS officials on April 16, 2007, to obtain their comments. In general, the officials agreed with our findings, conclusions, and recommendations. They discussed the complexity and variability of state modernization efforts and the related challenges of researching the effects of these efforts. For example, policy changes, organizational restructuring, and the engagement of community organizations in the application process may occur simultaneously with implementation of alternative methods and play a significant role in state and client experiences. Having multiple interrelated factors creates challenges for researching the effects of modernization efforts. Nonetheless, the officials highlighted steps the agency is taking to monitor and evaluate state implementation of alternative access methods. First, the officials commented that as modernization evolves, FNS is using its administrative reporting system to consistently and routinely track changes in state program performance in the areas of application timeliness, food stamp participation by subgroups, payment accuracy, and administrative costs. Second, they stated that the two related FNS-funded studies currently under way will be comparing performance data from the case study states with data from similar states; however, this analysis will not determine whether certain modernization efforts caused changes in performance. Third, they stated that FNS plans to analyze data they are collecting from states as part of the administrative waiver process to determine the effect of telephone interviews on payment accuracy. 
Finally, ERS officials noted that Food Stamp Program access is an area in which the agency continues to solicit research from the private sector as well as other government agencies and that ERS makes data available to support these research efforts. FNS and ERS also provided us with technical comments, which we incorporated where appropriate. We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. To understand what alternatives states are using to improve program access and what is known about the results of using these methods, we examined: (1) what alternative methods to the traditional application and recertification process states are using to increase program access; (2) what is known about the results of these methods, particularly on program access for target groups, decision accuracy, and administrative costs; and (3) what actions states have taken to maintain program integrity while implementing alternative methods. To address these issues, we surveyed food stamp administrators in the 50 states and the District of Columbia, conducted four state site visits (Florida, Texas, Utah, and Washington) and one set of semi-structured telephone interviews (Pennsylvania), analyzed data provided by the Food and Nutrition Service (FNS) and the selected states, reviewed relevant studies, and held discussions with program stakeholders, including officials at FNS headquarters and regional offices, and representatives of advocacy organizations. We performed our work from September 2006 to March 2007 in accordance with generally accepted government auditing standards. To learn about state-level use of alternative methods to help households access the Food Stamp Program, we conducted a Web-based survey of food stamp administrators in the 50 states and the District of Columbia. The survey was conducted between December 2006 and February 2007, with 100 percent of state food stamp administrators responding. The survey included questions about the use of alternative methods to provide access to the program, including mail-in procedures, call centers, on-line services, and other technologies that support program access. In addition, we asked about the reasons for implementing these methods, whether states had conducted evaluations of the methods, what measures states used to evaluate the performance of the methods, and additional assistance needed from FNS. Because this was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors, such as variations in how respondents interpret questions and their willingness to offer accurate responses. We took steps to minimize nonsampling errors, including pre-testing draft instruments and using a Web-based administration system. Specifically, during survey development, we pre-tested draft instruments with officials in Washington, Arizona, Utah, and Wisconsin in October and November 2006. 
In the pre-tests, we were generally interested in the clarity of the questions and the flow and layout of the survey. For example, we wanted to ensure definitions used in the surveys were clear and known to the respondents, categories provided in closed-ended questions were complete and exclusive, and the ordering of survey sections and the questions within each section was appropriate. We also used in-depth interviewing techniques to evaluate the answers of pretest participants, and interviewers judged that all the respondents' answers to the questions were based on reliable information. On the basis of the pre-tests, the Web instrument underwent some slight revisions. A second step we took to minimize nonsampling errors was using a Web-based survey. By allowing respondents to enter their responses directly into an electronic instrument, this method automatically created a record for each respondent in a data file and eliminated the need for and the errors (and costs) associated with a manual data entry process. To further minimize errors, programs used to analyze the survey data were independently verified to ensure the accuracy of this work. After the survey was closed, we made comparisons between select items from our survey data and other national-level data. We found our survey data were reasonably consistent with the other data set. On the basis of our comparisons, we believe our survey data are sufficient for the purposes of our work. We conducted four site visits (Florida, Texas, Utah, and Washington) and one set of semi-structured telephone interviews (Pennsylvania). We selected states that have at least one FNS-approved waiver of the face-to-face interview requirement and reflect some variation in state participation rates. We also considered recommendations from FNS officials, advocacy group representatives, or researchers. We made in-depth reviews for each state we selected. We interviewed state officials administering and developing policy for the Food Stamp Program, local officials in the assistance offices and call centers where services are provided, and representatives from community-based organizations that provide food assistance. To supplement the information gathered through our site visits and in-depth reviews, we analyzed data provided by FNS for the states we reviewed. These analyses allowed us to include state trends for specific measures (Program Access Index, monthly participation, payment accuracy, administrative costs, and reasons for benefit denials) in our interviews with officials. To review the reasons for benefit denials, we used FNS's quality control (QC) system data of negative cases used in error rate calculations. Specifically, we looked at the number and percentage of cases denied, terminated, or suspended by the recorded reason for the action in the five states we reviewed for fiscal years 2000 through 2005. Though our data allowed us to examine patterns in these areas before and after a method was implemented, we did not intend to make any statements about the effectiveness of methods implemented in the states we visited and reviewed. Instead, we were interested in gaining some insight through our interviews on how alternative methods may have affected state trends. Based on discussions with and documentation obtained from FNS officials, and interviews with FNS staff during site visits, we determined that these data are sufficiently reliable for our limited review of state trends. 
In addition, we selected and reviewed several studies and reports that relate to the use of alternative methods to increase food stamp program access. These studies included food stamp participation outcome evaluations that were funded by FNS and the Economic Research Service (ERS) and focused on practices aimed at improving access to the Food Stamp Program. To identify the selected studies, we conducted library and Internet searches for research published on food stamp program access since 1990, interviewed agency officials to identify completed and ongoing studies on program access, and reviewed bibliographies that focused on program access concerns. For each selected study, we determined whether the study’s findings were generally reliable. Two GAO social science analysts evaluated the methodological soundness of the studies, and the validity of the results and conclusions that were drawn. The studies we selected and reviewed include: U.S. Department of Agriculture, Economic Research Service, Food Stamp Program Access Study: Final Report, by Bartlett, S., N. Burstein, and W. Hamilton, Abt Associates Inc. (Washington, D.C.: November 2004). U.S. Department of Agriculture, Economic Research Service, Evaluation of the USDA Elderly Nutrition Demonstrations, by Cody, S. and J. Ohls, Mathematica Policy Research, Inc. (Washington, D.C.: May 2005). U.S. Department of Agriculture, Food and Nutrition Service, Office of Analysis, Nutrition and Evaluation, Evaluation of Food Stamp Research Grants to Improve Access Through New Technology and Partnerships, by Sheila Zedlewski, David Wittenburg, Carolyn O’Brien, Robin Koralek, Sandra Nelson, and Gretchen Rowe. (Alexandria, Va.: September 2005). U.S. Department of Agriculture, Food and Consumer Service, Evaluation of SSI/FSP Joint Processing Alternatives Demonstration, by Carol Boussy, Russell H. Jackson, and Nancy Wemmerus. (Alexandria, Va.: January 2000.) Combined Application Project Evaluations submitted to FNS by seven states: Florida, Massachusetts, Mississippi, North Carolina, South Carolina, Texas, and Washington. Heather McCallum Hahn, Assistant Director, Cathy Roark, Analyst-in-Charge, Kevin Jackson, Alison Martin, Daniel Schwimer, Gretchen Snoey, Rachael Valliere, and Jill Yost made significant contributions to this report. Food Stamp Program: FNS Could Improve Guidance and Monitoring to Help Ensure Appropriate Use of Noncash Categorical Eligibility. GAO-07-465. Washington, D.C.: March 28, 2007. Food Stamp Program: Payment Errors and Trafficking Have Declined despite Increased Program Participation. GAO-07-422T. Washington, D.C.: January 31, 2007. Food Stamp Trafficking: FNS Could Enhance Program Integrity by Better Targeting Stores Likely to Traffic and Increasing Penalties. GAO-07-53. Washington, D.C.: October 13, 2006. Improper Payments: Federal and State Coordination Needed to Report National Improper Payment Estimates on Federal Programs. GAO-06-347. Washington, D.C.: April 14, 2006. Food Stamp Program: States Have Made Progress Reducing Payment Errors, and Further Challenges Remain. GAO-05-245. Washington, D.C.: May 5, 2005. Food Stamp Program: Farm Bill Options Ease Administrative Burden, but Opportunities Exist to Streamline Participant Reporting Rules among Programs. GAO-04-916. Washington, D.C.: September 16, 2004. Food Stamp Program: Steps Have Been Taken to Increase Participation of Working Families, but Better Tracking of Efforts Is Needed. GAO-04-346. Washington, D.C.: March 5, 2004.
Financial Management: Coordinated Approach Needed to Address the Government’s Improper Payments Problems. GAO-02-749. Washington, D.C.: August 9, 2002. Food Stamp Program: States’ Use of Options and Waivers to Improve Program Administration and Promote Access. GAO-02-409. Washington, D.C.: February 22, 2002. Executive Guide: Strategies to Manage Improper Payments: Learning from Public and Private Sector Organizations. GAO-02-69G. Washington, D.C.: October 2001. Food Stamp Program: States Seek to Reduce Payment Errors and Program Complexity. GAO-01-272. Washington, D.C.: January 19, 2001. Food Stamp Program: Better Use of Electronic Data Could Result in Disqualifying More Recipients Who Traffic Benefits. GAO/RCED-00-61. Washington, D.C.: March 7, 2000. Food Assistance: Reducing the Trafficking of Food Stamp Benefits. GAO/T-RCED-00-250. Washington, D.C.: July 19, 2000. Food Stamp Program: Information on Trafficking Food Stamp Benefits. GAO/RCED-98-77. Washington, D.C.: March 26, 1998. | One in 12 Americans participates in the federal Food Stamp Program, administered by the Food and Nutrition Service (FNS). States have begun offering individuals alternatives to visiting the local assistance office to apply for and maintain benefits, such as mail-in procedures, call centers, and on-line services. GAO was asked to examine: (1) what alternative methods states are using to increase program access; (2) what is known about the results of these methods, particularly on program access for target groups, decision accuracy, and administrative costs; and (3) what actions states have taken to maintain program integrity while implementing alternative methods. GAO surveyed state food stamp administrators, reviewed five states in depth, analyzed FNS data and reports, and interviewed program officials and stakeholders. All states use mail and about half of states use or have begun developing on-line services and call centers to provide access to the Food Stamp Program. Almost all states allow households to submit applications, report changes, and submit recertifications through the mail, and 26 states have implemented or are developing systems for households to perform these tasks on-line. Almost half of the states are using or developing call centers, and states also are allowing households to participate in telephone interviews instead of an in-office interview. States have taken a variety of actions to help households use on-line services and call centers, such as sending informational mailings, holding community meetings, and using community partners. Insufficient information is available to determine the results of using alternative methods. Few evaluations have been conducted identifying the effect of alternative methods on program access, decision accuracy, or administrative costs. Evaluating the effectiveness of alternative methods is challenging in part because limited data are available, states are using a combination of methods, and studies can be costly to conduct. Federal and state officials reported that while they believe alternative methods can help households in several ways, such as increasing flexibility and efficiency in the application process, certain types of households may have difficulty using or accessing alternative methods. In addition, technology and staffing challenges may hinder the use of alternative methods.
To maintain program integrity while implementing alternative methods, the states GAO reviewed used a variety of strategies, such as using software to verify the information households submit, communicating with other states to detect fraud, or using finger imaging. Although there has been some concern that without frequent in-person interaction with caseworkers, households may not submit required documents on time and thus be denied benefits on procedural grounds ("procedural denials"), GAO's limited analysis of FNS data found no considerable fluctuations in the rate of procedural denials in the five states between fiscal years 2000 and 2005. The states GAO reviewed have instituted several approaches to prevent procedural denials. |
DOD’s procurement process spans numerous Defense agencies and military services. This process provides for acquiring supplies and services from nonfederal sources and, when necessary, administering the related contractual instruments. It also provides for administering grants, cooperative agreements, and other transactions executed by contracting offices. The procurement process begins with the receipt of a requirement and ends at the contract closeout. (See fig. 1 for a simplified diagram of the procurement process, the interaction of this process with the logistics and financial management processes, and those functions within the procurement process that SPS is to support.) In November 1994, DOD’s Director of Defense Procurement (DDP) initiated the SPS program to acquire and deploy a single automated system to perform all contract management-related functions within DOD’s procurement process for all DOD organizations and activities. From 1994 to 1996, DOD defined SPS requirements and solicited commercially available vendor products for satisfying these requirements. DOD subsequently awarded a contract to American Management Systems (AMS), Incorporated, in April 1997, to (1) use its commercially available contract management system as the foundation for SPS, (2) modify this commercial product as necessary to meet the requirements, and (3) perform related services. DOD also directed the contractor to deliver SPS functionality in four incremental releases. The department later increased the number of releases across which this functionality would be delivered to seven; reduced the size of the increments; and allowed certain, more critical functionality to be delivered sooner. Over the last 4 years, DOD and AMS have deployed four releases to 773 locations in support of 21,900 users. The fifth release was delivered in February 2001 for acceptance testing; however, due to software deficiencies, this release was sent back to the vendor for rework and has not been deployed. AMS is expected to provide a second version of this release to DOD in July 2001 for additional testing. If accepted, the fifth release is to be deployed to about 4,500 users beginning in fiscal year 2002. DOD has not yet contracted for the sixth and seventh releases. (See table 2 for the status of the various software releases, and table 3 for the summary of SPS functionality by increment.) As planned, SPS is to be used to prepare contracts and contract-related documents and to support contracting staff in monitoring and administering them. SPS also is intended to standardize procurement business practices and data elements throughout DOD and to provide timely, accurate, and integrated contract information. The goal is that, using SPS, required contract and contract payment data will be entered once—at the source of the data’s creation—and stored in a single database. As depicted in figure 1, SPS is to electronically interface with DOD’s logistics community, which is the source of goods and services requests, and with the Defense Finance and Accounting Service (DFAS), which is responsible for contract payments. DDP is organizationally within the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics. However, as shown in table 3, the management responsibility for SPS is shared among several organizations. Since 1996, DOD’s Office of the Inspector General (OIG) has issued three reports critical of SPS.
In September 1996, the OIG reported that the needs of SPS users might not be met and that actual costs could exceed proposed costs because, among other things, the functional requirements were very broad, existing commercial software required substantial modification, and adequate development and operational test strategies had not been developed. The OIG later reported in May 1999 that SPS lacked critical functionality and concluded that the system may not meet mission needs with regard to standardizing procurement policy, processes, and procedures. The report also noted that users were receiving inadequate system training, guidance, and support, thereby forcing users to develop inefficient system workarounds. Finally, the report raised concerns about the cost-effectiveness of DOD’s contractual reliance on a single vendor to provide system support over the life of SPS, adding that an expanded license was needed to give DOD the ability to compete support services. In March 2001, the OIG reported that lack of system functionality was still a serious program concern, productivity had not increased with the implementation of version 4.1, and users were generally dissatisfied with SPS. SPS program officials generally concurred with the OIG’s findings and agreed to
- issue guidance on the acquisition of commercial software for major automated information systems,
- support development of accurate life-cycle cost estimates for SPS,
- clarify responsibilities for the program office, the contractor, and the DOD component organizations,
- evaluate the costs and benefits of obtaining additional license rights and renegotiating the contract,
- require the program office to be aware of additional support contracts,
- suggest that the component organizations provide funds to the program office to better integrate user needs,
- better coordinate training needs among the DOD component organizations, and
- require that, before any future deployments of SPS, the DOD component organizations determine that the version meets their functional requirements and identify the number of licenses required.
Also, in response to the OIG findings underlying the March 2001 report, the SPS program office initiated its own study in June 2000 to assess the extent to which benefits will be realized as a result of its implementation of version 4.1 of SPS. The program office plans to publish the study results by October 2001. Federal information technology (IT) investment management requirements and guidance recognize the need to measure investment programs’ progress against commitments. In the case of SPS, DOD is not meeting key commitments and is not measuring whether it is meeting other commitments. According to the program manager, the program office is not responsible for ensuring that all program commitments are being met. Rather, the program office’s sole task is to acquire and deploy an SPS system solution that meets defined functional requirements. Given that SPS is a major Defense acquisition, the DOD CIO is the decisionmaking authority for SPS. However, according to officials in the CIO’s office, SPS has continued to be approved and funded regardless of progress against expectations on the basis of decisions made by individuals organizationally above the CIO’s office. Without measuring and reporting progress against program commitments and taking the appropriate actions to address significant deviations, DOD runs the serious risk of investing billions of dollars in a system that will not produce commensurate value.
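To make concrete what measuring progress against commitments entails, the following is a minimal sketch in Python; the baseline and actual figures are placeholders that loosely echo numbers cited elsewhere in this report, and the 10 percent deviation threshold is an assumption for illustration, not a DOD reporting rule.

# Illustrative sketch of tracking a program's reported status against its
# baseline commitments. All figures are placeholders, not official SPS data.
from dataclasses import dataclass

@dataclass
class Commitment:
    name: str
    baseline: float  # the value the program committed to
    actual: float    # the value observed or currently projected

def variance_report(commitments, threshold_pct=10.0):
    """Flag any commitment whose actual value deviates from its baseline
    by more than the given percentage (an assumed threshold)."""
    for c in commitments:
        pct = 100.0 * (c.actual - c.baseline) / c.baseline
        status = "DEVIATION" if abs(pct) > threshold_pct else "on track"
        print(f"{c.name}: baseline={c.baseline:g}, actual={c.actual:g}, "
              f"variance={pct:+.1f}% -> {status}")

variance_report([
    Commitment("life-cycle cost, $ billions", 3.0, 3.7),
    Commitment("months to full operational capability", 64, 106),
    Commitment("legacy systems retired to date", 76, 2),
])

Reported quarterly and acted upon, comparisons of this kind are what the requirements described next are meant to ensure.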
The Clinger-Cohen Act of 1996 and Office of Management and Budget (OMB) guidance emphasize the need to have investment management processes and information to help ensure that IT projects are being implemented at acceptable costs and within reasonable and expected time frames and that they are contributing to tangible, observable improvements in mission performance (i.e., that projects are meeting the cost, schedule, and performance commitments upon which their approval was justified). For programs such as SPS, DOD requires this cost, schedule, and performance information to be reported quarterly to ensure that programs do not deviate significantly from expectations. In effect, these requirements and guidance recognize that one cannot manage what one cannot measure. DOD has not met key SPS commitments concerning the timing of product delivery, user satisfaction with system performance, and the use of a commercial system solution, as discussed below: DOD committed to SPS’ being fully operational at all sites by March 31, 2000; however, this date has slipped by 3-1/2 years and is likely to slip further. Currently, DOD has established a September 30, 2003, milestone for making SPS fully operational, and the program manager attributed this delay to (1) problems encountered in modifying and testing the contractor’s commercial product to meet DOD’s requirements and (2) an increase in requirements. However, the SPS Joint Requirements Board chairperson stated that no additional requirements have been approved. Instead, the original requirements were clarified for the contractor to better ensure that the needs of the user would be met. However, satisfying even this revised commitment will be problematic for several reasons. First, the 2003 milestone does not recognize DOD components’ testing activities that need to occur before the system could be fully operational. For example, Department of the Air Force officials told us that they are typically 6 to 12 months behind the program office’s deployment milestones because of additional testing that the Air Force performs before it implements the software releases. Second, the 2003 milestone has not been updated to reflect the impact of events. For example, version 4.1, the latest deployed release, was recently changed from a single release to five subreleases to correct software problems discovered during operation of version 4.1; and version 4.2 recently failed acceptance testing, and the vendor is still attempting to correct identified defects. Third, the official responsible for SPS independent operational test and evaluation, as well as the official in DOD’s Office of Program Analysis and Evaluation who is responsible for reviewing the SPS economic analyses, told us that this milestone is likely to slip further. The reasons that these officials cited included incomplete system functionality, increased system complexity, and inadequate training. DOD committed to SPS’ satisfying the needs of its contracting community and meeting specified system requirements, ultimately increasing contracting efficiency and effectiveness. However, according to a recent DOD OIG report, approximately 60 percent of the user population surveyed was not satisfied with the system’s functionality and performance, resulting in the continued use of legacy systems and/or manual processes in lieu of SPS.
Similarly, another DOD report describes SPS as unstable because the system frequently goes down, meaning that it is unexpectedly unavailable to users who are logged on and using the system, which, in turn, causes users to lose information. The report also notes that users complained that previously identified problems were not being resolved in later software releases, and that requested changes or enhancements were not being made. According to the program manager, at any one time, there was a backlog of 100 to 200 problems that needed to be addressed in order for SPS to meet specified requirements. In light of these challenges in meeting requirements and satisfying user needs, the official responsible for independent operational test and evaluation of SPS said that DOD should not invest in additional releases beyond version 4.2. In delivering SPS, DOD was to use a commercially available software product. However, the contractor has modified the commercial product extensively in an attempt to satisfy DOD’s needs; thus, SPS is now a DOD-unique system solution. According to the program manager, DOD knew when it selected the commercial product that the product provided only 45 percent of the functionality that DOD needed, and that extensive new software development and existing software modification were necessary. Nevertheless, the product was chosen because no commercial product was available that met DOD’s requirements, and, of the products available, DOD believed that AMS’ product and company would provide the best value. In accordance with industry best practices, software modifications to a commercial product should not exceed 10 to 15 percent. Beyond this degree of software change, experts generally consider development or acquisition of a custom system solution more cost-effective. Further, DOD guidance states that custom modifications to a commercial item, even if made and implemented by the commercial item’s vendor, result in custom system solutions. This guidance emphasizes the use of commercial items to reduce life-cycle costs and increase product reliability and availability. Since SPS is not a commercial product, DOD will not be able to take advantage of the reduced cost and risk associated with using proven technology that is used by a wide customer base. When it began the program, DOD promised that SPS would produce such benefits as (1) replacing 76 legacy systems and manual processes with a single system and thereby reducing procurement system operations and maintenance costs by an unspecified amount, (2) standardizing policies, processes, and procedures across the Department, and (3) reducing problem disbursements. However, DOD does not know the extent to which SPS is meeting each of these expectations, even though versions have been deployed to about 773 user locations. First, although DOD reports that it has retired two major legacy systems, neither the program office nor the DOD CIO office could provide us with information on what, if any, savings have been realized by doing so. Additionally, program officials told us that the number of legacy systems and manual processes that SPS is to replace is now significantly less than the 76 originally used to justify the program. In response to our inquiry, the SPS program manager recently surveyed the DOD component organizations to determine the number of legacy systems. According to the results of the survey, there were 55 legacy procurement systems. See table 4 for the status of these systems as of June 2001. 
According to the SPS program manager, 45 of the 55 systems remain, and 10 to 12 of these systems are to be replaced by SPS. However, another program official noted that SPS was always intended to replace only 14 major legacy systems. In either case, the latest economic analysis has not been updated to reflect this change in the number of systems to be replaced, and the associated cost savings are not known. Second, the standardization of policies, processes, and procedures benefit is not materializing because each military service is either in the process of developing, or has plans to develop, its own unique policies, processes, and procedures. Third, program officials were unable to provide evidence that implementing SPS has reduced problem disbursements or achieved the benefits outlined in the economic analysis. In fact, the latest economic analysis no longer even cites reducing problem disbursements as a benefit because the DOD components’ position was that SPS would not completely address this problem. According to the program manager and CIO officials, there is no DOD policy that requires them to assess whether the expected benefits are in fact being realized. When the SPS program began, DOD also committed to a system life-cycle cost of about $3 billion over a 10-year period. However, total actual program costs are not being accumulated and monitored against estimates, which in 2000 were revised to about $3.7 billion (a 28-percent increase). Thus, DOD does not know what has been spent on the program by all DOD component organizations. To date, the only actual program costs being collected and reported are those incurred by the SPS program office, which DOD reports to be about $322 million through September 30, 2000. To determine the total cost of the SPS program through September 30, 2000, we requested cost information from 18 Defense agencies and the four military services. These DOD components reported that they have collectively spent approximately $125 million through September 30, 2000. However, these reported costs are not complete because (1) 4 of the 22 DOD components did not respond, (2) components reported that SPS costs were being captured with other programs and could not be allocated accurately, and (3) all SPS costs, such as employee salaries and system infrastructure costs, were not included. According to program officials, no single DOD organization is responsible for accumulating the full DOD cost of SPS. Without knowing the extent to which SPS is meeting cost-and-benefit expectations, DOD is not in a position to make informed, and thus justified, decisions on whether and how to proceed further on the program. Such a situation introduces a serious risk of investing in a system that will not produce a positive net present value (i.e., estimated benefits to be realized would exceed estimated program costs). Federal IT investment management requirements and guidance, as well as DOD policy, recognize the need to economically justify IT projects before investing in them and to justify them in an incremental manner in an effort to spread the risk of doing many things over many years on large projects across smaller, more manageable subprojects. However, the department has not economically justified investing in SPS because its own analysis shows that expected life-cycle benefits are less than estimated life-cycle costs. Moreover, DOD is not approaching its investment in SPS on an incremental basis. 
Nevertheless, DOD continues to invest hundreds of millions of dollars in SPS each year, running the serious risk of spending large sums of money on a system that does not produce commensurate value. According to program and CIO officials, DOD continues to invest these funds because individuals above the CIO’s office decided that SPS was a departmental priority. The Clinger-Cohen Act of 1996 and OMB guidance provide an effective framework for IT investment management. Together, they set requirements for (1) economically justifying proposed projects on the basis of reliable analyses of expected life-cycle costs, benefits, and risks, (2) using these analyses throughout a project’s life cycle as the basis for investment selection, control, and evaluation decisionmaking, and (3) doing so for large projects (to the maximum extent practical) by dividing them into a series of smaller, incremental subprojects or releases. By doing so, the tremendous risk associated with investing large sums of money over many years in anticipation of delivering capabilities and expected business value far into the future can be spread across project parts that are smaller, of a shorter duration, and capable of being more reliably justified and more effectively measured against cost, schedule, capability, and benefit expectations. DOD policy also reflects these investment principles by requiring that investments be justified by an economic analysis and, more recently, that investment decisions for major programs, like SPS, be made incrementally by ensuring that each incremental part of the program delivers measurable benefit, independent of future increments. According to the policy, the economic analysis is to reflect both life-cycle cost and benefits estimates, including a return-on-investment calculation, to demonstrate that a proposal to invest in a new system is economically justified before that investment is made. DOD has developed three economic analyses for SPS—one in 1995 and two updates (one in 1997 and another in 2000). While the initial analysis reflected a positive net present value, the two updates did not. Specifically, the 1997 analysis estimated life-cycle costs and benefits to be $2.9 billion and $1.8 billion, respectively, which is a recovery of only 62 percent of costs; the 2000 analysis showed even greater costs ($3.7 billion) and fewer benefits ($1.4 billion), which is a recovery of only 37 percent of costs (see fig. 2). Nevertheless, these data were not reflected in the return-on-investment calculation in the analyses that were used as the basis for approving SPS. Instead, this return-on-investment calculation (1) included only those costs estimated to be incurred by the program office and (2) excluded the SPS implementation and operation and maintenance costs of DOD agencies and military services. According to program officials, the latter costs were excluded because either they would have been incurred anyway or the program office did not require them. For example, the officials stated that the DOD agencies and military services routinely upgrade their IT infrastructures to support existing systems; therefore, they assumed that the agencies and services would have purchased new infrastructures even if SPS had not been acquired. Also, program officials did not believe that training paid for by DOD agencies and military services should be included as a cost element because this is an elective expense (i.e., the program management office does not require this additional training). 
However, some DOD component officials told us that some of their infrastructure and other costs were being incurred solely to support implementation of SPS. Using DOD’s estimates, we calculated SPS’ net present value for fiscal years 1997 and 2000 to be about negative $174 million and negative $655 million, respectively. DOD’s Office of Program Analysis and Evaluation is responsible for, among other things, verifying and validating the reliability of economic analyses for major programs, such as SPS, and providing its results to the program approval authority, which in this case is the DOD CIO. According to Office of Program Analysis and Evaluation officials, although the economic analyses were reviewed, there are no written results of these reviews. These officials stated, however, that they orally communicated concerns about the analyses to program officials and to DOD CIO officials responsible for program oversight and control. They also stated that while they could not recall specific issues discussed, they concluded that the economic analyses provided a reasonable basis for decisionmaking. To be useful for informed investment decisionmaking, analyses of project costs, benefits, and risks must be based on reliable estimates. However, most of the cost estimates in the latest economic analysis are estimates carried forward from the 1997 economic analysis (adjusted for inflation). Only the costs being funded and managed by the SPS program office, which are 13 percent of the total life-cycle cost in the analysis, were updated in 2000 to reflect more current contract estimates and actual expenditures/obligations for fiscal years 1995 through 1999. The costs to be funded and incurred by DOD agencies and the military services were not updated to account for all program changes or to incorporate better information. In its review of the 2000 economic analysis, the Naval Center for Cost Analysis also noted that the DOD agencies and the military services’ cost information, which accounted for the majority of the program’s overall costs, had not been updated. In fact, only two cost elements were updated for the DOD component organizations in the 2000 economic analysis, and the estimates for these cost elements were based on estimates derived for just one service (the Air Force), and then extrapolated to all other DOD components. According to Departments of the Army, Navy, and Air Force component representatives, these original estimates of costs, as well as benefits, were highly questionable at best. However, this uncertainty was not reflected in the economic analysis by any type of sensitivity analysis (i.e., an analysis to explicitly present the return-on-investment implications associated with using estimates whose inherent imprecision could produce a range of outcomes). Such sensitivity analysis would disclose for decisionmakers the investment risk being assumed by relying on the calculations presented in the economic analysis. According to the SPS program manager, costs in the 2000 economic analysis were not updated because information for the DOD components was not readily available for inclusion. Additionally, updating DOD component costs was not viewed as relevant because the return-on-investment calculation cited in the latest economic analysis did not include these costs, and the updated analysis was done after DOD leadership had decided to increase funding and continue the program.
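As an illustration of the arithmetic involved, the sketch below computes a net present value from annual cost and benefit streams and then varies the key assumptions. The streams and discount rates are invented for illustration; they match the 2000 analysis only in the aggregate (summing to $3.7 billion in costs and $1.4 billion in benefits), and the benefits-realization factors stand in for the kind of sensitivity analysis the economic analysis omitted.

# Minimal sketch of the net present value (NPV) and sensitivity arithmetic
# discussed above. The annual streams and discount rates are illustrative
# assumptions, not figures taken from DOD's economic analyses.

def npv(benefits, costs, rate):
    """Discount annual (benefit - cost) flows back to year 0."""
    return sum((b - c) / (1.0 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Hypothetical 10-year life-cycle streams, in millions of dollars; they sum
# to $3.7 billion in costs and $1.4 billion in benefits, so these assumed
# benefits recover only about 38 percent of assumed costs.
costs = [500, 450, 420, 400, 380, 360, 340, 320, 300, 230]
benefits = [0, 50, 100, 150, 180, 190, 200, 200, 180, 150]

print(f"NPV at 5 percent: {npv(benefits, costs, 0.05):,.0f} million")

# A simple sensitivity analysis: vary the discount rate and scale the
# benefits to show the range of outcomes that imprecise estimates produce.
for rate in (0.03, 0.05, 0.07):
    for realized in (0.75, 1.00, 1.25):
        scaled = [b * realized for b in benefits]
        print(f"rate={rate:.0%}, benefits x {realized:.2f}: "
              f"NPV = {npv(scaled, costs, rate):,.0f} million")

A net present value that stays negative across every plausible combination, as here, is the pattern the report describes: estimated life-cycle benefits that fall well short of estimated life-cycle costs.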
However, by not using economic analyses that are based on reliable cost estimates, DOD is making uninformed, and thus potentially unwise, multimillion-dollar investment decisions. According to OMB guidance, analyses of investment costs, benefits, and risks should be (1) updated throughout a project’s life cycle to reflect material changes in project scopes and estimates and (2) used as a basis for ongoing investment selection and control decisions. To do less risks continued investment in projects on the basis of outdated and invalid economic justification. The latest economic analysis (January 2000) is outdated because it does not reflect SPS’ current status and known risks associated with program changes. For instance, this analysis is based on a program scope and associated costs and benefits that anticipated four software releases, each providing more advanced features and functions. However, according to the program manager, SPS now consists of seven releases over which additional requirements are to be delivered. Estimates of the full costs, benefits, and risks relating to these three additional releases are not part of this latest economic analysis. Also, the 2000 economic analysis does not fully recognize actual and expected delays in meeting SPS’ full operational capability milestone. That is, the 2000 economic analysis assumed that this milestone would be September 30, 2003. However, as previously mentioned, this milestone date is unlikely to be met for a variety of reasons, such as user dissatisfaction with current system capabilities. According to the SPS program manager, the latest economic analysis has not been updated to reflect changes because the analysis is not used for managing the program and because there is no DOD requirement for updating an economic analysis when changes to the program occur. By not ensuring that the program is being proactively managed on the basis of current information about costs, benefits, and risks, DOD is unnecessarily assuming an excessive amount of investment risk. As we have previously reported, incremental investment management involves three fundamental components: (1) developing/acquiring a large system in a series of smaller projects or system increments, (2) individually justifying investment in each separate increment on the basis of costs, benefits, and risks, and (3) monitoring actual benefits achieved and costs incurred on completed increments and modifying subsequent increments or investments to reflect lessons learned. While DOD is acquiring and implementing SPS in a series of incremental releases (originally four and now seven), it is not making decisions about whether to invest in each release on the basis of the release’s costs, benefits, and risks, and it is not measuring whether it is meeting cost-and-benefit expectations for each release that is implemented. Instead, DOD is treating investment in SPS as one monolithic investment decision, justified by a single, all-or-nothing economic analysis. Moreover, DOD has not measured whether the incremental software releases have produced expected business value, even though its economic analysis aligns expected benefits with the then four incremental releases. In June 2000, the SPS program office initiated a study in an attempt to validate the extent to which benefits would be realized as a result of DOD’s implementation of version 4.1 of the software.
However, our review of the methodology and preliminary results revealed that the study was poorly planned and executed and that, while useful information may be compiled, DOD would be unable to use the study’s results to validate the accrual of benefits. As a result, DOD will have spent hundreds of millions of dollars on the entire system before knowing whether it is producing value commensurate with cost. The program manager told us that knowing whether SPS is producing such value is not the program office’s objective. Rather, its objective is to simply acquire and deploy the system. Similarly, DOD CIO officials told us that although the economic analysis promised a business value that would exceed costs, DOD is not validating that implemented releases are producing that value because there is no DOD requirement and no metrics defined for doing so. By not investing incrementally in SPS, DOD runs the serious risk of discovering too late (i.e., after it has invested hundreds of millions of dollars) that SPS is not cost-beneficial. DOD’s management of SPS is a lesson in how not to justify, make, and monitor the implementation of IT investment decisions. Specifically, DOD has not (1) ensured that accountability and responsibility for measuring progress against commitments are clearly understood, performed, and reported, (2) demonstrated, on the basis of reliable data and credible analysis, that the proposed system solution will produce economic benefits commensurate with costs before investing in it, (3) used data on progress against project cost, schedule, and performance commitments throughout a project’s life cycle to make investment decisions, and (4) divided this large project into a series of incremental investment decisions to spread the risks over smaller, more manageable components. Currently, DOD is not following any of these basic tenets of effective investment management on SPS, and, as a result, DOD lacks the basic information needed to make informed decisions about how to proceed with the project. Nevertheless, DOD continues to push forward in acquiring and deploying additional versions of SPS. Continuing with this approach to investment management introduces considerable risk. As a result, beyond possibly operating and maintaining already implemented releases for the remainder of fiscal year 2001 and meeting already executed contractual commitments, further investment in SPS has not been justified. We recommend that the Secretary of Defense direct the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence, as the designated approval authority for SPS, to clarify organizational accountability and responsibility for measuring SPS progress against commitments and to ensure that these responsibilities are met. We further recommend that the Secretary direct the Assistant Secretary to make investment in each new release, or each enhancement to an existing release, conditional upon (1) validating that already implemented releases of the system are producing benefits that exceed costs and (2) demonstrating on the basis of credible analysis and data that (a) proposed new releases or enhancements to existing releases will produce benefits that exceed costs and (b) operation and maintenance of already deployed releases of SPS will produce benefits that exceed costs.
Also, we recommend that the Secretary direct the Director, Program Analysis and Evaluation, to validate any analysis produced to justify further investment in SPS and to report any validation results to the Assistant Secretary of Defense for C3I. We also recommend that no further decisions to invest in SPS be made without these validation results. Additionally, we recommend that the Secretary direct the Assistant Secretary of Defense for C3I to take the necessary actions, in collaboration with the SPS program manager, to immediately determine the current state of progress against program commitments addressed in this report and to ensure that such information is used in all future investment decisions concerning SPS. Last, we recommend that the Secretary direct the Assistant Secretary of Defense for C3I to report by October 31, 2001, to the Secretary and to DOD’s relevant congressional committees on lessons learned from the SPS investment management experience, including what actions will be taken to prevent a recurrence of this experience on other system acquisition programs. In written comments on a draft of this report (reprinted in appendix II), the Acting Deputy Assistant Secretary of Defense for Command, Control, Communications, and Intelligence, who is also the DOD Deputy Chief Information Officer (CIO), agreed and partially agreed with our recommendations. In particular, the Deputy CIO agreed with our recommendation regarding the need to clarify organizational accountability and responsibility for measuring the program’s progress and ensuring that these responsibilities are met. The Deputy CIO also agreed to document lessons learned and have the Director of Program Analysis and Evaluation validate the results of any ongoing and future analyses of SPS’ return on investment. However, the Deputy CIO disagreed with our report’s overall finding that continued investment in SPS has not been justified, and disagreed with those elements of our recommendations that could delay development and deployment of SPS, specifically, acquiring and using the information we believe is needed to make informed investment decisions. To support its position, however, the Deputy CIO offered no new facts or analyses. Instead, the comments either cite information already in our report or claim that the demands of incremental investment management are “inefficient, costly, and overly intrusive” and will cause “unwarranted delays and disruption to the program” for no other reason than “to satisfy economists and accountants.” According to DOD’s comments, the latest SPS economic analysis and the existing efforts to measure progress against selected program commitments provide sufficient bases for continuing to invest hundreds of millions of dollars in SPS. In particular, DOD stated that it is making progress in improving its ability to standardize contracting for goods and services, adding that this standardization progress is not only saving operating costs by retiring legacy procurement systems, but is also providing a standard environment within DOD for the exchange of information and a consistent look and feel of contract information to companies doing business with the department. In light of these outcomes, DOD commented that one of its main goals under the program is the timely fielding of SPS capability. We disagree with these comments.
As we describe in the report, incremental investment management practices are not only a best practice, but are also required by the Clinger-Cohen Act of 1996 and specified in OMB guidance and recently revised DOD acquisition policy. Therefore, DOD’s comments regarding incremental investment in SPS are at odds with contemporary practices and operative federal requirements and guidance. Additionally, the economic analysis that DOD’s comments refer to is not reliable for a number of reasons that are discussed in our report. Specifically, this analysis treats SPS as a single, monolithic system investment. Experience has shown that such an all-or-nothing economic justification is too imprecise to use in making informed decisions on large investments that span many years. This kind of approach for justifying investment decisions has historically resulted in agencies’ investing huge sums of money in systems that do not provide commensurate benefits, and thus has been abandoned by successful organizations. Further, the need to avoid this pitfall was a major impetus for the Clinger-Cohen Act investment management reforms. Also, as discussed in our report, the analysis highlights a return-on-investment calculation in its summary that does not include all relevant costs, such as the costs to be incurred by DOD components. Instead, the summary uses only SPS program office costs in this return-on-investment calculation. Further, this return-on-investment calculation does not reflect known changes in the program’s scope and schedule that would increase costs and reduce benefits. As our report points out, it does not, for example, reflect SPS’ change from four software releases to seven releases, nor does it reflect the improbability of meeting a September 30, 2003, full operational capability date. DOD’s comments also promote continued spending on SPS without sufficient awareness of progress against meaningful commitments, such as reliable data measuring and validating whether return-on-investment projections are being met. In fact, DOD’s comments emphasize standardization and fast deployment as core commitments. However, neither factor is an end in and of itself. Unless SPS provides the capability to perform procurement and contracting functions better and/or cheaper, and does so to a degree that makes SPS a more attractive investment relative to the department’s other investment options, DOD will not have adequate justification for investing further in SPS. As our report demonstrates, the department presently does not have the information it needs to know whether this investment is justified, and the information that is available raises serious questions about SPS’ acceptance by its user community and its business value. Nevertheless, DOD’s comments indicate its intention to implement SPS as planned. Our recommendations are aimed at ensuring that the department obtains the information it needs to make informed SPS investment decisions before proceeding with additional acquisitions. DOD provided other clarifying comments that have been incorporated as appropriate throughout this report. The written comments, along with our responses, are reproduced in appendix II. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter.
At that time, we will send copies to the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services; Senate Appropriations Subcommittee on Defense; House Armed Services Committee; House Appropriations Subcommittee on Defense; Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations, House Committee on Government Reform; and Subcommittee on National Security, Veterans Affairs, and International Relations, House Committee on Government Reform. We are also sending copies of this report to the Director, Office of Management and Budget; the Secretary of Defense; the Acting Secretary of the Army; the Acting Secretary of the Navy; the Acting Secretary of the Air Force; the Acting Assistant Secretary of Defense for Command, Control, Communications, and Intelligence/Chief Information Officer; the Under Secretary of Defense for Acquisition, Technology and Logistics; the Principal Deputy and Deputy Under Secretary for Management Reform; the Acting Director of Operational Test and Evaluation; the Director of Program Analysis and Evaluation; the Director of Defense Procurement; the Director of the Defense Contract Management Agency; and the Director of the Defense Logistics Agency. If you have any questions on matters discussed in this report, please call me at (202) 512-3439 or Cynthia Jackson, Assistant Director, at (202) 512-5086. We can also be reached by e-mail at [email protected] and [email protected], respectively. Key contributors to this assignment are listed in appendix III. Our objectives were to determine the progress that the Department of Defense (DOD) has made against the Standard Procurement System (SPS) program commitments and whether DOD has economically justified further investment in SPS. To determine the progress made, we first analyzed relevant legislative and Office of Management and Budget (OMB) requirements, associated federal guidance, and applicable DOD policy and guidance on investment management. We then analyzed relevant program management documents and interviewed program officials to identify estimates and expectations for SPS’ cost, schedule, and performance, including the system capabilities to be provided and benefits to be produced by these capabilities. Source documents for this information included, but were not limited to, the acquisition strategy and program baseline, acquisition decision memorandums, and the quarterly Defense Acquisition Executive Summary report. We then reviewed program management reports and briefings, interviewed program officials, and solicited information from the various DOD component organizations participating in SPS’ implementation to determine reported cost, schedule, and performance status. We compared this information against estimates and expectations to identify any variances. We did not independently validate the status information that we obtained. In cases where variances were found or status information was not available, we questioned program management and DOD’s Office of the Chief Information Officer (CIO) oversight officials.
The DOD organizations that were part of our scope of contacts included the SPS program office within the Defense Contract Management Agency; the Office of the Director of Investments and Acquisition within the Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (C3I)/Chief Information Officer; the Office of the Deputy Director (Strategic and Space Programs) within the Office of Program Analysis and Evaluation under the Office of the Undersecretary of Defense (Comptroller/Chief Financial Officer); the Office of Strategic and C3I Systems within the Office of the Director of Operational Test and Evaluation; and various offices within the Defense agencies and military services responsible for implementing SPS. To determine whether DOD had economically justified SPS, we reviewed relevant legislative requirements and associated OMB guidance, as well as DOD policy and guidance on preparing and using economic analyses (cost, benefit, and risk), to measure progress against information technology (IT) investment decisions and to do so using an incremental or modular approach. We then obtained the original economic analyses prepared for the program and the two subsequent updates and evaluated them in light of relevant requirements, policies, and guidance to identify strengths and weaknesses. We also reviewed program management documents and interviewed program and oversight officials to understand how these analyses were reviewed and used, and we compared the results to relevant requirements and guidance. We also calculated the program’s net present value using the 1997 and 2000 economic analyses. In addition, we interviewed officials from the SPS program office, DOD CIO’s office, and DOD’s Program Analysis and Evaluation Office to discuss our results and seek clarifying information. We reviewed the methodology and preliminary results for the productivity study being conducted by DOD to substantiate the benefits to be realized by implementing SPS. We also interviewed officials from the SPS program office, Vector Research Incorporated, and Logistics Management Institute to discuss the methodology (e.g., survey execution, sampling, and analysis plans) and our conclusions on the study. We conducted our work at DOD headquarters offices in Washington, D.C., and Alexandria, Virginia, and at American Management Systems, Incorporated, headquarters in Fairfax, Virginia, from October 2000 through June 2001 in accordance with generally accepted government auditing standards. 1. See comments 2 through 9. 2. We did not independently validate DOD-reported data on the number of sites and procurement personnel who have received SPS training, the number of personnel who are located at sites where some version of SPS has been deployed, or the number and dollar value of contract actions completed in fiscal year 2000 using SPS; thus we have no basis to comment on the accuracy of these data. However, we do not agree with this comment’s thrust that these data points, combined with statements about DOD’s “improving its ability to standardize,” “providing a standard environment,” and providing “a consistent look and feel,” are sufficient measures of progress against commitments. As the Clinger-Cohen Act and OMB guidance emphasize, and as we state in our report, investments in information technology need to contribute tangible, observable improvements in mission performance. 
Thus, standardization should not be viewed as an end in and of itself, but rather the means to an end, such as increased productivity and reduced costs. DOD’s comment on progress does not address such tangible, observable benefits. Instead, DOD states that SPS is saving operating costs by retiring legacy procurement systems, which, when SPS was initiated and justified, were to total 76 systems. However, as we also state in the report, only two legacy systems have been retired thus far as a result of the system’s being deployed to 773 sites, and DOD could not provide information on what, if any, savings were being realized by doing so. Moreover, the number of legacy systems that DOD eventually expects to be replaced by SPS has decreased to between 12 and 14. Further, while DOD states that SPS is providing standardization of contracting for goods and services for a segment of its procurement community, our report points out that each service is either in the process of developing, or has plans to develop, its own unique procurement policies, processes, and procedures. 3. For the reasons discussed in our report, we do not agree that DOD has justified further SPS investment in its 2000 economic analysis. For example, only the SPS costs funded and managed by the SPS program office, which are 13 percent of the total life-cycle cost in the analysis, were updated in 2000. The costs to be funded and incurred by DOD agencies and the military services were not updated to account for all program changes or to incorporate better information. Exacerbating this is the fact that only two cost elements were updated for the DOD component organizations in the 2000 economic analysis, and the estimates for these cost elements were based on estimates derived for just one service and extrapolated to all other DOD components. As another example, the analysis does not reflect the reduced number of legacy systems to be retired or the recent evidence of user non-acceptance and non-use of the system, both of which drive benefit accrual. We also do not agree that the analysis documented that a $163 million additional investment by the SPS program office would result in additional benefits of $389.5 million (in net present value terms). Rather, the analysis shows that acquiring, operating, and maintaining SPS over its life cycle will cost about $17 million more, but will produce about $390 million more in benefits (in net present value terms) than operating and maintaining legacy procurement systems. However, the analysis also shows that SPS as planned is not a cost-beneficial investment, because estimated costs exceed expected program benefits. 4. We do not disagree with DOD’s comments regarding the major program designation of SPS and the many organizations involved in the program. Also, while we agree that SPS program officials prepared the Acquisition Program Baseline and have reported quarterly against the commitments that are contained in this baseline, the baseline commitments and the associated reporting do not extend to all the relevant program goals and objectives that we cite in the report as needing to be measured in order to effectively manage a program like SPS, such as what the system is actually costing DOD and whether promised business value is actually being realized. Additionally, the Acquisition Program Baseline is dated May 4, 1998, and thus the commitments in this baseline are out of date. We do not agree with DOD’s comments characterizing the timing of the 2000 economic analysis update.
As we state in our report, this update was prepared after the increase in SPS funding had been approved. In fact, the Program Analysis and Evaluation official responsible for reviewing the analysis stated that it was for this reason that the review was perfunctory at best. 5. We do not agree with DOD’s comment that delaying investment in new SPS releases or enhancements until DOD validates that already implemented releases of the system are producing benefits in excess of costs is contrary to best practice and would delay and disrupt SPS in a way that is not warranted. As we state in our report, available evidence raises serious questions about the cost and benefit implications of users’ limited acceptance of already deployed versions as well as the cost implications of DOD’s limiting its maintenance options to a single vendor. Our point is that answers to these questions are needed in order to make informed investment decisions, and to proceed as planned with new investments without this information risks continuing to invest in the wrong system solution faster. We agree with the comment that the program office initiated a productivity study in the summer of 2000. As we state in our report, this study was undertaken in response to DOD Inspector General findings that raised questions about user acceptance of the system. However, we do not agree that this study will substantiate the SPS benefit estimates and quantitatively document the benefits of SPS implementation through 2000 because the study’s scope and methodology are limited. For example: According to the program official responsible for the study, the purpose of the study is to estimate expected benefits to be realized in fiscal year 2003, from implementation of version 4.1. The sample selected was not statistically valid, meaning that the results are not projectable to the population as a whole. Relative to the other services, the Air Force was not proportionally represented in the study, meaning that any results would not necessarily be reflective of Air Force sites. The study was based on the 1997 economic analysis instead of the more current 2000 economic analysis despite key differences between the two analyses. For example, the 1997 analysis shows 22 benefits valued at approximately $1.8 billion over the program’s 10-year life cycle, while the 2000 analysis contains only 19 benefits valued at approximately $1.4 billion. According to SPS program officials, the survey instrument was not rigorously pre-tested. Such pre-testing is important because it ensures that the survey (1) actually communicates what it was intended to communicate, (2) is standardized and will be uniformly interpreted by the target population, and (3) will be free of design flaws that could lead to inaccurate answers. The information being gathered does not map to the 22 benefit types listed in the 1997 SPS economic analysis. Instead, the study is collecting subjective judgments that are not based on predefined performance metrics for SPS capabilities and impacts. Thus, DOD is not measuring SPS against the benefits that it promised SPS would provide. In addition, the senior official responsible for SPS implementation in the Air Force stated that the Air Force plans to conduct its own, separate survey to determine whether the system is delivering business value, indicating component uneasiness about the reliability of the SPS program office’s study. 6. We disagree. 
6. We disagree. As we state in the report, incremental investment management practices are not only a best practice, but are also required by the Clinger-Cohen Act of 1996 and specified in OMB guidance and recently revised DOD acquisition policy. Therefore, DOD's comments regarding incremental investment in SPS are at odds with contemporary practices and operative federal requirements and guidance. Additionally, the economic analysis that DOD's comments refer to is not reliable for a number of reasons that are discussed in our report. Specifically, this analysis treats SPS as a single, monolithic system investment. Experience has shown that such an all-or-nothing economic justification is too imprecise to use in making informed decisions on large investments that span many years. This kind of approach to justifying investment decisions has historically resulted in agencies investing huge sums of money in systems that do not provide commensurate benefits, and thus has been abandoned by successful organizations. Further, the need to avoid this pitfall was a major impetus for the Clinger-Cohen Act investment management reforms. DOD's comments also promote continued spending on SPS without sufficient awareness of progress against meaningful commitments, such as reliable data measuring and validating that return-on-investment projections are being met. In lieu of such measures, DOD's comments emphasize standardization and fast deployment as core commitments. However, neither of these is an end in and of itself. Unless SPS provides DOD with the capability to perform procurement and contracting functions better and/or cheaper, and does so to a degree that makes SPS a more attractive investment relative to the department's other investment options, DOD is not justified in investing further in SPS. As our report demonstrates, and as discussed in comments 2 and 3 above, DOD presently does not have the kind of reliable information it needs to know whether this investment is justified, and the information that is available raises serious questions about SPS' acceptance by its user community and its business value. With regard to the timely fielding of SPS, we note in our report that the program has already been delayed 3-1/2 years. In fact, delivery of version 4.1 of the software was 22 months overdue, and version 4.2 is already 5 months behind. While the impact of schedule delays and cost increases is a valid concern on any project, these factors are not the sole criteria. Introducing the wrong system solution faster and cheaper is still introducing the wrong solution, no matter how it is presented. It is thus critically important that investment decisions be based on an integrated understanding of cost, benefit, and risk. 7. We do not dispute that the cited events have occurred, although we would add for additional context that we met with Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (C3I) officials on March 15, 2001, the day before the memorandum requesting the first program review, to share our concerns and seek clarification, and that we provided our draft report to DOD for comment on May 25, 2001. We do not agree with DOD's comment that it is not necessary to have the Secretary of Defense direct the Assistant Secretary of Defense for C3I (DOD CIO) to determine the current state of SPS progress against commitments and to ensure that this information is used in future investment decisions, for several reasons.
First, the recent reviews cited in the DOD comments were for the Defense Contract Management Agency, which is the Component Acquisition Executive, and the Office of the Director of Defense Procurement, which is the SPS functional sponsor. Neither of these entities is the DOD CIO, who is the designated decision authority for SPS milestones and thus under SPS’ management structure has ultimate accountability for SPS. Second, the recent reviews cited in DOD’s comments did not satisfy our recommendation for determining the current state of progress against the SPS commitments described in our report. In fact, we attended the April 27, 2001, review meeting, during which the senior attending official from the Defense Contract Management Agency stated that information being provided at this meeting was insufficient from a program management standpoint, lacking key information needed for informed SPS decision-making. Third, the March 16, 2001, memorandum cited in DOD’s comments acknowledges the need to update SPS’ economic justification in light of the program’s cost and schedule changes and to ensure compliance with Clinger-Cohen Act requirements. Fourth, the SPS program manager’s planned actions to respond to recent reviews are not sufficient to address the uncertainties surrounding SPS. According to the program manager, the acquisition program baseline would be updated to reflect the most recent program costs and expected schedule for full operational capability, but the program office had not planned any other actions. Last, DOD’s comment stating that the Office of the DOD CIO and the Office of the Director of Defense Procurement plan to conduct an independent review of SPS within the next 180 days does not satisfy our recommendation because (1) DOD’s schedule for SPS calls for issuing contract task orders for subsequent SPS releases during this 6-month period and (2) this commitment is only a vague statement to “plan to conduct” a review at some undetermined, potentially distant, future point in time rather than having a review scheduled to occur in time to effect meaningful investment management improvements. In light of DOD’s comments regarding this recommendation and for the reasons discussed above, we have modified our recommendation to specify that the recommended determination of the state of progress should occur immediately and should address each of the program commitments discussed in this report. 8. We acknowledge DOD’s agreement with the recommendation, but note that neither our recommendation nor DOD’s comment specifies when this report would be prepared. Accordingly, we have modified our recommendation to include a timeframe for reporting to the Secretary of Defense and relevant congressional committees on lessons learned and actions to prevent recurrence of those SPS experiences on other system acquisition programs. Additionally, we disagree with DOD’s comments about the findings and conclusions in our report. In our view, the totality of evidence presented in our report, along with the results of prior Defense Inspector General reviews, supports our conclusion that SPS is a lesson in how not to justify, make, and measure implementation of investment decisions. Also, as addressed in comments 2 and 3, we do not agree with DOD’s point that SPS has been justified by the 1997 and 2000 economic analyses. 
Last, we do not agree with DOD's comments that we incorrectly calculated a negative return on investment for SPS and that our methodology for calculating net present value is incorrect. To calculate net present value, we used current OMB guidance, which requires that relevant life-cycle cost estimates be used. Additionally, we used DOD's own life-cycle cost estimates from its economic analyses. While we acknowledge that SPS officials told us that these life-cycle cost estimates included the costs of operating legacy procurement systems, we also requested that these officials identify what these legacy system costs are so that we could back them out. However, SPS officials told us that they did not know the amount of these costs. As a result, our calculation is based on the best information that the SPS program office had available and could provide. 9. See comments 3 and 8. Also, we agree that applying our net-present-value calculation methodology to the SPS and status quo cost-and-benefit data provided in the January 2000 economic analysis shows that SPS is cheaper than the status quo option. However, this calculation also shows that SPS as planned is not cost beneficial. Also, DOD's comments compare only a small portion of SPS life-cycle costs (program office investment costs) against the difference between expected benefits under the SPS scenario and the status quo scenario. This comparison is illogical because it assumes that an arbitrary part of relevant investment costs can be associated with the total benefit difference between alternatives. Accordingly, we do not agree with DOD's comment. While Appendix E of the January 2000 economic analysis contained some of the information provided in the tables contained in DOD's comments, it did not provide a net present value calculation. Further, the Appendix E tables were not included in the economic analysis' executive summary. Instead, the summary provided a benefits-to-costs ratio that excluded certain relevant costs.
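The distinction drawn in comments 8 and 9 between "cheaper than the status quo" and "cost beneficial" can be illustrated with a small worked example. The sketch below is hypothetical: the discount rate, life-cycle length, and cash flows are invented assumptions chosen only to reproduce the pattern described above, not figures from DOD's economic analyses.

```python
# Hypothetical illustration of the net-present-value comparison in
# comments 8 and 9. All figures are invented, not DOD estimates.

def npv(annual_flows, rate):
    """Present value of a stream of year-end cash flows (in $ millions)."""
    return sum(flow / (1 + rate) ** year
               for year, flow in enumerate(annual_flows, start=1))

RATE = 0.05   # illustrative real discount rate
YEARS = 10    # illustrative life cycle

sps_costs    = [200] * YEARS   # acquire, operate, and maintain SPS
sps_benefits = [140] * YEARS
quo_costs    = [198] * YEARS   # operate and maintain legacy systems
quo_benefits = [90] * YEARS

sps_net = npv(sps_benefits, RATE) - npv(sps_costs, RATE)
quo_net = npv(quo_benefits, RATE) - npv(quo_costs, RATE)

print(f"SPS net present value:         {sps_net:8.1f}M")   # negative
print(f"Status quo net present value:  {quo_net:8.1f}M")   # more negative
print(f"SPS advantage over status quo: {sps_net - quo_net:8.1f}M")  # positive
```

In this example the alternative beats the status quo by several hundred million dollars in net present value, yet its own estimated costs still exceed its own expected benefits; comparing only a slice of investment costs against the full benefit difference between alternatives, as DOD's comments do, obscures exactly this second test.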
In addition to the person named above, Nabajyoti Barkakati, Harold J. Brumm, Jr., Sharon O. Byrd, James M. Fields, Sophia Harrison, James C. Houtz, Richard B. Hung, Barbarol J. James, and Catherine H. Schweitzer made key contributions to this report.

Standard Procurement System Use and User Satisfaction, Office of the Inspector General, Department of Defense (Report No. D-2001-075, March 13, 2001).
Defense Management: Actions Needed to Sustain Reform Initiatives and Achieve Greater Results (GAO/NSIAD-00-72, July 25, 2000).
Department of Defense: Implications of Financial Management Issues (GAO/T-AIMD/NSIAD-00-264, July 20, 2000).
Defense Management: Electronic Commerce Implementation Strategy Can Be Improved (GAO/NSIAD-00-108, July 18, 2000).
Initial Implementation of the Standard Procurement System, Office of the Inspector General, Department of Defense (Report No. 99-166, May 26, 1999).
Financial Management: Seven DOD Initiatives That Affect the Contract Payment Process (GAO/AIMD-98-40, January 30, 1998).
Allegations to the Defense Hotline Concerning the Standard Procurement System, Office of the Inspector General, Department of Defense (Report No. 96-219, September 5, 1996).

| This report reviews the Department of Defense's (DOD) ability to contract for goods and services by acquiring and implementing a standard procurement system (SPS). DOD's management of SPS is a lesson in how not to justify, make, and monitor the implementation of information technology investment decisions. Specifically, DOD has not (1) ensured that accountability and responsibility for measuring progress against commitments are clearly understood, performed, and reported; (2) demonstrated, on the basis of reliable data and credible analysis, that the proposed system solution will produce economic benefits commensurate with costs; (3) used data on progress against project cost, schedule, and performance commitments throughout a project's life cycle to make investment decisions; and (4) divided this large project into a series of incremental investment decisions to spread the risks over smaller, more manageable components. Because it has yet to effectively apply any of these basic tenets of effective investment management to SPS, DOD lacks the basic information needed to make informed decisions on how to proceed with the project. Nevertheless, DOD continues to push forward in acquiring and deploying additional versions of SPS. Continuing this approach involves considerable risk. GAO summarized this report in testimony before Congress; see DOD's Standard Procurement System: Continued Investment Has Yet to Be Justified, by Joel C. Willemssen, Managing Director for Information Technology Issues, before the Subcommittee on National Security, Veterans Affairs, and International Relations, House Committee on Government Reform. GAO-02-392T, Feb. 3 (13 pages). |
The United States has assisted the Mexican government in its counternarcotics efforts since 1973, providing about $350 million in aid. Since the late 1980s, U.S. assistance has centered on developing and supporting Mexican law enforcement efforts to stop the flow of cocaine from Colombia, the world's largest supplier, into Mexico and onward to the United States. According to U.S. estimates, Mexican narcotics-trafficking organizations facilitate the movement of between 50 and 60 percent of the almost 300 metric tons of cocaine consumed in the United States annually. In the early 1990s, the predominant means of moving cocaine from Colombia to Mexico was by aircraft. However, a shift to the maritime movement of drugs has occurred over the past few years. In 1998, only two flights were identified as carrying cocaine into Mexico. According to U.S. law enforcement officials, most drugs enter Mexico via ship or small boat through the Yucatan peninsula and Baja California regions. Additionally, there has been an increase in the overland movement of drugs into Mexico, primarily through Guatemala. Since 1996, most U.S. assistance has been provided by the Department of Defense to the Mexican military, which has been given a much larger counternarcotics and law enforcement role. The Department of State's counternarcotics assistance program, on the other hand, has concentrated on supporting the development of specialized law enforcement units, encouraging institutional development, and modernizing and strengthening training programs. Table 1 provides additional information on U.S. counternarcotics assistance to the government of Mexico since 1997. The Foreign Assistance Act of 1961, as amended, requires the President to certify annually that major drug-producing and -transit countries are fully cooperating with the United States in their counternarcotics efforts. As part of this process, the United States established specific objectives for evaluating the performance of these countries. According to State Department officials, as part of the March 1999 certification decision, the United States will essentially use the same objectives it used for evaluating Mexico's counternarcotics cooperation in March 1998. These include (1) reducing the flow of drugs into the United States, (2) disrupting and dismantling narcotrafficking organizations, (3) bringing fugitives to justice, (4) making progress in criminal justice and anticorruption reform, (5) improving money-laundering and chemical diversion control, and (6) continuing improvement in cooperation with the United States. Although there have been some difficulties, the United States and Mexico have taken steps to enhance cooperation in combating illegal drug activities. Mexico has also taken actions to enhance its counternarcotics efforts and improve law enforcement capabilities. There have been some positive results from the new initiatives, such as the arrest of two major drug traffickers and the implementation of the currency and suspicious transaction reporting requirements. Overall, however, the results show that:

- drugs are still flowing across the border at about the same rate as in 1997;
- there have been no significant increases in drug eradication, and no major drug trafficker has been extradited to the United States;
- money-laundering prosecutions and convictions have been minimal;
- corruption remains a major impediment to Mexican counternarcotics efforts; and
- most drug trafficking leaders continue to operate with impunity.
The United States and Mexico have cooperated in the development of a binational counternarcotics drug strategy, which was released in February 1998. This strategy contains 16 general objectives, such as reducing the production and distribution of illegal drugs in both countries and focusing law enforcement efforts against criminal organizations. Since the issuance of the binational strategy, a number of joint working groups, made up of U.S. and Mexican government officials, have been formed to address matters of mutual concern. A primary function of several of these working groups was to develop quantifiable performance measures and milestones for assessing progress toward achieving the objectives of the strategy. The performance measures were released during President Clinton's February 15, 1999, visit to Mexico. A binational law enforcement plenary group was also established to facilitate the exchange of antidrug information. Despite these cooperative efforts, information exchange remains a concern for both governments because some intelligence and law enforcement information is not shared in a timely manner, which impedes operations against drug traffickers. Operation Casablanca created tensions in relations between the two countries because information on this undercover operation was not shared with Mexican officials. In the aftermath of Operation Casablanca, the United States and Mexico have taken action to strengthen communications between the two countries. An agreement reached by the U.S. and Mexican Attorneys General (commonly referred to as the "Brownsville Letter") calls for (1) greater information-sharing on law enforcement activities; (2) providing advance notice of major or sensitive cross-border activities of law enforcement agencies; and (3) developing training programs addressing the legal systems and investigative techniques of both countries. Data for 1998 show that Mexico has, for the most part, not significantly increased its eradication of illegal drug crops or its seizures of illegal drugs since 1995. While Mexico did increase its eradication of opium poppy, eradication of other crops and seizures have remained relatively constant. Cocaine seizures in 1998 were about one-third lower than in 1997. However, the large seizure amount in 1997 was attributable, in part, to two large cocaine seizures that year. Last year I testified that the government of Mexico took a number of executive and legislative actions, including initiating several anticorruption measures, instituting extradition efforts, and passing various laws to address illegal drug-related activities. I also said that it was too early to determine their impact and that challenges to their full implementation remained. While some progress has been made, implementation challenges remain. I testified last year that corruption was pervasive and entrenched within the justice system—that has not changed. According to U.S. and Mexican law enforcement officials, corruption remains one of the major impediments affecting Mexican counternarcotics efforts. These officials also stated that most drug-trafficking organizations operate with impunity in parts of Mexico. Mexican traffickers use their vast wealth to corrupt public officials and law enforcement and military personnel, as well as to inject their influence into the political sector.
For example, it is estimated that the Arellano-Felix organization pays $1 million per week to Mexican federal, state, and local officials to ensure the continued flow of drugs to gateway cities along Mexico's northwest border with the United States. A recent report by the Attorney General's Office of Mexico recognized that one basic problem in the fight against drug trafficking has been "internal corruption in the ranks of the federal judicial police and other public servants of the Attorney General's Office." As we reported last year, the President of Mexico publicly acknowledged that corruption is deeply rooted in the nation's institutions and general social conduct, and he began to initiate reforms within the law enforcement community. These include (1) reorganizing the Attorney General's office and replacing the previously discredited drug control office with the Special Prosecutor's Office for Crimes Against Health; (2) firing or arresting corrupt or incompetent law enforcement officials; (3) establishing a screening process to filter out corrupt law enforcement personnel; and (4) establishing special units within the military, the Attorney General's Office, and the Secretariat of Hacienda—the Organized Crime Unit, the Bilateral Task Forces, and Hacienda's Financial Analysis Unit—to investigate and dismantle drug-trafficking organizations in Mexico and along the U.S.-Mexico border and investigate money-laundering activities. Additionally, the President expanded the counternarcotics role of the military. The Organized Crime Unit and the Bilateral Task Force were involved in several counternarcotics operations in 1998, for example, the capture of two major narcotics traffickers and the recent seizure of properties belonging to alleged drug traffickers in the Cancun area, as well as the seizure of money, drugs, and precursor chemicals at the Mexico City Airport. However, many issues still need to be resolved—some of them the same as we reported last year. For example:

- there continues to be a shortage of Bilateral Task Force field agents, as well as inadequate Mexican government funding for equipment, fuel, and salary supplements for the agents (last year the Drug Enforcement Administration provided almost $460,000 to the Bilateral Task Forces to overcome this lack of support);
- the Organized Crime Unit remains significantly short of fully screened personnel;
- there have been instances of inadequate coordination and communications between Mexican law enforcement agencies; and
- Mexico continues to face difficulty building competent law enforcement institutions because of low salaries and the lack of job security.

Additionally, increasing the involvement of the Mexican military in law enforcement activities and establishing screening procedures have not been a panacea for the corruption issues facing Mexico. A number of senior Mexican military officers have been charged with cooperating with narcotics traffickers. One of the most notable of these was General Jesus Gutierrez Rebollo, former head of the National Institute for Combat Against Drugs—the Mexican equivalent of the U.S. Drug Enforcement Administration. In addition, as we reported last year, some law enforcement officials who had passed the screening process had been arrested for illegal drug-related activities. In September 1998, four of the Organized Crime Unit's top officials, including the Unit's deputy director, were re-screened and failed. Two are still employed by the Organized Crime Unit, one resigned, and one was transferred overseas.
Since my testimony last year, no major Mexican national drug trafficker has been surrendered to the United States. In November 1998, the government of Mexico did surrender to the United States a Mexican national charged with murdering a U.S. Border Patrol officer while having about 40 pounds of marijuana in his possession. However, U.S. and Mexican officials agree that this extradition involved a low-level trafficker who, unlike other traffickers, failed to use legal mechanisms to slow or stop the extradition process. According to the Justice Department, Mexico has approved the extradition of eight other Mexican nationals charged with drug-related offenses. They are currently serving criminal sentences, pursuing appeals, or being prosecuted in Mexico. U.S. and Mexican officials expressed concern that two recent judicial decisions halting the extradition of two major traffickers represented a setback for efforts to extradite Mexican nationals. The U.S. officials stated that intermediate courts had held that Mexican nationals cannot be extradited if they are subject to prosecution in Mexico. U.S. officials believe that these judicial decisions could have serious consequences for the bilateral extradition relationship between the two countries. In November 1997, the United States and Mexico signed a temporary extradition protocol. The protocol would allow suspected criminals who are serving sentences in one country and are charged in the other to be temporarily surrendered for trial while evidence is current and witnesses are available. To become effective, the protocol required approval by the congresses of both countries. The U.S. Senate approved the protocol in October 1998; however, the protocol has not yet been approved by the Mexican congress. According to U.S. and Mexican officials, the 1996 organized crime law has not been fully implemented, and its impact is not likely to be fully evident for some time. According to U.S. law enforcement officials, Mexico has made some use of the plea bargaining and wiretapping provisions of the law. However, U.S. and Mexican law enforcement officials pointed to judicial corruption as slowing the use of the wiretapping provision and have suggested the creation of a corps of screened judges, who would be provided with extra money, security, and special arrangements to hear cases without fear of reprisals. Additionally, the results of Mexico's newly created witness protection program are not encouraging: two of the six witnesses in the program have been killed. U.S. and Mexican officials continue to believe that more effort needs to be directed toward the development of a cadre of competent and trustworthy judges and prosecutors that law enforcement organizations can rely on to effectively carry out the provisions of the organized crime law. U.S. agencies continue to provide assistance in this area. Mexico has begun to successfully implement the currency and suspicious transaction reporting requirements, resulting in what U.S. law enforcement officials described as a flood of currency and suspicious transaction reports. Mexican officials also indicated that Operation Casablanca resulted in a greater effort by Mexican banks to adhere to anti-money-laundering regulations. However, U.S. officials remain concerned that there is no requirement to obtain and retain account holders' information for transactions below the $10,000 level.
No data are available on how serious this problem is, and there are no reliable data on the magnitude of the money-laundering problem. Between May 1996 and November 1998, the Mexican government issued 35 indictments and/or complaints on money-laundering charges; however, only one case has resulted in a successful prosecution. The remaining 34 cases are still under investigation or have been dismissed. Last year we reported that the new chemical control law was not fully implemented due to the lack of an administrative infrastructure for enforcing its provisions. This is still the case. Mexico is currently in the process of developing this infrastructure, as well as the guidelines necessary to implement the law. However, U.S. officials remain concerned that the law does not cover the importation of finished products, such as over-the-counter drugs that could be used to make methamphetamines. Over the past year, Mexico has announced a new drug strategy and instituted a number of new counternarcotics initiatives. The government of Mexico also reported that it has channeled significant funds—$754 million during 1998—into its ongoing campaign against drug trafficking. Mexico also indicated that it will earmark about $770 million for its 1999 counternarcotics campaign. During 1998 and 1999, the government of Mexico announced a number of new initiatives. For example:

- a federal law for the administration of seized, forfeited, and abandoned goods, which would allow authorities to use proceeds and instruments seized from criminal organizations for the benefit of law enforcement, is being considered;
- a federal law that would establish expedited procedures to terminate corrupt law enforcement personnel is also being considered; and
- the government of Mexico recently announced the creation of a new national police force.

In addition, the government of Mexico has initiated an operation to seal three strategic points in Mexico. The purpose of the program is to prevent the entry of narcotics and the diversion of precursor chemicals in the Yucatan peninsula, Mexico's southern border, and the Gulf of California. Furthermore, the Mexican government recently announced a counternarcotics strategy to crack down on drug traffickers. Mexico indicated that it plans to spend between $400 million and $500 million over the next 3 years to buy new planes, ships, radar, and other military and law enforcement equipment. In addition to the new spending, Mexico reported that its new antidrug efforts will focus on improving coordination among law enforcement agencies and combating corruption more efficiently. A senior Mexican government official termed this new initiative a "total war against the scourge of drugs." Last year we noted that while U.S.-provided assistance had enhanced the counternarcotics capabilities of Mexican law enforcement and military organizations, the effectiveness and usefulness of some assistance were limited. For example, two Knox-class frigates purchased by the government of Mexico lacked the equipment needed to ensure the safety of the crew, thus making the ships inoperative. We also reported that the 73 UH-1H helicopters provided to Mexico to improve the interdiction capability of Mexican army units were of little utility above 5,000 feet, where significant drug-related activities and cultivation occur.
In addition, we noted that four C-26 aircraft were provided to Mexico without the capability to perform intended surveillance missions and without planning for payment for the operation and maintenance of the aircraft. Mr. Chairman, let me bring you up to date on these issues. The two Knox-class frigates have been repaired and are in operation. According to U.S. embassy officials, the government of Mexico is considering the purchase of two additional frigates. However, other problems remain. For example, in late March 1998, the U.S. Army grounded its entire UH-1H fleet until gears within the UH-1H engines could be examined and repairs could be made. The government of Mexico followed suit and grounded all of the U.S.-provided UH-1H helicopters until they could be examined. The helicopters were subsequently tested, with 13 of the Attorney General's 27 helicopters and 40 of the military's 72 helicopters receiving passing grades. According to Department of Defense officials, the helicopters that passed the engine tests could be flown on a restricted basis. U.S. embassy officials told us that the Office of the Attorney General has been flying its UH-1H helicopters on a restricted basis, but the Mexican military has decided to keep its entire fleet grounded until all are repaired. Finally, the four C-26 aircraft still are not being used for counternarcotics operations. This concludes my prepared remarks. I would be happy to respond to any questions you may have. | Pursuant to a congressional request, GAO discussed the counternarcotics efforts of the United States and Mexico, focusing on: (1) Mexico's efforts in addressing the drug threat; and (2) the status of U.S. counternarcotics assistance provided to Mexico.
GAO noted that: (1) while some high-profile law enforcement actions were taken in 1998, major challenges remain; (2) new laws passed to address organized crime, money laundering, and the diversion of chemicals used in narcotics manufacturing have not been fully implemented; (3) moreover, during 1998, opium poppy eradication and drug seizures remained at about the same level as in 1995; (4) in addition, no major Mexican drug trafficker was surrendered to the United States on drug charges; (5) Mexican government counternarcotics activities in 1998 have not been without positive results; (6) one of its major accomplishments was the arrest of two major drug traffickers commonly known as the Kings of Methamphetamine; (7) although all drug-related charges against the two have been dropped, both are still in jail and being held on extradition warrants; (8) the Mexican foreign ministry has approved the extradition of one of the traffickers to the United States, but he has appealed the decision; (9) in addition, during 1998 the Organized Crime Unit of the Attorney General's Office conducted a major operation in the Cancun area where four hotels and other large properties allegedly belonging to drug traffickers associated with the Juarez trafficking organization were seized; (10) Mexico also implemented its currency and suspicious reporting requirements; (11) the Mexican government has proposed or undertaken a number of new initiatives; (12) it has initiated an effort to prevent illegal drugs from entering Mexico, announced a new counternarcotics strategy and the creation of a national police force; (13) one of the major impediments to U.S. and Mexican counternarcotics objectives is Mexican government corruption; (14) recognizing the impact of corruption on law enforcement agencies, the President of Mexico: (a) expanded the role of the military in counternarcotics activities; and (b) introduced a screening process for personnel working in certain law enforcement activities; (15) since these initiatives, a number of senior military and screened personnel were found to be either involved in or suspected of drug-related activities; (16) since 1997, the Departments of State and Defense have provided Mexico with over $92 million worth of equipment, training, and aviation spare parts for counternarcotics purposes; and (17) the major assistance included UH-1H helicopters, C-26 aircraft, and two Knox-class frigates purchased by the government of Mexico through the foreign military sales program. |
Agricultural trade can be classified into two categories—bulk commodities and high-value products. Bulk commodities are raw agricultural products that have little value added after they leave the farm gate. High-value products, by contrast, either require special care in packing and shipping or have been subjected to processing. High-value products constitute the fastest growing component of the world’s agricultural trade. By 1998, they are expected to represent 75 percent of world agricultural trade, according to FAS. The United States’ greatest strength in agricultural exports has traditionally been in bulk commodities, and it has consistently operated as the world’s largest exporter of them. However, the member nations of the European Union (EU) constitute the world’s largest exporter of high-value agricultural products (see app. I for a list of the 12 top exporters of high-value products in 1992). Because purchasing decisions for bulk commodities are based largely on price, success in exporting them depends primarily on maintaining a cost advantage in their production and transport. Because HVP purchasing decisions depend on product attributes, such as brand-name packaging and quality image, in addition to price, success in the export of HVPs is based more on the exporter’s skill in developing and marketing the product. Exporting countries have a variety of programs and organizations to assist exporters in developing markets for high-value products. While the recent multilateral trade agreement of the Uruguay Round (UR) of the General Agreement on Tariffs and Trade (GATT) would limit the extent to which countries could provide subsidies to the agricultural sector, it would not limit the extent to which countries could fund market development activities. As the UR agreement reduces export subsidies, market development efforts may become a more important component in increasing agricultural exports. To obtain information to meet our objectives, we conducted telephone interviews and met in the United States with officials of foreign marketing organizations and the embassies of the four European countries we reviewed. We also analyzed reports by, and conducted telephone interviews with, FAS attachés posted in the four countries. To learn about the activities of the United States, we met with representatives of USDA’s FAS and Economic Research Service (ERS) in Washington, D.C., and conducted telephone interviews with representatives of regional trade associations. Appendix V contains a more detailed description of our objectives, scope, and methodology. We did our work between February and August 1994 in accordance with generally accepted government auditing standards. We obtained oral agency comments from FAS. These comments are discussed at the end of this letter. The structure for foreign market development of HVPs is fundamentally different in the United States than in three of the four European countries we reviewed. France, Germany, and the United Kingdom each rely primarily on a centralized marketing organization to promote their agricultural exports. The organizations are funded either entirely through user fees and levies on private industry, as with Germany, or through a combination of private and public funds, as with France and the United Kingdom. Both public and private sector representatives play a role in managing the marketing organizations. 
They conduct a number of different types of promotions, provide an array of services to exporters, and promote nearly all high-value products and commodities. The Netherlands does not have a single primary market development organization but rather a number of independent commodity boards and trade associations. These boards and associations, in coordination with the government, do most of that country’s foreign market development. (See app. II for a more detailed description of foreign market development by these four countries.) In France, the Société pour l’Expansion des Ventes des Produits Agricoles et Alimentaires (SOPEXA) is responsible for foreign market development. Jointly owned by the French government and private trade organizations, SOPEXA promotes French food and wine in about 23 foreign countries. The Ministry of Agriculture has ultimate control over SOPEXA and sits on its board of directors, but French officials said the Ministry has minimal influence over SOPEXA’s day-to-day operations and activities. In addition to SOPEXA, France has a quasi-government agency, the Centre Français du Commerce Extérieur (CFCE), that assists exporters of industrial and agricultural products by doing market research and providing foreign market information. Like France, Germany promotes most of its HVP exports through a quasi-governmental agency, the Centrale Marketinggesellschaft der deutschen Agrarwirtschaft (CMA). CMA maintains offices in eight foreign countries and generically promotes most German food and agricultural products. CMA is run by representatives of the German food industry and is guided by a council composed of both industry and government representatives. The wine and forestry industries have their own marketing boards, which also do foreign market development. Most HVP foreign market development in the United Kingdom is undertaken by Food From Britain, an organization created by the British government to centralize and coordinate agricultural marketing activities. It is controlled by a council appointed by the Ministry of Agriculture, Fisheries and Food and has offices in seven foreign countries. The Meat and Livestock Commission also conducts foreign market development activities of its own. In the Netherlands, several independent commodity boards and trade associations, which operate without government control, administer most activities for HVP foreign market development. The Ministry of Agriculture, Nature Management and Fisheries helps coordinate the promotional activities of the commodity boards and trade associations and also conducts some foreign market development activities of its own. In the United States, not-for-profit trade associations have primary responsibility for conducting their own marketing activities in foreign countries. USDA provides funding to support their export activities through its Market Promotion Program (MPP) and the Foreign Market Development Program, also known as the Cooperator Program. MPP provides money to the trade associations to conduct generic promotions or to fund private companies’ brand-name promotions. MPP activities are predominantly for high-value products. The Cooperator Program provides financial and technical support to U.S. cooperators, representing about 40 specific commodity sectors, who work at overseas offices to increase long-term access to and demand for U.S. products. The program is mostly aimed at promoting bulk commodities, but a portion of the program’s budget supports HVP market development (see app. 
III for a more detailed discussion of U.S. foreign market development). USDA's Foreign Agricultural Service administers these programs and provides funding, but the individual trade associations themselves are generally responsible for carrying out the export activities. FAS conducts some promotional activities of its own and provides some services to exporters through its AgExport Services Division and its foreign attaché service. Although the Europeans, according to FAS, provide greater total support for agriculture in general, the four European countries we reviewed spent less in 1993 on foreign market development than did the United States, both in absolute terms and in proportion to their HVP exports. The total spending in 1993 on HVP market development in the four competitor countries varied considerably, from about $13 million for the United Kingdom to about $76 million for France, based on estimates by FAS and information provided by the foreign marketing organizations. The United States, by comparison, spent about $151 million in 1993 on generic or nationally oriented foreign market development for high-value products, mostly through the Market Promotion Program. Available information shows that the United States spent more than the four European countries, not just in terms of absolute dollars, but also as a percentage of HVP exports. While the United States spent about $65 in 1993 on foreign market development for every $10,000 in HVP exports, France spent about $30, the Netherlands about $21, Germany about $19, and the United Kingdom about $15 (see table 1). Because so many factors influence a country's export levels, these figures alone are not sufficient to make judgments about the effectiveness of the countries' foreign market development programs.
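As a rough illustration of how the table 1 metric works, the sketch below recomputes spending per $10,000 of HVP exports. The spending figures are taken from this report; the export values are back-calculated assumptions chosen to reproduce the stated ratios, not reported trade data.

```python
# Recomputing "market development spending per $10,000 of HVP exports."
# Spending figures are from the report; the export values are
# back-calculated assumptions, not reported trade statistics.

spending_millions = {   # 1993 HVP market development spending
    "United States": 151, "France": 76, "Netherlands": 64,
    "Germany": 39, "United Kingdom": 13,
}
exports_billions = {    # assumed 1993 HVP exports (illustrative only)
    "United States": 23.2, "France": 25.3, "Netherlands": 30.5,
    "Germany": 20.4, "United Kingdom": 8.5,
}

for country, spend in spending_millions.items():
    per_10k = spend * 1e6 / (exports_billions[country] * 1e9) * 10_000
    print(f"{country:14s} ~${per_10k:3.0f} per $10,000 of HVP exports")
```

Run against these assumed export values, the ratios come out to about $65, $30, $21, $19, and $15, matching the figures cited above.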
The four European countries we reviewed relied largely on private funds, rather than government expenditures, in 1993 for their HVP market development. The European marketing organizations that promoted high-value products included various types of public-private partnerships. In all cases, however, the organizations were financed, at least in part, either through user fees or a system of mandatory levies on the agricultural industry. The sectors of agribusiness that paid the levies varied by country. They typically included producers but also sometimes included processors, wholesalers, or traders. The annual government expenditures for foreign market development ranged from zero to $29 million in 1993 in the four European countries we reviewed, according to estimates by FAS and information provided by the foreign marketing organizations. The portion of the country's total foreign market development that was funded by government expenditures ranged from zero percent to 42 percent. By contrast, the U.S. government spent about $121 million on HVP foreign market development in 1993, representing about 80 percent of all U.S. spending on foreign market development for HVPs. In France, about 38 percent of total foreign market development for agriculture was funded by government expenditures in 1993. About 35 percent of the 1993 budget of SOPEXA, the export promotion agency, was provided by the government; the remainder came from producers or producer groups who benefited from SOPEXA's promotions and who collected funds from producer levies. Government expenditures also funded 65 percent of CFCE, the market information agency, with the remainder coming from user fees. In Germany, CMA, the quasi-governmental export promotion agency, did not receive public funds in 1993. For many years, the agency has been financed entirely through compulsory levies on agricultural producers and processors. In the United Kingdom, about 42 percent of total foreign market development for HVPs was paid for by public funds. Food From Britain received about 60 percent of its funding in 1993 from government expenditures, with the rest coming from commodity marketing boards and user fees from individual exporters who requested services. The Meat and Livestock Commission, which also does export promotion of its own, received about 12 percent of its budget from government expenditures. In the Netherlands, more than 90 percent of foreign market development expenditures in 1993 were made by commodity boards and trade associations, which raised money through levies on producers and traders. The remaining market development activity was conducted by the Netherlands' Ministry of Agriculture, Nature Management and Fisheries. In the United States, government expenditures funded an estimated 80 percent of total HVP foreign market development in 1993. FAS paid 81 percent of the cost of HVP activities sponsored under the Market Promotion Program, while the trade organizations sponsoring the activities contributed the remainder. FAS also contributed 73 percent of the cost of HVP activities for the Cooperator Program. In addition, FAS funded about 62 percent of the $6.1 million in activities sponsored by its AgExport Services Division, which assists in HVP foreign market development. (See app. IV for information about the five countries' marketing organizations and estimates of their expenditures.) Foreign market development is only one of many factors that influence a country's success in exporting HVPs. For example, the government expenditures previously cited include spending on foreign market development activities, such as market research and consumer promotion, but do not include spending on other kinds of agricultural support and export programs, such as direct export subsidies, domestic subsidies, and price supports. These programs also serve, directly or indirectly, to increase HVP exports, and spending for such programs is estimated by FAS to be far higher in Europe than it is in the United States. According to FAS, total agricultural support spending in 1992 was $46.7 billion in the European Union, compared with $10.9 billion in the United States. Furthermore, the bulk of agricultural exports of the four European countries we reviewed went to other European Union members. For several reasons, an EU producer is likely to have an easier time exporting to another EU country than a U.S. producer would. The EU's Common Agricultural Policy has created a unified set of trade regulations and eliminated among members most tariff and nontariff trade barriers, making trade between EU members somewhat comparable to U.S. interstate commerce. European producers are also more likely to be familiar with the consumer preferences, customs, and distribution systems of other European countries. Moreover, because of the vast domestic market in the United States, U.S. producers may be less likely to seek out export markets than European producers, who have smaller domestic markets and often have a long history of exporting a substantial portion of their production. The U.S.
and European marketing organizations we reviewed carry out similar foreign market development activities, though the emphasis they put on the various activities differs. The activities conducted generally included market research, consulting services, trade servicing, consumer promotions, advertising, and sponsorship at trade shows. Market research is often considered the foundation of market development. It is conducted to determine the potential demand for a particular product, to assess consumer preferences, or to develop statistical information on agricultural trade and economics. Consulting services may be offered to provide advice to exporters on appropriate promotions and to help exporters learn about the laws, regulations, and requirements of particular markets. Trade servicing involves developing trade leads to match up exporters with appropriate importers. In addition, some organizations advertise their country’s products in trade journals and other publications in order to support retail promotion strategies and to enhance the image and awareness of their country’s products. Consumer-oriented activities include in-store promotions, where advertising materials and product samples are distributed at point-of-sale locations. These activities may serve either to promote a particular product or to enhance the overall image of a country’s food products. Additionally, some organizations provide retail stores with advertising displays and decorations. Some countries’ marketing organizations also do direct consumer advertising on television, on radio, or in print. Finally, marketing organizations assist their exporters by coordinating or subsidizing their participation in international trade shows. Trade shows allow exporters to test a market, meet potential buyers, and monitor the competition. In general, the U.S. programs place more emphasis on consumer advertising than do the European programs. MPP funds are often used by U.S. companies or producer groups to finance product advertising campaigns, which tend to be an expensive form of market promotion. Representatives of the European marketing organizations generally told us that consumer advertising was too costly, given their limited budgets. They focused more on influencing wholesalers and usually placed a higher priority on trade shows. They attempted to reach consumers more through vehicles such as in-store promotions than through direct media advertising. In our 1990 review of foreign market development organizations, we reported that many other nations integrated their foreign market development activities—coordinating their market research, promotional activities, and production capabilities to meet consumer demand in foreign markets. U.S. producers and producer groups did not coordinate their activities in the same manner, nor did they strategically target markets as did some of their competitors. This may be because European marketing organizations, such as France’s SOPEXA and Germany’s CMA, promote nearly all agricultural products and thus can develop integrated marketing plans for increasing their countries’ HVP exports. The system of foreign market development in the United States is far more decentralized. As we have reported, USDA has been slow to develop a USDA-wide marketing strategy that would assist U.S. producers in becoming more coordinated and marketing oriented in their approach to promoting U.S. exports. 
The European organizations we reviewed perform little formal, quantified evaluation of their HVP promotion efforts. Representatives of foreign market development organizations we contacted all said that quantifying the overall success of foreign market development is extremely difficult because of the large number of variables that affect a country’s exports. Instead, evaluations of foreign market development programs are based more on the subjective observations and judgments of marketing staff and on the satisfaction of producers involved in the promotional efforts. Representatives of the foreign organizations said they do such things as conduct surveys of trade show participants to gauge their satisfaction or measure the number of buyer contacts that result from an advertisement in a trade journal. USDA attempts to measure the effectiveness of activities funded under MPP by evaluating the results of participants’ ongoing activities against measurable goals provided in the participants’ funding proposals. USDA said it is also developing a methodology that would identify activities that have not been effective in expanding or maintaining market share. The methodology would include a statistical analysis that would compare export sales with a participant’s MPP expenditures in both overall and individual markets. In addition, an FAS official told us that an econometric model is under development that would evaluate the effectiveness of MPP participants’ expenditures in increasing U.S. exports. We discussed the information in this report with FAS officials, including the Administrator, on September 9, 1994, and incorporated their comments where appropriate. FAS generally agreed with the report’s findings. FAS emphasized that the UR agreement may lead European governments to increase their funding of foreign market development in the near future. FAS said some European governments may try to shift funds previously spent on export subsidies, which would be restricted under this agreement, to market promotion programs, which would not be directly restricted under the UR agreement. FAS said it will be closely monitoring such spending as the UR agreement goes into effect. We are sending copies of this report to the Secretary of Agriculture and other interested parties. We will make copies available to others upon request. If you have any questions concerning this report, please contact me at (202) 512-4812. The major contributors to this report are listed in appendix VI. Foreign market development organizations are characterized by various organizational and funding structures. The organizations generally consist of some form of public-private partnership funded by some combination of government funds, user fees, and legislated levies on private industry. We reviewed the organizations that do foreign market development in four European countries: (1) France, (2) Germany, (3) the United Kingdom, and (4) the Netherlands. France was the world’s second largest high-value product exporter in 1992, with more than 70 percent of its agricultural exports going to other European Union (EU) countries. Wine, cheese, and meats were among its major HVP exports. France has a very strong food-processing sector and enjoys a reputation for aggressive and well-focused foreign market development. 
The majority of French HVP foreign market development is conducted by the Société pour l’Expansion des Ventes des Produits Agricoles et Alimentaires (SOPEXA), whose mission is the expansion of export markets for French food and wine. SOPEXA is jointly owned by the French government and various agricultural trade organizations, but the government has minimal influence on its day-to-day operations. About 35 percent of SOPEXA’s budget came from the Ministry of Agriculture in 1993; the remainder came from producers or producer groups that benefited from SOPEXA’s promotions and that collect funds from product levies. SOPEXA has offices in about 23 foreign countries. Its foreign market development expenditures in 1993 were about $68.6 million. The Centre Français du Commerce Extérieur (CFCE) is a quasi-government agency that seeks to increase exports by providing statistical information, market studies, and consulting services to French exporters. About 15 percent of its activity relates to food and agricultural exports. CFCE provides its services to both public agencies, such as the Ministry of Agriculture and SOPEXA, and to private exporters, who funded about 35 percent of CFCE’s budget in 1993 through user fees for the services they receive. CFCE spent about $7 million of its budget in 1993 on activities related to food and agriculture. It had about 180 foreign offices, the majority staffed by French commercial attachés. The U.S. Department of Agriculture’s Foreign Agricultural Service (FAS) office in Paris said it expects the French government to continue its strong support for foreign market development through SOPEXA and that there is likely to be an increased emphasis on the promotion of wine, cheese, and other highly processed food items. At the same time, government funding for CFCE is expected to gradually decline as private sector financing of its activities increases. Germany is a sophisticated food processor and was the world’s fourth largest exporter of high-value agricultural products in 1992. Its major HVP exports included milk, cheese, meats, and processed foods. More than two-thirds of its agricultural exports went to other EU countries in 1993. Foreign market development is conducted by the Centrale Marketinggesellschaft der deutschen Agrarwirtschaft (CMA), a quasi-governmental agency that does national generic promotions for most German food and agricultural products. CMA is funded by mandatory legislated levies on agricultural producers and processors, as well as by user fees. It is directed by a supervisory board composed of representatives of industry and government. The board appoints CMA’s top managers. CMA is known for the breadth of its services, which it provides to a broad spectrum of the German agricultural industry, including the producer, processor, retailer, and exporter. Its marketing efforts include not just product promotion but also market research and distribution. CMA represents nearly all agricultural products, with the exception of wine and forest products; these have their own independent marketing boards. In 1993, CMA spent an estimated $32 million on foreign market development. All of its funds came from the private sector through mandatory levies; the government provided no funds for foreign market development of HVPs. In addition, the Wine Marketing Board spent approximately $6.3 million, and the Forestry Marketing Board an estimated $400,000, on foreign market development. The United Kingdom was the world’s ninth largest HVP exporter in 1992. 
Its major high-value product exports included alcoholic beverages and meat, and more than 60 percent of its 1992 agricultural exports went to other EU nations. Promotion of agricultural exports is mostly the responsibility of Food From Britain, a quasi-governmental corporation created in 1983 to centralize and coordinate the United Kingdom’s agricultural marketing efforts. The organization is overseen by a council composed of industry representatives who are appointed by the Minister of Agriculture, Fisheries and Food. Food From Britain has offices in seven foreign countries. Its activities include retail promotions, seminars, media events, and consulting services. In 1993, Food From Britain spent about $7.9 million on foreign market development. About 60 percent of its budget came from a government grant. Most of the rest came from contributions by commodity organizations and from user fees paid by exporters who benefited from Food From Britain’s services. A separate organization, the Meat and Livestock Commission, also conducts foreign market development, spending about $4.6 million on it in 1993. The United Kingdom’s HVP foreign market development spending is small relative to that of the other European countries and the United States. According to the FAS office in London and British officials with whom we spoke, there has been increasing public discussion in the United Kingdom about the need to promote agricultural exports more aggressively. Food From Britain is expected to focus almost exclusively on export promotion, leaving domestic promotional activities to other organizations, according to its U.S. representative. In addition, according to an official from the Ministry of Agriculture, Fisheries and Food, the government is committed to reducing Food From Britain’s reliance on government funding and to having it rely more on private industry funding. At the same time, however, FAS said the British government is considering starting a new program to help fund foreign market development for agricultural products. The Netherlands was the world’s largest exporter of high-value agricultural products in 1992. Its major exports were meats, dairy products, fresh vegetables, and cut flowers. More than 70 percent of its total agricultural exports went to EU countries in 1992. The majority of Dutch HVP foreign market development is conducted through commodity boards or industry trade associations, such as the Dutch Dairy Bureau and the Flower Council of Holland. These organizations are independent of government control and are funded through levies on producers, wholesalers, processors, and traders. The combined export promotion budgets for these organizations in 1993 were estimated at $59.3 million. Most of the promotional activity was targeted at other EU nations. The Dutch Ministry of Agriculture, Nature Management and Fisheries also conducts generic promotional activities, usually through its agricultural attachés who are posted abroad. About 50 percent of the Ministry’s $4.8 million promotion budget in 1993 was used to organize trade exhibitions, while trade advertising and in-store promotions accounted for about 15 percent. Other activities included trade servicing and basic market research. The Ministry and the private commodity organizations work together closely and frequently collaborate in their market development activities.
Officials at the Dutch embassy in Washington, D.C., and Dutch promotion organizations told us that because of budget constraints, the Dutch government is moving toward privatization of agricultural export promotion. The subsidy provided to exhibitors at trade shows has been reduced, and the Ministry has diminished its role in market reporting and trade leads, increasingly turning those functions over to the private trade associations. Most foreign market development of U.S. high-value products is carried out by not-for-profit trade associations. These associations typically promote a single commodity or group of related commodities and are generally financed, at least in part, through producer contributions. The trade associations receive most of their funds for foreign market development from the U.S. government via USDA’s Market Promotion Program (MPP). MPP operates through not-for-profit trade associations that either conduct generic promotions themselves or pass funds along to for-profit companies to conduct brand-name promotions. Promotional activities under MPP include such things as market research, retail promotions, and consumer advertising. In 1993, U.S. producers and trade associations spent about $136.5 million on overseas promotional activities for high-value products sponsored by MPP. The government paid about 81 percent of this cost, or about $111 million, and program participants, who are required to share in the cost of their promotions, paid the rest. In addition, some not-for-profit trade associations conducted foreign market development activities that were independent of MPP. USDA’s Foreign Market Development Program, also known as the Cooperator Program, provides funds to about 40 cooperators representing specific U.S. commodity sectors. These cooperators work overseas to build markets for U.S. agricultural products through such activities as trade servicing, technical assistance, and consumer promotions. The Cooperator Program supports mostly bulk products, but a portion of funds for the program went to promote high-value products in 1993. USDA funding for high-value product market development under the Cooperator Program was about $6 million in 1993. The cooperators contributed an additional $2 million. USDA’s Foreign Agricultural Service has the primary government role in market development and promotion of HVPs. In addition to administering MPP and the Cooperator Program, FAS provides a variety of services to U.S. agricultural exporters. Among these are a database that lists foreign buyers and U.S. suppliers, FAS publications that highlight trade opportunities in export markets, and support or sponsorship of international trade shows. In addition, FAS maintains an overseas network of about 75 attaché posts and agricultural trade offices that seek to increase U.S. agricultural exports through commodity reporting, trade policy work, and market development activities. FAS’ AgExport Services Division provided about $3.8 million in 1993 to these overseas offices to fund such promotional activities as trade shows, trade servicing, consumer promotions, publications, and trade missions. Through user fees, exporters contributed an additional $2.3 million to these activities. 
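As a quick check on the MPP cost-share arithmetic above, the following minimal sketch uses the figures reported here; the rounding mirrors the text.

```python
# Cost-share arithmetic for 1993 MPP-sponsored HVP promotions (figures from this report).
total_spending = 136.5   # million dollars spent on overseas HVP promotions under MPP
government_share = 0.81  # approximate share of the cost paid by the government

government_paid = government_share * total_spending   # ~110.6, reported as about $111 million
participants_paid = total_spending - government_paid  # ~25.9 million, paid by participants

print(f"Government: ~${government_paid:.1f} million; participants: ~${participants_paid:.1f} million")
```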
Our objectives were to obtain information on (1) the organizations in France, Germany, the United Kingdom, and the Netherlands that help develop foreign markets for high-value agricultural products; (2) the programs of the U.S. Department of Agriculture for HVP foreign market development; and (3) the ways in which these five countries’ programs are evaluated to determine their effectiveness in increasing exports. To obtain information on the foreign market development efforts of France, Germany, the United Kingdom, and the Netherlands, we conducted telephone interviews and met in the United States with officials of foreign marketing organizations and the embassies of the four countries. We also analyzed reports by, and conducted telephone interviews with, FAS attachés posted in the four countries. In addition, we conducted a literature search of information related to foreign market development. To learn about the foreign market development activities of the United States, we reviewed relevant FAS documents and legislation and met with FAS representatives in Washington, D.C. In addition, we conducted telephone interviews with representatives of regional trade associations and met with representatives of USDA’s Economic Research Service. Because of the inherent difficulties in determining the effectiveness of market development activities, and because of our limited time frame, we did not evaluate the effectiveness of the European or U.S. market development activities. However, we did discuss with the countries’ program officials in the United States how they evaluated and determined the effectiveness of their programs. We also discussed U.S. efforts to evaluate promotion activities with representatives of FAS and reviewed documents describing their evaluation methodologies. Our review looked only at market development and promotion activities, which include such activities as consumer promotion, trade servicing, and market research. It did not include export subsidies, domestic subsidies, and internal price supports. The budgets of some of the foreign market development organizations we reviewed, such as Food From Britain and the Netherlands’ Ministry of Agriculture, Nature Management and Fisheries, were public information. However, the expenditures of certain other foreign organizations, such as Germany’s CMA and France’s SOPEXA, were not made public. We received estimates of their budgets from FAS staff overseas. We did not independently verify the budget estimates. We did, however, attempt to corroborate the estimates with representatives of the foreign organizations and with other sources. In some cases, the budgets of foreign market organizations did not clearly delineate between domestic and export promotion, or bulk and high-value product promotion. In these cases, we worked with FAS to provide a best estimate of the portion of the budget devoted to foreign market development of high-value products. There is no uniform scheme for classifying agricultural products, and there are various definitions for what constitutes a high-value product. The numbers used in this report for exports of U.S. and European HVPs are based on analysis by USDA’s Economic Research Service of data from the Food and Agriculture Organization of the United Nations.
For the purposes of these 1992 export statistics, ERS’ definition of HVPs included semiprocessed foods, such as wheat flour and vegetable oil, but excluded certain products that did not meet ERS’ statistical definition of an agricultural product. Thus the HVP export data for 1992 did not include cigarettes, distilled spirits, fishery products, or forestry products. Trade statistics sometimes exclude intra-EU trade, since this trade is sometimes viewed as comparable to U.S. interstate commerce. However, we have included intra-EU trade in our trade statistics, since the European organizations we reviewed treat trade with other EU countries as foreign (as opposed to domestic) market development, and since a considerable portion of their export promotion activity is within the EU. Major contributors to this report: C. Jeffrey Appel, Evaluator-in-Charge; Jason Bromberg, Evaluator. | Pursuant to a congressional request, GAO reviewed the structure, funding, and promotional activities of the organizations that develop foreign markets for high-value agricultural products (HVP), focusing on the: (1) organizations in France, Germany, the United Kingdom, and the Netherlands that help develop foreign markets for HVP; (2) Department of Agriculture's (USDA) foreign market development programs; and (3) ways in which these countries' programs are evaluated to determine their effectiveness in increasing exports. GAO found that: (1) France, Germany, and the United Kingdom each have an integrated market development organization that provides an array of services and promotes most agricultural products; (2) the Netherlands relies primarily on independent commodity associations to promote its agricultural products; (3) all of the countries spent less on foreign market development than the United States in 1993; (4) because so many factors influence a country's export levels, information on promotion expenditures alone is not sufficient to determine the effectiveness of a country's foreign market development efforts; (5) the countries' foreign market development programs are financed mostly by the private sector, while U.S. foreign market development programs are coordinated by the USDA Foreign Agricultural Service; and (6) the market development organizations reviewed and the United States generally engage in the same kinds of promotional activities, including market research, trade shows, consumer promotions, and trade servicing.
You are an expert at summarizing long articles. Proceed to summarize the following text:
Credit unions are tax-exempt, cooperative financial institutions run by member-elected, primarily volunteer boards. Credit unions do not issue stock; they are not-for-profit entities that build capital through retained earnings. Their tax-exempt status and cooperative, not-for-profit structure separate credit unions from other depository institutions. Like banks and thrifts, credit unions are either federally or state chartered. Prior to the financial crisis, the credit union system consisted of three tiers, as shown in figure 1. As of December 31, 2007, there were 8,101 credit unions, 27 corporate credit unions, and 1 wholesale corporate credit union—U.S. Central Federal Credit Union (U.S. Central). Credit unions are owned by individual members (natural persons) who make share deposits and are provided with products and services, such as lending, investments, and payment processing. Credit unions are subject to limits on their membership because members must have a “common bond”—for example, working for the same employer or living in the same community. Corporates are owned by and serve credit unions. Corporates provide payment processing services and loans for liquidity purposes and serve as repositories for credit unions’ excess liquidity, among other things. In particular, when loan demand is low or deposits are high, credit unions generally invest excess liquidity in corporates and then withdraw funds when loan demand is high or deposits are low. Corporates meet liquidity needs with member deposits and by borrowing from U.S. Central, capital markets, or the Federal Home Loan Banks. U.S. Central, which was owned primarily by the corporates, functioned as a corporate for the corporates, providing the same depository and other services to corporates that corporates provide to credit unions. U.S. Central was the agent group representative for the Central Liquidity Facility (CLF), which we discuss later in this section. U.S. Central also acted as an aggregator of corporate credit union funds, which allowed them better access to the markets at better rates. While the corporate system—including both U.S. Central and the corporates—was designed to meet the needs of credit unions, the corporates face competition from other corporates and financial institutions that can provide needed services. For instance, credit unions may also obtain loans and payment processing from Federal Reserve Banks. In addition, credit unions can obtain investment products and services from broker-dealers or investment firms rather than corporates. Credit union service organizations (CUSO) also compete with corporates and offer, among other things, investments and payment processing. As we reported in 2004, corporates seek to provide their members with higher returns on their deposits and lower costs on products and services than can be obtained individually elsewhere. Credit unions and corporates are insured by NCUSIF, which provides primary deposit insurance for 98 percent of the nation’s credit unions and corporates. NCUA administers NCUSIF, collects premiums from credit unions and corporates to fund NCUSIF, and ensures that all credit unions operate in a safe and sound manner. NCUA is required to maintain NCUSIF’s equity ratio at no less than 1.2 percent and no more than 1.5 percent of insured shares. In addition, NCUA provides oversight of the CLF, which lends to credit unions experiencing unusual loss of liquidity.
Credit unions can borrow directly from the CLF or indirectly through a corporate, which acts as an agent for its members. U.S. Central was the primary agent for the CLF and was the depository for CLF funds until August 2009, when NCUA changed its investment strategy for the liquidity facility. NCUA supervises and issues regulations on operations and services for federally chartered credit unions and for both state- and federally chartered corporates. NCUA has supervisory and regulatory authority over both state- and federally chartered corporates because they provide services to federally insured credit unions. In addition, NCUA shares responsibility for overseeing state-chartered credit unions to help ensure they pose no risk to the insurance fund. NCUA categorizes corporate supervision into three categories (Types I, II, and III) based on asset size, investment authorities, complexity of operations, and influence on the market or credit union system. For example, a corporate with Type III supervision generally has billions of dollars in assets, exercises expanded investment authorities, maintains complex and innovative operations, and has a significant impact in the marketplace and on the credit union system. NCUA assigns a full-time, on-site examiner to corporates with Type III supervision. To oversee credit unions, NCUA conducts on-site examinations, reviews state supervisory agency examinations, performs off-site monitoring, and conducts joint examinations of credit unions with state supervisory agencies. As part of its on-site examinations, NCUA assesses a credit union’s exposure to risk and assigns risk-weighted ratings under the CAMEL rating system. The ratings reflect a credit union’s condition in five components: capital adequacy, asset quality, management, earnings, and liquidity. Each component is rated on a scale of 1 to 5, with 1 being the best and 5 the worst. The five component ratings are then used to develop a single composite rating, also ranging from 1 to 5. Credit unions with composite ratings of 1 or 2 are considered to be in satisfactory condition, while credit unions with composite ratings of 3, 4, or 5 exhibit varying levels of safety and soundness problems. A similar rating system, known as the Corporate Risk Information System, is used to assess the corporates. NCUA has the authority to take an enforcement action against credit unions and corporates to correct deficiencies identified during an examination or as a result of off-site monitoring. NCUA can issue letters of understanding and agreement, which are agreements between NCUA and the credit union or corporate on certain steps the institution will take to correct deficiencies. It can also issue preliminary warning letters, which direct a credit union or corporate to take certain actions to correct deficiencies. Further, NCUA can issue a cease-and-desist order, which requires a credit union or corporate to take action to correct deficiencies. Although not considered an enforcement action, NCUA examiners also can issue documents of resolution to record NCUA’s direction that a credit union or corporate take certain action to correct a deficiency or issue within a specified period. NCUA also has a number of options for dealing with a credit union or corporate that has severe deficiencies or is insolvent. It can place the institution into conservatorship—that is, NCUA takes over the credit union’s or corporate’s operations.
After NCUA assumes control of the institution’s operations, it determines whether the credit union or corporate can continue operating as a viable entity. To resolve a credit union or corporate that is insolvent or no longer viable, NCUA may merge it with or without assistance, conduct a purchase and assumption, or liquidate its assets. In an assisted merger, a stronger credit union or corporate assumes all the assets and liabilities of the failed credit union or corporate, with NCUA providing financial incentives or an asset guarantee. In a purchase and assumption, another credit union or corporate purchases specific assets and assumes specific liabilities of the failed corporate or credit union. In liquidation, NCUA sells the assets of a failed credit union or corporate. PCA is a comprehensive framework of mandatory and discretionary supervisory actions for credit unions. PCA is based on five categories and their associated net worth ratios—that is, capital as a percentage of assets (see table 1). If a credit union falls below well capitalized (7 percent net worth), the credit union is required to increase retained earnings. When NCUA determines that the credit union is in the undercapitalized, significantly undercapitalized, or critically undercapitalized categories, NCUA is required to take additional mandatory supervisory actions. In addition to these mandatory supervisory actions, NCUA often enforces discretionary supervisory actions. Discretionary supervisory actions are applied to credit unions that fall into the undercapitalized category or below and include requiring NCUA approval for acquisitions or new lines of business, restricting dividends paid to members, and dismissing the credit union’s board members or senior management. Before 2010, U.S. Central and other corporate credit unions were not subject to PCA but were instead required to maintain total capital at a minimum of 4 percent of their moving daily average net assets. Total capital for U.S. Central and corporate credit unions was calculated using any combination of retained earnings, paid-in capital, or membership capital. If total capital fell below this level, NCUA required U.S. Central or the corporate to submit a capital restoration plan. If the capital restoration plan was inadequate or the corporate failed to complete the plan, NCUA could issue a capital directive. A capital directive orders the corporate to take a variety of actions, including reducing dividends, ending or limiting lending of certain loan categories, ending or limiting the purchase of investments, and limiting operational expenses, in order to achieve adequate capitalization within a specified time frame. From January 1, 2008, to June 30, 2011, 5 corporates and 85 credit unions failed. The five failed corporates—U.S. Central, Western Corporate (Wescorp), Members United, Southwest, and Constitution—were some of the largest institutions within the corporate system, although the credit unions that failed were relatively small. Specifically, these five failed corporates accounted for 75 percent of all corporate assets as of December 31, 2007 (see fig. 2). In contrast, the 85 credit unions that eventually failed represented around 1 percent of all credit unions and less than 1 percent of total credit union assets, as of December 31, 2007. Material loss reviews (MLR) by NCUA’s Office of Inspector General (OIG) of the failed corporates and our analysis of historical financial data for the corporate system show that management of both U.S.
Central and the failed corporate credit unions made poor investment decisions. Specifically, U.S. Central and the failed corporates overconcentrated their investments in private-label mortgage-backed securities (MBS), investing substantially more in private-label MBS than corporate credit unions that did not fail (see fig. 3). At the end of 2007, the five failed corporates had invested 31 to 74 percent of their assets in private-label MBS. In particular, Wescorp and U.S. Central had invested 74 percent and 49 percent, respectively, of their portfolios in private-label MBS. In contrast, 10 of the 23 remaining corporates had also invested in private-label MBS but at lower levels—for example, from 1 to 19 percent. These high concentrations of private-label MBS exposed the failed corporates to the highs and lows of the real estate market, which experienced significant losses. Furthermore, corporates had significant deposits in U.S. Central, which led to indirect exposure to its high concentration of private-label MBS and losses when it failed. For example, in 2007, Members United had invested more than 40 percent of total assets in U.S. Central, and Southwest and Constitution had each invested approximately 30 percent of total assets, according to the MLRs. In addition to poor investment decisions, the business strategies that U.S. Central and the other four failed corporates pursued contributed to their failure. Specifically, their management implemented business strategies to attract and retain credit union members by offering lower rates on services and higher returns on investments. According to the MLRs, U.S. Central shifted toward an aggressive growth strategy to maintain and increase its market share of corporates. This strategy led its management to increase its holdings of high-yielding investments, including private-label MBS. From 2006 to 2007, U.S. Central’s assets grew by 22 percent as members invested their liquid funds in return for competitive rates. The other failed corporates implemented similar business strategies. The financial crisis exposed the problems in the corporates’ investment and business strategies, leading to a severe liquidity crisis within the credit union system. Specifically, the downturn severely diminished the value of and market for private-label MBS, and depositors lost confidence in the corporate system because of the institutions’ substantial investment in these securities. The decline in value of these investments resulted in corporates borrowing significant amounts of short-term funds from outside the credit union system to meet liquidity needs as credit unions reduced their deposits. However, these options became limited when credit rating agencies and lenders lost confidence in individual corporates and some lines of credit were suspended. For example, from 2007 to 2009, credit rating agencies downgraded U.S. Central’s long- and short-term credit ratings, and in 2009, the Federal Reserve Bank of Kansas City downgraded its borrowing ability. Eventually, the deterioration of the underlying credit quality of the private-label MBS led to the corporates’ insolvencies. According to our analysis of NCUA’s and its OIG’s data, the 85 credit union failures were primarily the result of poor management. Management of failed credit unions exposed their institutions to increased operational, credit, liquidity, and concentration risks, which they then failed to properly monitor or mitigate.
The following describes these risks and provides examples of how exposure to these risks led to the failure of a number of credit unions. Operational risk includes the risk of loss due to inadequate or failed internal controls, due diligence, and oversight. We found that management’s failure to control operational risk contributed to 76 of the 85 failures. For example, Norlarco Credit Union’s management had weak oversight policies and controls for an out-of-state construction lending program and failed to perform due diligence before entering into a relationship with a third party responsible for managing it. Norlarco’s management allowed the third party to have complete control in making and overseeing all of the credit union’s residential construction loans, leading to a decline in borrower credit quality and underreported delinquencies. Potential losses from its residential construction loan program led to Norlarco’s insolvency. Management’s failure to control operational risk can also create the potential for fraud. We analyzed NCUA’s and its OIG’s data and found that fraud or alleged fraud at credit unions contributed to 29 of the 85 credit union failures. According to NCUA, credit unions with inadequate internal controls are susceptible to fraud. In addition, NCUA’s internal assessments of fraud showed that its examiners often had cited inactive boards or Supervisory Committees, limited numbers of staff, and poor record keeping before fraud was discovered at the failed credit unions. For example, the OIG reported that Certified Federal Credit Union’s internal controls were severely lacking, enabling the chief executive officer to report erroneous financial results to the credit union’s board and in quarterly call reports. According to the MLR, before the fraud was identified, the credit union’s board was weak and unresponsive to repeated reports of inaccurate accounting records and weak internal controls from NCUA examiners and external auditors. The credit union was involuntarily liquidated in 2010. NCUA OIG officials told us that some other indicators of potential fraud are high ratios of investments to assets and a low number of loan delinquencies. Credit risk is the possibility that a borrower will not repay a loan or will default. We found that management’s failure to control for credit risk contributed to 58 of the 85 credit union failures. For example, Clearstar Financial Credit Union’s management originated and funded a significant number of loans that were poorly underwritten—that is, they were made to borrowers with poor credit histories. Management then compounded these mistakes by extending delinquent loans and employing poor collection practices, contributing to the credit union’s eventual failure. Moreover, management at some failed credit unions did not consistently monitor the credit risk associated with member business loans (MBL). With some limitations, credit unions can lend to their members for business purposes. However, these loans can be risky for credit unions. For example, NCUA reported in recent congressional testimony that due to a lack of credit union expertise and challenging macroeconomic conditions, over half of the losses sustained by the NCUSIF over a two-year period in the late 1980s were related to MBLs. Our analysis of NCUA’s and its OIG’s data indicated that MBLs contributed to 13 of the 85 credit union failures.
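The indicators cited above (a high ratio of investments to assets, unusually low loan delinquencies, and heavy member business lending relative to assets) are simple balance sheet ratios that can be computed from call report data. The sketch below illustrates one way such a screen might be expressed; the field names and threshold values are hypothetical illustrations for this discussion, not NCUA’s actual criteria.

```python
# Minimal sketch of a ratio-based risk screen using indicators named in this report.
# Field names and thresholds are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class CallReport:
    total_assets: float
    total_investments: float
    member_business_loans: float
    delinquent_loans: float
    total_loans: float

def risk_flags(cr: CallReport) -> list[str]:
    flags = []
    # NCUA OIG officials cited high investments-to-assets combined with very low
    # delinquency as a potential indicator of fraud.
    if (cr.total_investments / cr.total_assets > 0.50
            and cr.delinquent_loans / cr.total_loans < 0.001):
        flags.append("possible fraud indicator: high investments, very low delinquency")
    # Failed credit unions tended to hold more MBLs as a share of assets than peers.
    if cr.member_business_loans / cr.total_assets > 0.15:
        flags.append("elevated member business loan concentration")
    return flags

# Example: a hypothetical credit union with heavy MBL exposure.
example = CallReport(total_assets=100e6, total_investments=20e6,
                     member_business_loans=20e6,
                     delinquent_loans=0.5e6, total_loans=70e6)
print(risk_flags(example))  # ['elevated member business loan concentration']
```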
According to our analysis of historical financial data, failed credit unions had more MBLs as a percentage of assets than peer credit unions that did not fail or the credit union industry (see fig. 4). In addition, more than 40 percent of failed credit unions participated in member business lending. Comparatively, NCUA testified that only 30 percent of all credit unions participated in member business lending, as of March 31, 2011. Liquidity risk is the risk that the credit union may not be able to meet expenses or cover member withdrawals because of illiquid assets. We found that liquidity risk contributed to 31 of the 85 credit union failures. For example, the management of Ensign Federal Credit Union relied on a $12 million deposit to fund credit union operations. However, when the deposit was withdrawn in 2009, the credit union lacked other funding sources to meet normal member demands and operational expenses, contributing to the credit union’s failure. Concentration risk is excessive exposure to certain markets, industries, or groups. While some level of concentration may not be avoidable, it is the responsibility of management to put in place appropriate controls, policies, and systems to monitor the associated risks. We found that concentration risk contributed to 27 of the 85 credit union failures. For example, High Desert Federal Credit Union’s management began expanding its real estate construction lending in 2003, and by 2006, its loan portfolio had more than doubled from $73 million to $154 million. In 2006, construction lending accounted for more than 60 percent of the credit union’s loan portfolio. When the housing market collapsed, its concentration in real estate construction loans led to its insolvency. In addition to the management weaknesses in corporates and credit unions, NCUA’s examination and enforcement processes did not result in strong and timely actions to avert the failure of these institutions. The OIG found that stronger and timelier action on the part of NCUA could have reduced losses from the failures of U.S. Central and the four other failed corporates. NCUA examiners had observed, prior to 2008, the substantial concentrations of private-label MBS at U.S. Central and three of the four other failed corporates, but they did not take timely action to address these concentrations. For example, NCUA examiners observed Wescorp’s growing concentration in private-label MBS beginning in 2003, but they did not take action to limit or otherwise address this issue until 2008. Similarly, the OIG’s material loss review of Southwest Corporate notes that NCUA’s March 2008 examination concluded, “current and allowable MBS exposures are significant given the unprecedented market dislocation… Southwest’s exposure is clearly excessive.” However, the MLR did not indicate that NCUA issued a document of resolution or took enforcement action to address Southwest’s high concentration. In the case of Constitution Corporate, the MLR noted that NCUA took enforcement action to address concentration limits prior to failure. Similar to its findings for corporate failures, the OIG found weaknesses in NCUA’s examination and enforcement processes for 10 of the 11 failed credit unions for which it conducted MLRs.
In particular, the OIG stated that “if examiners acted more aggressively in their supervision actions, the looming safety and soundness concerns that were present early-on in nearly every failed institution, could have been identified sooner and the eventual losses to the NCUSIF could have been stopped or mitigated.” The OIG made a number of recommendations to address the problems that the financial crisis exposed. For example, to better ensure that corporate credit unions set prudent concentration limits, the OIG recommended that NCUA provide corporate credit unions with more definitive guidance on limiting investment portfolio concentrations. Based on the credit union failures, the OIG recommended that NCUA take steps to strengthen its examination process by, among other things, improving the review of call reports and third-party relationships, following up on credit union actions taken in response to documents of resolution, and strengthening the quality control review process for examinations. Appendix I contains more information on the status of NCUA’s implementation of the OIG’s recommendations. NCUA took actions to stabilize, resolve, and reform the corporate system and to minimize the costs of its intervention. NCUA based these actions on four guiding principles: to avoid any interruption of services provided by corporate credit unions to credit unions; to prevent a run on corporate shares by maintaining confidence in the overall credit union system; to facilitate a corporate resolution process in line with sound public policy that is at the least possible cost to the credit unions over the long term, while avoiding moral hazard; and to reform the credit union system through new corporate rules with a revised corporate and regulatory structure. NCUA established a number of measures to ensure that corporates had access to liquidity. To resolve the failed corporates, NCUA placed five corporates—U.S. Central, Wescorp, Members United, Southwest, and Constitution—into conservatorship and isolated their nonperforming assets. To reform the system, NCUA enacted new rules to address the causes of the failures, assessed credit unions for corporate losses, forecasted the impact of future assessments through scenario tests, and took measures to reduce moral hazard. Through these actions, NCUA attempted to resolve the corporates’ losses at the least possible cost. However, we could not verify all of NCUA’s estimated losses from the corporate and credit union failures. To provide liquidity, NCUA used two existing funds—NCUSIF and CLF—and, based on legislative changes, created a temporary fund—the Temporary Corporate Credit Union Stabilization Fund (Stabilization Fund). NCUA also created four new programs—the Credit Union System Investment Program (CU-SIP), the Credit Union Homeowners’ Affordability Relief Program (CU-HARP), the Temporary Corporate Credit Union Liquidity Guarantee Program (Liquidity Guarantee Program), and the Temporary Corporate Credit Union Share Guarantee Program (Share Guarantee Program). See appendix III for more information about these programs. NCUA used NCUSIF to provide liquidity to the corporate system. As stated earlier, U.S. Central had experienced substantial losses, impairing its ability to provide liquidity to the credit union system. In December 2008, NCUA provided for an NCUSIF loan to U.S. Central to cover an end-of-year liquidity shortfall. The loan was outstanding for 3 days and then fully repaid. In January 2009, NCUA placed a $1 billion capital note in U.S. Central.
NCUSIF subsequently wrote off this note when it determined that credit losses on the private-label MBS held by U.S. Central impaired the full value of the note. To avoid compromising its borrowing authority with Treasury, NCUA changed the CLF’s investment strategy in mid-2009. Specifically, before 2009, the CLF’s funds from subscribed capital stock and retained earnings were placed in a deposit account with U.S. Central, the CLF agent. However, given U.S. Central’s insolvency, NCUA moved its funds out of U.S. Central and invested them with Treasury in 2009, to avoid an adverse accounting treatment for the fund—thereby reducing the fund’s member equity and ultimately limiting its borrowing authority with Treasury. Because the CLF was restricted from lending directly to corporates, it lent $10 billion to NCUSIF, and NCUA then used these funds from NCUSIF to lend $5 billion to U.S. Central and $5 billion to Wescorp. By October 2010, U.S. Central and Wescorp had repaid their loans to NCUSIF using funds raised primarily from the sale of more than $10 billion in unencumbered marketable securities that sold near their par value in August and September 2010. NCUSIF, in turn, repaid the $10 billion CLF loan with proceeds from the asset sales. In addition, NCUA used a temporary fund created by Congress in 2009 to help increase liquidity in the system. In May 2009, Congress passed the Helping Families Save Their Homes Act, which, among other things, created a temporary fund to absorb losses from corporates. Specifically, the act created the Stabilization Fund, which replaced NCUSIF as the primary source to absorb the corporates’ losses. The act also amended the Federal Credit Union Act (12 U.S.C. §§ 1751-1795k) to give NCUA the authority to levy assessments over the life of the Stabilization Fund to repay the corporates’ losses instead of repaying them in a lump sum. In addition, it increased NCUA’s borrowing authority with Treasury up to $6 billion through a revolving loan fund to be shared between the Stabilization Fund and NCUSIF. CU-SIP and CU-HARP. Under these two temporary programs, the CLF lent funds to credit unions, which invested the proceeds in notes issued by participating corporates; the corporates used these funds to pay down their external debt, freeing up assets that had been posted as collateral against the debt. In exchange for participating in the programs, the corporates were required to pay CLF borrowing costs to credit unions and an additional fee to the credit unions as an incentive for them to participate in the programs. CLF lending to credit unions totaled approximately $8.2 billion under CU-SIP and about $164 million under CU-HARP. All borrowings for both programs were repaid in 2010. Liquidity Guarantee Program and Share Guarantee Program. NCUA created these two temporary guarantee programs in late 2008 and early 2009 to help stabilize confidence and dissuade withdrawals by credit unions, in an attempt to avoid a run on the corporates. These programs provided temporary guarantees on certain new unsecured debt obligations issued by eligible corporates and on credit union shares held in corporates in excess of $250,000. Initially, NCUA provided the coverage to all the corporates for a limited time but later provided extensions to continue guaranteeing coverage to corporates that did not opt out of the program. Based on NCUA’s 2009 financial statements, no guarantee payments were required for either program. However, as of December 19, 2011, the audited financial statements of the Stabilization Fund for calendar year 2010 were not completed and available. NCUA took a variety of steps to resolve the failed corporates and maintain corporate payment processing services for credit unions.
First, in April 2009, NCUA enacted a temporary waiver to allow corporates not meeting their minimum capital requirements to continue to provide services to credit unions. In particular, the waiver allowed corporates to use their capital levels of record on their November 2008 call reports in order to continue providing the necessary core operational services to credit unions. In addition, it granted the Office of Corporate Credit Unions discretionary authority to modify or restrict the use of this capital waiver for certain corporates based on safety and soundness considerations. Without the waiver, corporates that failed to meet the minimum capital requirements would have had to cease or significantly curtail operations, including payment system services and lending and borrowing activities. As a result, the credit union system would have faced substantial interruptions in its daily operations, potentially leading to a loss of confidence in other parts of the financial system. Second, NCUA ultimately placed the five failing corporates into conservatorship. According to NCUA, it placed the corporates into conservatorships to reduce systemic exposure, exert greater direct control, improve the transparency of financial information, minimize cost, maintain confidence, and continue payment system processing. When placing the five corporates into conservatorship, NCUA replaced the corporates’ existing boards, the chief executive officers, and in some cases, the management teams and took over operations to resolve the corporates in an orderly manner. As a part of the conservatorships, NCUA set up bridge institutions for the wholesale corporate—U.S. Central—and the three other corporates. Through these bridge institutions, NCUA managed the corporates’ illiquid assets and maintained payment services to the member credit unions. The member credit unions must provide sufficient capital to acquire the operations of these bridge institutions from NCUA. Third, NCUA established a securitization program to provide long-term funding for the legacy assets formerly held in the securities portfolios of certain corporate credit unions by issuing NCUA-guaranteed notes. NCUA’s analysis showed that MBS were trading at market prices considerably below the intrinsic value that would eventually be received by long-term investors. NCUA used a method similar to the “good bank-bad bank” model that the Federal Deposit Insurance Corporation has sometimes adopted with insolvent banks to remove illiquid or “bad” assets from the failed corporates. In particular, NCUA transferred the corporates’ assets into Asset Management Estates, also known as liquidation estates. Using these estates, NCUA held and isolated the corporates’ illiquid assets (i.e., MBS) from the bridge institutions and issued the NCUA-guaranteed notes. NCUA issued $28 billion (at the point of securitization) in these NCUA-guaranteed notes, while the face value of the original MBS assets was approximately $50 billion. NCUA structured each of the guaranteed notes so that its value would approximate the value of the principal and interest cash flows on the underlying legacy assets. NCUA officials said that by structuring the notes in this manner, NCUA minimized its exposure in the event that the underlying cash flow was less than the notes’ value. According to NCUA’s term sheet, cash flows from the underlying securities will be used to make principal and interest payments to holders of the notes, and NCUA guarantees timely payments.
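The structuring principle described above, setting each note’s value to approximate the value of the principal and interest cash flows on the underlying legacy assets, amounts to a present-value calculation over projected collections. A minimal sketch follows; the cash flows and discount rate are entirely hypothetical, and NCUA’s actual valuation models are not described in this report.

```python
# Present-value sketch of the note-structuring principle described above.
# Projected cash flows and the discount rate are hypothetical illustrations.

def note_value(expected_cash_flows, annual_discount_rate):
    """Present value of projected annual principal-and-interest collections."""
    return sum(cf / (1 + annual_discount_rate) ** t
               for t, cf in enumerate(expected_cash_flows, start=1))

# Hypothetical: legacy MBS with ~$50 billion face value projected to collect far
# less than par, in the spirit of the report's $28 billion of notes issued
# against roughly $50 billion in face value.
projected_collections = [4.0, 4.0, 3.8, 3.5, 3.2, 3.0, 2.8, 2.5, 2.2, 2.0]  # $ billions/year
print(f"Sketch note value: ${note_value(projected_collections, 0.04):.1f} billion")
```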
NCUA issued 13 separate notes, with the final sales occurring in June 2011 and maturing between 2017 and 2021. Any necessary guarantee payments are to be made from the Stabilization Fund, which also expires in 2021. Finally, as of November 2011, NCUA has initiated lawsuits against parties it believes are liable for the corporates’ MBS-related losses. These lawsuits allege violations of federal and state securities laws and misrepresentations in the sale of hundreds of securities, according to NCUA. NCUA relied on external consultants—in addition to its own analysis—to estimate its losses from the failed corporate credit unions. NCUA issued a new rule for corporates to address the key causes of the failures. Among other things, the rule (1) eliminates the definition and separate treatment of the wholesale corporate or third tier of the credit union system, (2) prohibits corporates from investing in certain securities and sets sector concentration limits, (3) creates a new system of capital standards and PCA for corporates, and (4) introduces new corporate governance requirements. Some parts of the new rule address recommendations of NCUA’s OIG. NCUA issued the rule on October 20, 2010, and it will be implemented over a number of years. For additional information on the rule, see appendix IV. Essentially eliminate the wholesale corporate or third tier of the credit union system. The new corporate rule that NCUA issued on October 20, 2010, eliminated both the definition of and the requirements applicable to a wholesale corporate or the third tier of the credit union system. NCUA essentially eliminated the wholesale corporate, in part, to mitigate inefficiency and systemic risk in the credit union system. The failure of U.S. Central, the credit union system’s only wholesale corporate, highlights some of the risks. Specifically, its failure contributed to the failure of three corporates, instability in the other corporates, and substantial losses to the Stabilization Fund. Prohibit corporates from certain investments and set sector concentration limits. NCUA amended the corporate rule to prohibit certain investments, such as private-label MBS, and set certain sector concentration limits. In addition to prohibiting private-label MBS, the rule prohibits corporate investments in collateralized-debt obligations, net interest-margin securities, and subordinated securities. Previously, corporates were allowed to set their own sector concentration limits, which enabled them to continually increase their limits or set excessive limits. The new rule sets maximum sector concentration limits for corporate investments and addresses OIG recommendations that NCUA provide corporates with more definitive guidance on limiting investment portfolio concentrations. Corporates are limited to investing less than 1,000 percent of capital or 50 percent of total assets in specific investments, including agency MBS, corporate debt obligations, municipal securities, and government-guaranteed student loan asset-backed securities. Furthermore, corporates are restricted from investing more than 500 percent of capital or 25 percent of total assets in other asset-backed security sectors, including auto loans and leases, private-label student loans, credit card loans, or any sector not explicitly noted in the rules. NCUA has taken additional steps to mitigate the associated risk by limiting the weighted-average life of the portfolio to approximately 2 years.
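A minimal sketch of how the sector concentration tests described above might be expressed follows. The capital- and asset-based caps are those stated in this report; the portfolio figures are hypothetical, and other provisions of the rule, such as the weighted-average life limit, are omitted for brevity.

```python
# Sketch of the sector concentration caps described above. The caps are those
# stated in this report; the portfolio data are hypothetical.

# (cap as a multiple of capital, cap as a share of total assets)
SECTOR_LIMITS = {
    "agency_mbs":                  (10.0, 0.50),  # less than 1,000% of capital or 50% of assets
    "corporate_debt":              (10.0, 0.50),
    "municipal_securities":        (10.0, 0.50),
    "govt_guaranteed_student_abs": (10.0, 0.50),
    "other_abs":                   (5.0, 0.25),   # less than 500% of capital or 25% of assets
}

def sector_breaches(holdings, capital, total_assets):
    """Return the sectors whose holdings reach either applicable cap. Sectors not
    explicitly listed get the stricter 'other' limit, as the rule provides."""
    breaches = []
    for sector, amount in holdings.items():
        cap_multiple, asset_share = SECTOR_LIMITS.get(sector, SECTOR_LIMITS["other_abs"])
        if amount >= cap_multiple * capital or amount >= asset_share * total_assets:
            breaches.append(sector)
    return breaches

# Hypothetical corporate: $2 billion in assets, $120 million in capital.
holdings = {"agency_mbs": 1.1e9, "other_abs": 0.2e9}
print(sector_breaches(holdings, capital=120e6, total_assets=2e9))  # ['agency_mbs']
```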
NCUA also tightened the limits on securities purchased from a single obligor from 50 percent of capital to 25 percent. Create a new system of capital standards and PCA for corporates. NCUA’s new corporate rule also established a revised set of capital standards and a PCA framework for corporates. The new capital standards replace the existing 4 percent mandatory minimum capital requirement with three minimum capital ratios, including two risk-based capital ratios and a leverage ratio (see table 2). The risk-based capital and interim leverage ratios became enforceable on October 20, 2011, and all corporates were required to meet these capital standards. Starting in October 2011, corporates are also subject to PCA if their capital falls below the adequately capitalized level for any of the three capital ratios. As discussed earlier, a corporate becomes subject to more severe supervisory actions and restrictions on its activities if its capital continues to fall. Introduce new corporate governance requirements. NCUA has instituted a new corporate governance rule. To ensure that corporate board members have adequate knowledge and experience to oversee sophisticated corporate investment and operation strategies, they must hold an executive management position, such as chief executive officer, chief financial officer, or chief operating officer of a credit union. Corporate board members are also prohibited from serving on more than one corporate credit union board. According to NCUA, this restriction will help ensure that board members’ loyalty is undivided and that they are not distracted by competing demands from another corporate. Effective October 21, 2013, the majority of a corporate’s board members must be representatives from member credit unions. The purpose of this rule is to help ensure that corporates serve their member credit unions rather than other corporates. In addition, the governance rules require disclosure of executive compensation and prohibit “golden parachutes”—lucrative benefits given to executives who are departing their jobs. NCUA’s audited financial statements for NCUSIF reported an allowance for loss of $777.6 million at December 31, 2010. This allowance for loss represents the difference between funds expended to close failed retail credit unions and the amounts NCUA estimates it will recover from the disposition of the failed retail credit unions’ assets. Also, these financial statements reported additional estimated losses of about $1.23 billion as of December 31, 2010, associated with troubled credit unions considered likely to fail. With respect to the Stabilization Fund, the 2010 audited financial statements were not yet final, as of December 19, 2011. NCUA officials cited ongoing challenges in resolving and valuing failed corporate assets as contributing to the delays in finalizing the Stabilization Fund financial statements. We requested documentation adequate to support NCUA’s estimates of losses from corporate failures, but NCUA was not able to provide the documentation we required. The NCUA OIG was provided with the same information that we obtained and told us that it was unable to verify NCUA’s loss estimates. Absent this documentation, it is not possible to determine the full extent of losses resulting from corporate credit union failures.
Moreover, without well-documented cost information, NCUA faces questions about its ability to effectively estimate the total costs of the failures and determine whether the credit unions will be able to pay for these losses. Credit unions are responsible for repaying NCUSIF and the Stabilization Fund, and NCUA has begun to assess credit unions for those losses. NCUA borrowed taxpayer funds from Treasury to fund NCUSIF and the Stabilization Fund to provide liquidity to the corporate system, and it plans to repay the debt to Treasury with interest by 2021. Since 2009, NCUA has assessed credit unions a total of about $5 billion (about $1.7 billion for NCUSIF and $3.3 billion for the Stabilization Fund). NCUA officials told us that they had analyzed the credit unions’ ability to repay by determining the impact that varying assessment levels would have on the net worth ratios of both individual credit unions and the credit union system. NCUA considers factors such as the number of credit unions that would fall below 2 percent capital or be subject to PCA’s net worth restoration plan. In 2011, NCUA levied a $2 billion assessment for the Stabilization Fund. According to NCUA officials, NCUA determined that the credit union system had enough surplus capital to pay the assessment because of its strong return on assets of 0.86 percent for the first three quarters of the year. NCUA determined that the assessment would result in around 811 credit unions having a negative return on assets. NCUA officials also noted that in a typical year about 10 to 20 percent of credit unions have had a negative return on assets. According to NCUA officials, the primary driver of the $2 billion Stabilization Fund assessment in 2011 was interest and principal on maturing medium-term notes that the corporates issued and that were to be repaid by the Stabilization Fund. NCUA officials told us that if they had found that the credit unions could not afford the Stabilization Fund assessment, they would have considered other options, such as issuing additional NCUA-guaranteed notes or unsecured debt. Although NCUA officials have stated that the credit union system will bear the ultimate costs of corporate and credit union failures, risks to the taxpayers remain. Many of the reforms are ongoing, and NCUA continues to resolve the failures of U.S. Central and Wescorp, as will be discussed. Moreover, the ultimate effectiveness of NCUA’s actions and associated costs remain unknown. As a result, whether the credit union system will be able to bear the full costs of the losses or how quickly NCUA will repay Treasury is unknown. Should the credit union system be unable to repay Treasury through NCUA assessments, taxpayers would have to absorb the losses. Moral hazard occurs when a party insulated from risk behaves differently than it would if it were fully exposed to the risk. In the context of NCUA’s actions to stabilize the credit union system, moral hazard occurs when market participants expect similar emergency actions in future crises, thereby weakening their incentives to manage risks properly. Furthermore, certain emergency assistance can also create the perception that some institutions are too big to fail. In general, mitigating moral hazard requires taking steps to ensure that any government assistance includes terms that make such assistance an undesirable last resort, except in the direst circumstances, and specifying when the government assistance will end.
For example, we previously reported that during the 2007-2009 financial crisis, the federal government attached terms to the financial assistance it provided to financial institutions, such as (1) limiting executive compensation, (2) requiring that dividends be paid to providers of assistance, and (3) acquiring an ownership interest—all of which were designed to mitigate moral hazard to the extent possible. NCUA designed actions to mitigate moral hazard at various stages of its effort to resolve and reform the corporate credit union system, but the effectiveness of these actions remains to be seen. Examples of the actions designed to mitigate moral hazard include terminating the corporates’ management teams and eliminating their boards, issuing letters of understanding and agreement as a condition to entering the Share Guarantee Program, requiring a guarantee fee under the Liquidity Guarantee Program, requiring credit unions to repay the losses to NCUSIF and the Stabilization Fund, filing lawsuits against responsible parties, and requiring credit unions to disclose executive compensation. In addition, NCUA enhanced market discipline by requiring corporates to obtain capital from their member credit unions to remain in operation. That is, member credit unions decided whether to capitalize new corporates. As of October 30, 2011, two of the four bridge corporates—Wescorp Bridge and U.S. Central Bridge—had either not succeeded in obtaining sufficient member capital (Wescorp) or had not attempted to do so because of a lack of anticipated demand (U.S. Central). They are both being wound down by NCUA. Credit unions that triggered PCA had mixed results. Our analysis of credit unions that underwent PCA indicates that corrective measures triggered earlier were generally associated with more favorable outcomes. We observed successful outcomes associated with PCA, but also noted inconsistencies in the presence and timeliness of PCA and other enforcement actions. Furthermore, in most cases, other discretionary enforcement actions to address deteriorating conditions either were not taken or were taken only in the final days prior to failure. Other financial indicators could serve to provide an early warning of deteriorating conditions at credit unions. The number of credit unions in PCA significantly increased as the financial crisis unfolded (see fig. 5). From January 1, 2006, through June 30, 2011, 560 credit unions triggered PCA. Specifically, the vast majority of these credit unions (452) triggered PCA from January 2008 through June 2011. NCUA has taken steps to stabilize, resolve, and reform the corporate system. Many of the reforms are ongoing, and NCUA continues to resolve the failures of U.S. Central and Wescorp. As a result, the ultimate effectiveness of NCUA’s actions and associated costs remain unknown. Moreover, while the 2010 financial statements for NCUSIF are final—and record a loss—the 2010 financial statements for the Stabilization Fund were only recently released at the end of December 2011. Prior to the release of these statements, NCUA had estimated losses for the Stabilization Fund, but NCUA did not provide adequate documentation to allow us to verify the reasonableness and completeness of these estimates. Without well-documented cost information, NCUA faces questions about its ability to effectively estimate the total costs of the failures and determine whether the credit unions will be able to pay for these losses.
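As discussed in this report, PCA’s mandatory actions are keyed to a single capital-based measure, the net worth ratio. The sketch below shows the statutory classification, using the 7 percent well-capitalized threshold and the 2 percent critically undercapitalized floor cited in this report; the intermediate cutoffs follow the statutory scheme summarized in table 1, which is not reproduced here. It also adds one hypothetical supplementary early-warning test of the kind the report recommends considering; its indicators and thresholds are illustrative assumptions, not measures NCUA has adopted.

```python
# Sketch of PCA classification by net worth ratio (capital as a share of assets),
# plus a hypothetical multi-indicator early-warning check. The 7 and 2 percent
# thresholds appear in this report; intermediate cutoffs follow the statutory
# scheme (table 1, not reproduced here).

def pca_category(net_worth_ratio: float) -> str:
    if net_worth_ratio >= 0.07:
        return "well capitalized"
    if net_worth_ratio >= 0.06:
        return "adequately capitalized"
    if net_worth_ratio >= 0.04:
        return "undercapitalized"
    if net_worth_ratio >= 0.02:
        return "significantly undercapitalized"
    return "critically undercapitalized"

def early_warning(net_worth_ratio: float, return_on_assets: float,
                  delinquency_rate: float) -> bool:
    """Hypothetical supplementary trigger: flags distress even while capital still
    looks adequate, since capital-based indicators can lag other signs of trouble."""
    return (net_worth_ratio < 0.07
            or return_on_assets < 0.0
            or delinquency_rate > 0.05)

# A credit union can be well capitalized yet already show operating distress.
print(pca_category(0.075))                 # well capitalized
print(early_warning(0.075, -0.012, 0.06))  # True: negative ROA and high delinquency
```

The early-warning function is purely illustrative; the report’s point is that mandatory triggers tied only to net worth can lag the kinds of indicators it combines.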
Before the recent financial crisis, PCA was largely untested because the financial condition of the credit unions had been generally strong since PCA was enacted. With the failure of the 85 credit unions, the PCA framework showed some weaknesses in addressing deteriorating credit unions. The main weakness of the PCA framework, as currently constructed in statute, stems primarily from tying mandatory corrective actions to only capital-based indicators. As previously reported, capital-based indicators have weaknesses, notably that they can lag behind other indicators of financial distress. Other alternative financial indicators exist or could be developed to help identify early warning signs of distress, which our analysis shows is a key to successful outcomes. Tying regulatory actions to additional financial indicators could mitigate these weaknesses and increase the consistency with which distressed credit unions would be treated. By considering which additional financial indicators would most reliably serve as an early warning sign of credit union distress—including any potential tradeoffs—and proposing the appropriate changes to Congress, NCUA could take the first steps in improving the effectiveness of PCA. Given that the 2010 financial statements for the Stabilization Fund were not available for our review, that NCUA was unable to provide us adequate documentation for its estimates, and that we identified shortcomings in the current PCA framework, we recommend that NCUA take the following two actions. 1. To better ensure that NCUA determines accurate losses incurred from January 1, 2008, to June 30, 2011, we recommend that the Chairman of NCUA provide its OIG the necessary supporting documentation to enable the OIG to verify the total losses incurred as soon as practicable. 2. To improve the effectiveness of the PCA framework, we recommend that the Chairman of NCUA consider additional triggers that would require early and forceful regulatory actions, including the indicators identified in this report. In considering these actions, the Chairman should make recommendations to Congress on how to modify PCA for credit unions and, if appropriate, for corporates. We provided a draft of this report to NCUA and its OIG for their review and comment. NCUA provided written comments that are reprinted in appendix V and technical comments that we have incorporated as appropriate. In its written comments, NCUA agreed with our two recommendations. Notably, NCUA stated that it had taken action to implement one of the recommendations by providing the OIG with documentation of loss estimates for the Stabilization Fund as of December 31, 2010. It expects to provide additional documentation of loss estimates as of June 30, 2011, in January 2012. In its letter, NCUA also stated that the December 31, 2010, audited financial statements for the Stabilization Fund would be issued in the near future and described reasons for the delay in finalizing this audit. These reasons included the scope and magnitude of the corporate failures and the actions that NCUA had undertaken to resolve the corporate failures and strengthen its financial reporting systems. While NCUA acknowledged that some of the loss estimates were not finalized at the time of our audit, including the 2010 financial statements, it noted that the results from the valuation experts were complete and available. 
Our report recognizes the challenges that NCUA has faced in finalizing its financial statements and describes the actions that it has taken to stabilize, resolve, and reform the credit union system. However, as we reported, NCUA was unable to provide us with the documentation that we required to verify the reasonableness and completeness of the loss estimates for the Stabilization Fund. Subsequently, the NCUA 2010 Financial Statement Audit for the Temporary Corporate Credit Union Stabilization Fund was released on December 27, 2011. Although NCUA has said that its analysis shows that the credit union system has the capacity to pay for the loss estimates, we continue to believe that without well-documented cost information, NCUA faces questions about its ability to effectively estimate the total costs of the failures and determine whether the credit unions will be able to pay for these losses. Taking the steps to address our recommendation will help NCUA address these questions. In its written comments, NCUA also described its commitment to continued research and analysis to improve the effectiveness of PCA. In particular, NCUA cited its membership on the Federal Financial Institutions Examination Council and the Financial Stability Oversight Council. NCUA also noted that it was following developments related to the federal banking agencies’ consideration of enhancements to PCA triggers, a step that we recommended in our report Banking Regulation: Modified Prompt Corrective Action Framework Would Improve Effectiveness. NCUA agreed with the recommendation to consider other triggers for PCA but noted that some of the potential financial indicators that we identified could have drawbacks. We acknowledged in the report that multiple indicators of financial health could be used as early warning indicators and that the extent to which the financial indicators we identified could serve as strong early warning indicators might vary. Furthermore, using some of these indicators as early warning signs of distress could present different advantages and disadvantages—all of which would need to be considered. Nevertheless, we continue to believe that considering a range of potential indicators, including those identified in the report, is a necessary and important step in improving the effectiveness of PCA. NCUA’s letter also noted a potential “misconception” in the report and said that it recognized the need for timelier use of formal enforcement action, as evidenced in its response to OIG findings and recommendations. However, NCUA stated that nearly all failed credit unions received an enforceable regulatory action prior to failure, either through PCA or non-PCA authorities. In some cases, the failures occurred so abruptly that NCUA did not have a long lead time to take action. NCUA also stated that it had a strong record of employing PCA actions when credit unions tripped PCA triggers, as PCA actions are often more expedient forms of enforceable regulatory action. As discussed in the report, successful outcomes were associated with PCA in some cases. However, we also found inconsistencies in the presence and timeliness of PCA and other enforcement actions. Furthermore, we found that other discretionary enforcement actions to address deteriorating conditions either were not taken or were taken only in the final days before the failure. 
Finally, the letter concluded that credit unions performed well during the recent financial crisis and that NCUA had successfully mitigated the failures that did occur. Our report describes the scope and magnitude of failures among corporates and credit unions and also notes that the 85 failed credit unions represented less than 1 percent of credit union assets as of 2008. We also described the actions NCUA had taken to stabilize the credit union system but noted that NCUA’s examination and enforcement processes did not result in strong and timely actions to avert the failure of these institutions. We are sending copies of this report to NCUA, the Treasury, the Financial Stability Oversight Council, and other interested parties. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact A. Nicole Clowers at (202) 512-8678 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. From November 2008 to October 2011, the National Credit Union Administration’s (NCUA) Office of Inspector General (OIG) made 25 recommendations to NCUA to improve both corporate and credit union supervision, operations, and financial reporting. Six of the 25 recommendations were for corporates and 19 were for credit unions. NCUA has fully implemented 6 of the 25 recommendations relating to improving the corporate structure, corporate governance, examination processes, and call report data, as well as providing guidance on concentration risk. In addition, NCUA has partially implemented another 10 recommendations—2 of these relate to corporate risk management and corporate examiner training. The other 8 partially implemented recommendations are related to improving the credit union examination process and financial monitoring of credit unions in areas such as fast-growing and new business programs, third-party relationships, and concentration risk, as well as ensuring credit unions take appropriate action to respond to documents of resolution (DOR). Finally, NCUA has not yet implemented another 9 recommendations—6 of these recommendations are related to improving examination processes for credit unions with more than $100 million in assets, internal controls, and documenting call report analysis. The remaining 3 recommendations that were not implemented relate to improving follow-up procedures for DORs. Furthermore, OIG officials have told us that 13 of the 19 partially implemented or unimplemented recommendations will likely be fulfilled with the issuance of the revised National Supervision Policy Manual (NSPM) in 2012. OIG officials have reviewed the draft revised NSPM and determined that it addresses their recommendations. Table 3 provides a summary of these recommendations and their status based on our evaluation of the information that NCUA and its OIG provided. Legislation enacted in January 2011 requires us to examine NCUA’s supervision of the credit union system and the use of PCA. This report examines (1) what is known about the causes of failures among corporates and credit unions since 2008; (2) the steps that NCUA has taken to resolve these failures and the extent to which its actions were designed to protect taxpayers, avoid moral hazard, and minimize the cost of corporate resolutions; and (3) NCUA’s use of PCA and other enforcement actions. 
In addition, we reviewed NCUA’s implementation of its OIG recommendations. (See app. I.) To identify the causes of failures among corporates and credit unions, we obtained and analyzed NCUA documents, including Material Loss Reviews (MLR), postmortem reports, Board Action Memorandums (BAM), and other relevant documents. To corroborate this information, we also assessed the asset size and investment concentrations for all failed and nonfailed corporates by conducting analyses of data from SNL Financial—a financial institution database—on corporates’ investment portfolios from January 2003 to September 2010. We obtained and analyzed NCUA data related to conservatorships and resolution actions taken from January 2008 to June 2011 to determine the number and causes of corporate and credit union failures. We further assessed credit union member business loan participation as a percentage of total loans, from December 2005 to January 2011, for both failed credit unions and their peer credit unions that did not fail. To identify credit union failures related to fraud, we reviewed data and analyzed reports and documents by NCUA and its OIG on each of the failed credit unions from January 2008 to June 2011. To determine loss data from the corporate and credit union failures, we reviewed NCUA’s 2008, 2009, and 2010 annual reports; MLRs; BAMs; and NCUA data on losses to the National Credit Union Share Insurance Fund (NCUSIF) and the Temporary Corporate Credit Union Stabilization Fund (Stabilization Fund). We interviewed NCUA’s OIG, Office of Corporate Credit Unions, Office of Capital Markets, Chief Financial Officer, and Office of Examination and Insurance to obtain their perspectives on the causes of the corporate and credit union failures. We further met with credit union industry associations to obtain their views on NCUA’s efforts to reform the corporate credit union system. We assessed the reliability of the SNL and NCUA data used for this analysis and determined that these data were sufficiently reliable for our purposes. To assess the steps that NCUA has taken to stabilize, resolve, and reform the corporate and credit union system, we reviewed NCUA documents and data, including BAMs; MLRs; NCUA annual reports from 2008, 2009, and 2010; audited financial statements; NCUA’s Corporate Stabilization and Resolution Plan; and NCUA-commissioned reports; in addition to testimonies at relevant congressional hearings and planning documents. To determine actions taken to reform the corporate system, we reviewed NCUA’s proposed and final rules and interviewed NCUA’s General Counsel to discuss the potential impact of these rules and their effective dates. To determine NCUA’s assessments for credit unions and the credit unions’ ability to repay, we reviewed BAMs and NCUA’s scenario analyses for its credit union assessments and loss estimates, and we interviewed NCUA officials. We requested detailed information on NCUA’s loss estimates for NCUSIF and the Stabilization Fund; NCUA provided some information, but it was not sufficient for us to determine the reasonableness and completeness of these estimates. To determine the steps that NCUA took to reduce moral hazard, we compared the actions taken to stabilize, resolve, and reform the credit union system to principles cited in our past work on providing federal financial assistance. To assess the outcomes of PCA, we reviewed the outcomes of credit unions as a whole that were subject to PCA from January 1, 2006, through June 30, 2011. 
Additionally, we tracked a group of credit unions that were subject to PCA from January 1, 2008, through June 30, 2009, during the 2007-2009 financial crisis, to identify those credit unions that (1) failed, (2) survived and remained in PCA, and (3) survived and exited PCA. To determine the actions that NCUA took to address deteriorating credit unions, we reviewed regulatory information that included CAMEL ratings, enforcement action data, and PCA-related activities over a 2-year period prior to each credit union failure from January 1, 2008, through June 30, 2011. Specifically, we analyzed the instances and dates of CAMEL downgrades, enforcement actions taken, and PCA-related actions to determine whether and when actions were taken. To assess the utility of various financial indicators in detecting credit unions’ distress, we reviewed the OIG’s MLRs, NCUA’s postmortem studies, and our previous work on PCA (GAO-11-612). We compared failed credit unions with peer credit unions that did not fail to assess their performance on numerous financial indicators, such as return on assets, operating expenses, and liquid assets, as an early warning of financial distress. We also compared the failed credit unions and their peers to credit union industry averages across the same period. In considering other indicators for detecting early distress in credit unions, we reviewed data from regulatory filings from the fourth quarter of 2005 through the first quarter of 2011 for three groups: (1) the 85 credit unions that failed from January 2008 to June 2011; (2) a group of 340 peer credit unions—the four closest credit unions in terms of total assets within the same state as each failed credit union; and (3) all credit unions that reported their financial condition in a regulatory filing for each quarter within the period. To compare the performance of these three groups, we chose a range of indicators from the CAMEL rating that demonstrate asset quality (A), management (M), earnings (E), and liquidity (L). For assessing asset quality, we also looked at credit unions’ risk exposure and credit performance, using data from SNL Financial. We assessed the reliability of the SNL Financial database and NCUA’s enforcement data used in our analyses and found these data to be sufficiently reliable for our purposes. To determine the status of NCUA’s implementation of OIG recommendations, we reviewed the OIG’s corporate and credit union MLRs and their recommendation tracking documents and interviewed NCUA and NCUA OIG officials. We conducted this performance audit from May 2011 to December 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. To help stabilize the credit union system, NCUA created four new programs to provide liquidity to corporates. NCUA initiated two of these new programs, the Credit Union System Investment Program (CU-SIP) and the Credit Union Homeowners’ Affordability Relief Program (CU-HARP), in early 2009. 
Due to the restriction preventing the Central Liquidity Facility (CLF) from lending directly to the corporate credit unions, NCUA designed both programs, CU-SIP and CU-HARP, so that the CLF would lend to the credit unions, which agreed that they in turn would invest in NCUA-guaranteed notes issued by corporates. Starting in January 2009, corporates were required to use the invested funds to pay down their external secured debt. Money from the corporates’ debt issuances was used to free up collateral and to pay back loans made by the credit unions. In exchange for participating in the programs, the corporates were required to pay CLF borrowing costs to credit unions and an additional fee to the credit unions as an incentive for them to participate in the programs. CLF lending to credit unions totaled approximately $8.2 billion under CU-SIP and about $164 million under CU-HARP. All borrowings for both programs were repaid in 2010. CU-SIP. Credit unions received a 25-basis-point spread over the cost of borrowing from the CLF for investing in 1-year CU-SIP notes issued by participating corporate credit unions. Lending from the CLF for the CU-SIP started in January 2009 and ended in March 2009, totaling approximately $8.2 billion. All borrowings were repaid by the credit unions to the CLF by their respective maturity dates in 2010 (see fig. 12). CU-HARP. This 2-year program was designed to assist struggling homeowners by temporarily facilitating modifications to their monthly mortgage payments. Credit unions invested in CU-HARP notes from participating corporates. These notes had 1-year maturities and the option to extend the date of maturity for an additional year. The extension of the program’s 1-year maturity depended on the credit union’s continued good standing and available CLF funding. The CLF lent approximately $164 million to credit unions under the CU-HARP. All remaining notes under the program matured in December 2010, and the credit unions repaid all borrowings. The corporates paid a bonus to the credit unions, which was tied to a 50 percent reduction in mortgage payments to homeowners. According to NCUA, CU-HARP was not very successful, as the program’s design for credit unions to earn the bonus was complex and the time frame in which to apply was limited (see fig. 13). NCUA created two temporary guarantee programs in late 2008 and early 2009, called the Temporary Corporate Credit Union Liquidity Guarantee Program (Liquidity Guarantee Program) and the Temporary Corporate Credit Union Share Guarantee Program (Share Guarantee Program), to help stabilize confidence and dissuade withdrawals by credit unions, in an attempt to avoid a run on the corporates by member credit unions. These programs provided temporary guarantees on unsecured offerings by corporates and on credit union shares held at corporates in excess of $250,000. NCUA originally included all corporates under both guarantee programs for a limited time after signing a letter of understanding and agreement limiting activities and compensation. It later extended the programs for corporates that chose not to opt out of the programs. Liquidity Guarantee Program. NCUA guaranteed the timely payment of principal and interest on all corporates’ unsecured debt. The program’s debt issuance deadline was September 2011, with debt maturing no later than June 2017. However, the program was later revised so that any unsecured debt issued after June 2010 would mature no later than September 2012. 
NCUA stated that this revision was necessary to focus on short-term liquidity needs and bring the program’s deadline in line with its other stabilization efforts (see fig. 14). Share Guarantee Program. This program largely mirrors the Liquidity Guarantee Program. That is, NCUA guaranteed credit union shares in excess of $250,000 through February 2009, with the option of continuing participation in the program through December 2010. NCUA revised the program in May 2009 to extend the program’s deadline to December 2012 and shortened the length of the program’s coverage to shares with maturities of 2 years or less (see fig. 15). In mid-2009, NCUA transferred obligations from both the Liquidity Guarantee and Share Guarantee programs to the Stabilization Fund to limit NCUSIF’s losses stemming from any future corporate losses. According to NCUA officials, NCUSIF was obligated to provide for any guarantee payments that might arise from either the Liquidity Guarantee Program or the Share Guarantee Program. Based on NCUA’s 2009 financial statements, no guarantee payments were required for either program; however, as of December 19, 2011, audited 2010 financial statements for the Stabilization Fund were not available. On September 24, 2010, the NCUA Board adopted comprehensive new rules to govern corporates. Following the initial publication of the final rule, the corporate rule underwent several technical corrections, and five additions to the corporate rule were published on April 29, 2011. The corporate rule affects several parts of title 12 of the Code of Federal Regulations but is codified primarily in 12 C.F.R. Part 704. This table provides an overview of the corporate rule as initially published in October 2010 and later amended in April 2011. It summarizes the major provisions at a general level and gives references to where more detailed explanations can be found in the preambles of the October 2010 and April 2011 final rulemakings. The preambles describe in considerable detail the rationales for the provisions, section-by-section analyses of each provision, what NCUA initially proposed, the comments it received and its response to them, and how the final provisions differ from those originally proposed. In addition to the contacts named above, Debra R. Johnson, Assistant Director; Emily R. Chalmers; Gary P. Chupka; Nima Patel Edwards; Debra Hoffman; Barry A. Kirby; Colleen A. Moffatt; Timothy C. Mooney; Robert A. Rieke; and Gregory J. Ziombra made significant contributions to this report. Other contributors included Pamela R. Davidson, Michael E. Hoffman, Grant M. Mallie, Jessica M. Sandler, and Henry Wray. | Corporate credit unions (corporates)—financial institutions that provide liquidity and other services to the more than 7,400 federally insured credit unions—experienced billions in financial losses since the financial crisis began in 2007, contributing to failures throughout the credit union system and losses to the National Credit Union Share Insurance Fund (NCUSIF). Since 1998, Congress has required the National Credit Union Administration (NCUA), the federal regulator of the credit union system, to take prompt corrective action (PCA) to identify and address the financial deterioration of federally insured natural person credit unions (credit unions) and minimize potential losses to the NCUSIF. Legislation enacted in 2011 requires GAO to examine NCUA’s supervision of the credit union system and use of PCA. 
This report examines (1) the failures of corporates and credit unions since 2008, (2) NCUA’s response to the failures, and (3) the effectiveness of NCUA’s use of PCA. To do this work, GAO analyzed agency and industry financial data and material loss reviews, reviewed regulations, and interviewed agency officials and trade organizations. From January 1, 2008, through June 30, 2011, 5 corporates and 85 credit unions failed. As of January 1, 2008, the 5 failed corporates were some of the largest—accounting for 75 percent of all corporate assets—but the 85 failed credit unions were relatively small—accounting for less than 1 percent of total credit union assets. GAO found poor investment and business strategies contributed to the corporate failures. Specifically, the failed corporates overconcentrated their investments in private-label, mortgage-backed securities (MBS) and invested substantially more in private-label MBS than corporates that did not fail. GAO also found that poor management was the primary reason the 85 credit unions failed. In addition, NCUA’s Office of Inspector General has reported that NCUA’s examination and enforcement processes did not result in strong and timely actions to avert the failure of these institutions. NCUA took multiple actions to stabilize, resolve, and reform the corporate system. NCUA used existing funding sources, such as the NCUSIF, and new funding sources, including the Temporary Corporate Credit Union Stabilization Fund (Stabilization Fund), to stabilize and provide liquidity to the corporates. NCUA placed the failing corporates into conservatorship and liquidated certain poor-performing assets. In order to decrease losses from the corporates’ failures, NCUA established a securitization program to provide long-term funding for assets formerly held in the portfolios of failed corporates by issuing NCUA-guaranteed notes. To address weaknesses highlighted by the crisis, in 2010, NCUA issued regulations to prohibit investment in private-label MBS, established a PCA framework for corporates, and introduced new governance provisions. NCUA considered credit unions’ ability to repay borrowings from Treasury and included measures to reduce moral hazard, minimize the cost of resolving the corporates, and protect taxpayers. While NCUA has estimated the losses to the Stabilization Fund, it could not provide adequate documentation to allow NCUA’s Office of Inspector General or GAO to verify their completeness and reasonableness. Without well-documented cost information, NCUA faces questions about its ability to effectively estimate the total costs of the failures and determine whether the credit unions will be able to pay for these losses. GAO’s analysis of PCA and other NCUA enforcement actions highlights opportunities for improvement. For credit unions subject to PCA, GAO found those credit unions that did not fail were more likely subject to earlier PCA action—that is, before their capital levels deteriorated to the significantly or critically undercapitalized levels—than failed credit unions. GAO also found that for many of the failed credit unions, other enforcement actions were initiated either too late or not at all. GAO has previously noted that the effectiveness of PCA for banks is limited because of its reliance on capital, which can lag behind other indicators of financial health. 
GAO examined other potential financial indicators for credit unions, including measures of asset quality and liquidity, and found a number of indicators that could provide early warning of credit union distress. Incorporating such indicators into the PCA framework could improve its effectiveness (see the sketch following this summary). NCUA should (1) provide its Office of Inspector General the necessary documentation to verify loss estimates and (2) consider additional triggers for PCA that would require early and forceful regulatory action and make recommendations to Congress on how to modify PCA, as appropriate. NCUA agreed with both recommendations. |
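To make the second recommendation concrete, a PCA trigger keyed only to the net worth ratio could be supplemented with the kinds of asset quality, earnings, and liquidity measures discussed above. The Python sketch below shows one shape such a multi-indicator screen might take; the field names and thresholds are illustrative assumptions, not values drawn from NCUA rules or from GAO’s analysis.

# A minimal sketch of a multi-indicator early-warning screen of the kind the
# report suggests could supplement capital-based PCA triggers. All thresholds
# are assumed for illustration only.

def early_warning_flags(cu):
    """cu: dict of quarterly call-report figures for one credit union."""
    flags = []
    if cu["net_worth"] / cu["total_assets"] < 0.06:  # capital: assumed floor
        flags.append("low net worth ratio")
    if cu["net_income"] / cu["total_assets"] < 0:  # earnings (E): negative ROA
        flags.append("negative return on assets")
    if cu["delinquent_loans"] / cu["total_loans"] > 0.05:  # asset quality (A)
        flags.append("elevated loan delinquency")
    if cu["liquid_assets"] / cu["total_assets"] < 0.10:  # liquidity (L)
        flags.append("thin liquid-asset cushion")
    return flags

example = {"net_worth": 6.5e6, "total_assets": 100e6, "net_income": -0.2e6,
           "delinquent_loans": 3.0e6, "total_loans": 70e6, "liquid_assets": 8.0e6}
print(early_warning_flags(example))  # ['negative return on assets', 'thin liquid-asset cushion']

The design point is that the earnings, asset quality, and liquidity checks can fire before capital erodes, which is precisely the lag the report identifies in purely capital-based triggers.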
You are an expert at summarizing long articles. Proceed to summarize the following text:
As Southeast Asian countries, Indonesia and Vietnam are in a region of growing economic power. ASEAN, to which both countries belong, is seeking to form an economic community by the end of 2015 that would deepen economic integration among the 10 ASEAN member states (see fig. 1). World Bank data show that from 2000 through 2014, the collective real gross domestic product (GDP) of ASEAN countries increased by approximately 98 percent. According to International Monetary Fund (IMF) data, if the ASEAN countries were a single nation, their collective GDP in 2014 would represent the seventh-largest economy in the world. ASEAN countries are also important strategically, in part because they are located astride key sea lanes between the Persian Gulf and the economic centers of East Asia. On the basis of a 2011 United Nations (UN) Conference on Trade and Development Review of Maritime Transport, the U.S. Department of Energy estimated that more than half of the world’s annual merchant fleet tonnage passed through the South China Sea, which is bordered by Indonesia and Vietnam. According to data from the World Bank, Indonesia’s real GDP increased by around 108 percent from 2000 to 2014 (see the annual-rate conversion of these cumulative growth figures below). However, the World Bank estimated that in 2011, 16 percent of Indonesians lived below the poverty line of $1.25 per day. Indonesia is the world’s fourth-largest country by population. The United States established diplomatic relations with Indonesia in 1949, after Indonesia gained independence from the Netherlands. According to State, Indonesia’s democratization and reform process since 1998 has increased its stability and security and resulted in strengthened U.S.-Indonesia relations. In 2010, the United States and Indonesia officially launched the United States–Indonesia Comprehensive Partnership to broaden, deepen, and elevate bilateral relations between the two countries on a variety of issues, including economic and development cooperation. However, according to U.S. agencies, the U.S.-Indonesia bilateral relationship continues to face significant challenges because of Indonesia’s implementation of protectionist laws, limited infrastructure, and an unevenly applied legal structure. U.S. agencies’ stated goals for Indonesia include supporting the facilitation of U.S. trade and investment between the two countries. The U.S. Embassy in Indonesia is located in Jakarta, with U.S. consulates in Surabaya and Medan and a U.S. consular agency in Bali. China and Indonesia have a long-standing history of trade and interchange. The two countries established diplomatic relations in 1950, 5 years after Indonesia gained independence from the Netherlands. Relations between China and Indonesia were suspended in 1967, after the Indonesian government suspected China of complicity in planning a 1965 coup, but were restored in 1990. Since then, trade and economic relations between the two countries have grown rapidly, and in 2013, both countries agreed to elevate bilateral relations to a comprehensive strategic partnership. The partnership seeks to strengthen cooperation in several key areas, including trade, investment, and economic development. In 2015, the countries reaffirmed their support of the partnership and agreed, among other things, to expand market access and two-way investment for firms and to deepen their infrastructure and industrial cooperation. 
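The cumulative GDP growth figures cited above are easier to compare once converted to average annual rates. The short calculation below does that conversion, assuming steady compounding over the 14 years from 2000 to 2014 (the compounding assumption is ours, for illustration).

# Convert cumulative real GDP growth over 2000-2014 (14 years) to a compound
# average annual growth rate: (1 + total_growth) ** (1 / years) - 1.
def annual_rate(total_growth, years=14):
    return (1 + total_growth) ** (1 / years) - 1

print(f"ASEAN collectively: {annual_rate(0.98):.1%} per year")  # about 5.0%
print(f"Indonesia:          {annual_rate(1.08):.1%} per year")  # about 5.4%

In other words, the roughly doubled GDPs reported by the World Bank correspond to sustained real growth of about 5 percent a year.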
In April 2015, the Presidents of China and Indonesia released a statement setting a bilateral trade target of $150 billion by 2020—an increase of $70 billion from the 2015 target of $80 billion. The two Presidents stated that they will work toward the reduction of tariff and nontariff trade barriers and increase the frequency of trade missions between the two countries. China maintains an embassy in Jakarta and consulates in Medan, Surabaya, and Denpasar. Vietnam has experienced rapid economic growth in the past 15 years, primarily because of economic reforms it began implementing in the late 1980s that transformed it from a centrally planned economy to a type of socialist market economy. Data from the World Bank show that Vietnam’s real GDP increased by around 137 percent from 2000 to 2014. Vietnam has also made great progress in reducing poverty since the 1990s, according to the World Bank. In 2012, the World Bank reported that about 2 percent of Vietnamese lived below the poverty line of $1.25 per day. The United States established diplomatic relations with Vietnam in 1950, after Vietnam achieved limited independence from France. The United States and Vietnam suspended diplomatic relations at the end of the Vietnam War in 1975 but restored them in 1995. Since then, common strategic and economic interests have led Vietnam and the United States to improve relations across a wide range of issues. In 2006, Congress passed a comprehensive trade and tax bill that granted Vietnam permanent normal trade relations. In July 2013, the United States and Vietnam established the United States–Vietnam Comprehensive Partnership, an overarching framework for advancing the bilateral relationship in areas such as economic engagement. In October 2014, the United States relaxed an arms embargo, which it had imposed on Vietnam in 1984, to permit Vietnamese acquisition of maritime military materiel. However, the United States continues to express concerns about Vietnam’s human rights record and designates Vietnam as a nonmarket economy in antidumping procedures. Vietnam has expressed opposition to aspects of U.S. trade policy, including U.S. restrictions on its export of catfish into the U.S. market. U.S. agencies’ stated goals for Vietnam include supporting Vietnam’s economic governance. The U.S. Embassy in Vietnam is located in Hanoi, and the U.S. Consulate General is in Ho Chi Minh City. For centuries, China and Vietnam have had a turbulent relationship that continues to be affected by long-standing territorial disputes in the South China Sea. China has claimed sovereignty over the South China Sea, illustrating its claims by marking its maps with a “nine-dash line” that overlaps with Vietnamese claims and encircles most of the South China Sea, including the Paracel and Spratly Islands. During the Vietnam War, China served as a close ally of the North Vietnamese. In 1974, shortly before the war ended, China seized control of the Paracel Islands from the South Vietnamese. After the war, underlying tensions between the two countries surfaced and China-Vietnam relations deteriorated. China opposed Vietnam’s invasion of Cambodia in 1978, and following a series of disputes, the Chinese army crossed the Vietnamese border in February 1979 and fought a 2-week battle before the Chinese withdrew. In 1991, China and Vietnam normalized relations. Since then, China and Vietnam have established close economic relations. 
In 2008, the two countries agreed to establish a comprehensive strategic partnership that enhanced cooperation in multiple areas, such as trade and investment. However, in May 2014, tensions were reawakened when China placed an oil rig near the disputed Paracel Islands, sparking widespread protests in Vietnam; some of these protests turned violent and included attacks on Chinese and Taiwanese individuals and firms. Despite continuing tensions, in April 2015, the leaders of both countries pledged to strengthen their partnership, for example, by increasing cooperation on infrastructure development. China maintains an embassy in Hanoi and a consulate in Ho Chi Minh City. The value of China’s total trade in goods with Indonesia surpassed the United States’ in 2005 and was more than double the United States’ in 2014, when Chinese imports and exports both exceeded U.S. imports and exports. The United States and China are Indonesia’s fifth- and second-largest trading partners, respectively, while other ASEAN countries collectively represent Indonesia’s largest trading partner. Available data on U.S. and Chinese FDI, although limited, indicate that U.S. FDI greatly exceeded Chinese FDI in Indonesia from 2007 through 2012. However, Chinese FDI has significantly increased since 2010 and nearly reached U.S. levels of FDI in 2012. The value of China’s total trade in goods with Indonesia surpassed the United States’ in 2005 and was more than double the United States’ total trade in goods—$64 billion versus $28 billion, respectively—in 2014 (see fig. 2). China’s total goods trade with Indonesia increased in nominal terms every year after 2001 except 2008 and 2009, when the global economic crisis occurred, and 2013 and 2014, when Chinese imports of minerals from Indonesia declined. From 1994 through 2014, China’s total trade in goods with Indonesia grew much more rapidly than U.S. total trade in goods, with a slight decline in 2014. As figure 2 illustrates, from 1994 through 2014, China’s imports from, and exports to, Indonesia grew to exceed the United States’. Moreover, while the United States had a nearly continuous annual trade deficit with Indonesia during this period, China had an increasing trade surplus almost every year after 2007. Chinese imports from Indonesia surpassed U.S. imports from Indonesia in 2009 and increased significantly in 2010 and 2011. However, in 2013 and 2014, Chinese imports declined sharply, primarily because of a significant decrease in Chinese imports of minerals and slowing economic growth in China, according to an IMF report. The IMF report stated that in 2014, Indonesia implemented a ban on raw mineral ore exports, requiring all raw mineral ores to be processed in Indonesia to increase domestic value added. Chinese exports to Indonesia surpassed U.S. exports in 2000 and continued to grow through 2014. The United States had a trade deficit with Indonesia every year from 1994 through 2014, with the deficit growing from $4.2 billion in 1994 to $11.1 billion in 2014. China had a trade deficit with Indonesia every year from 1994 through 2006 but, with the exception of 2011, had a trade surplus every year from 2007 through 2014. China’s trade surplus increased dramatically from 2012 through 2014, from $2.3 billion to $14.6 billion. From 2000 through 2014, the composition of U.S. and Chinese trade in goods with Indonesia remained relatively stable, except for a significant overall increase in China’s mineral imports that peaked in 2013. 
In 2014, textiles represented the largest share of U.S. imports (26 percent), while minerals represented the largest share of Chinese imports (42 percent). Animals, plants, and food represented the largest share of U.S. exports in 2014 (32 percent), and machinery represented the largest share of Chinese exports (33 percent). Most of China’s, and almost half of the United States’, trade in goods with Indonesia in 2014 consisted of goods for industrial use (i.e., goods, such as rubber and coal, used in the production of other goods). See appendix II for more information about the composition and use of U.S. and Chinese trade in goods with Indonesia. In 2013, other ASEAN countries collectively represented Indonesia’s largest trading partner in total trade in goods, followed by China, Japan, the European Union (EU), and the United States. Exports. Indonesia exported $16 billion in goods to the United States, its fifth-largest export market, and $23 billion in goods to China, its third-largest export market, in 2013. Other ASEAN countries, Japan, and the EU represented Indonesia’s first-, second-, and fourth-largest goods export markets, respectively. The United States’ share of total Indonesian goods exports decreased from 12.1 percent in 2003 to 8.6 percent in 2013, while China’s share of total Indonesian goods exports increased from 6.2 percent to 12.4 percent during the same period. Imports. Indonesia imported $9 billion in goods from the United States, its sixth-largest import market, and $30 billion in goods from China, its second-largest import market, in 2013. Other ASEAN countries, Japan, the EU, and South Korea represented Indonesia’s first-, third-, fourth-, and fifth-largest goods import markets, respectively. The United States’ share of total Indonesian goods imports decreased from 8.3 percent in 2003 to 4.9 percent in 2013. China’s share of total Indonesian goods imports increased from 9.1 percent in 2003 to 16 percent in 2013. Figure 3 shows Indonesia’s exports and imports in 2003, 2008, and 2013, by trading partner. Indonesia ranks higher as an export and import partner of China than of the United States. Indonesia is China’s 15th-largest export market and the United States’ 34th-largest by value. In 2014, China exported $39.1 billion in goods to Indonesia, or 1.7 percent of global Chinese goods exports. In the same year, the United States exported $8.3 billion in goods to Indonesia—0.5 percent of global U.S. goods exports. Indonesia is China’s 20th-largest source of imported goods and the United States’ 24th-largest by value. In 2014, China imported $24.5 billion in goods from Indonesia, or 1 percent of global Chinese goods imports. In the same year, the United States imported $19.4 billion in goods from Indonesia—0.8 percent of global U.S. goods imports. The United States’ role relative to China’s in Indonesia’s trade of goods as well as services may be greater when the amount of intermediate U.S. inputs to the traded goods and services is taken into account. Because of the nature of global supply chains, for example, a consumer phone from a U.S. company might be assembled in China but include components manufactured in Germany, Japan, South Korea, and other countries. Data from the UN Commodity Trade database, which counts the full value of the export only for the exporting country, showed that in 2011, China exported $29.2 billion in goods to Indonesia, almost four times the $7.4 billion in goods that the United States exported to Indonesia. 
However, data from the Organisation for Economic Co-operation and Development (OECD) and the World Trade Organization (WTO), which attempt to account for value added to a finished export by each contributing country, show that China’s exports of value-added goods and services to Indonesia were around 1.8 times those of the United States. The OECD-WTO data suggest that Chinese exports to Indonesia contained a higher portion of components produced elsewhere than U.S. exports contained. Available data from the U.S. Bureau of Economic Analysis (BEA) indicate that U.S. trade in services with Indonesia totaled approximately $2.9 billion in 2013. The United States exported $2.2 billion in services to Indonesia in 2013, with travel and business services, respectively, as the largest and second-largest categories by value, and imported $692 million in services from Indonesia in 2013, with travel and business services, respectively, as the largest and second-largest categories by value. In 2013, total U.S.-Indonesian services trade represented 10 percent of the value of U.S.-Indonesian goods trade. China does not publish data on its trade in services with Indonesia. Data on FDI in Indonesia from the United States and China have limitations, in that these data may not accurately reflect the countries to which U.S. and Chinese FDI ultimately flows. For example, U.S. and Chinese data on FDI in Indonesia do not reflect investments by subsidiaries that U.S. and Chinese firms may set up in other countries and use to make investments in Indonesia. Conversely, U.S. and Chinese firms may set up subsidiaries in Indonesia that can be used to make investments in other countries. Given these limitations, available data show that U.S. FDI flows to Indonesia in 2007 through 2012 totaled about $10.2 billion, exceeding China’s reported FDI flows of about $2.7 billion. However, annual Chinese FDI flows increased significantly during this time, from $100 million in 2007 to $1.4 billion in 2012 in nominal terms (see fig. 4). According to BEA, over 90 percent of total U.S. FDI flows to Indonesia in 2007 through 2012 were concentrated in holding companies and mining. Data on U.S. and Chinese goods exports to Indonesia indicate that from 2006 through 2014, U.S. exports of goods to Indonesia were more similar to Japanese and EU exports than to Chinese exports, suggesting that the United States is more likely to compete directly with Japan and EU countries than with China. Figure 5 presents a commonly used index for assessing the similarity of the United States’ goods exports to Indonesia to those of China and other countries. Data from Commerce’s Advocacy Center, the World Bank, and the Asian Development Bank (ADB) provide some information about Indonesian government contracts that U.S. and Chinese firms competed for or won. Although these data represent a small share of U.S. and Chinese economic activity in Indonesia, they offer insights into the degree of competition between U.S. and Chinese firms for the projects represented. These data indicate that U.S. firms in Indonesia have competed more often with firms from other countries than with Chinese firms and have tended to win contracts in different sectors. Commerce Advocacy Center. Data from Commerce’s Advocacy Center show that U.S. firms that the center supported in fiscal years 2009 through 2014 competed for Indonesian government contracts most often, and for the highest total contract value, with French firms, followed by Chinese firms and firms from other countries (see table 1). 
According to the center’s data, Chinese firms competed with the U.S. firms for 8 of 32 contracts covering a range of sectors, including energy and power; defense; transportation; telecommunications; and computers, information technology, and security. The 8 contracts for which Chinese firms competed had a total value of $3.6 billion—34 percent of the $10.4 billion in total contract value for which the U.S. firms competed. In contrast, French firms competed against U.S. firms for 11 contracts with a total value of about $8.3 billion. World Bank. From 2000 through 2014, U.S. and Chinese firms won a relatively small share of World Bank-financed contracts in Indonesia and tended to win contracts in different sectors. U.S. and Chinese firms won a combined $33 million (1.1 percent) of the $2.94 billion in total contract dollars that the World Bank awarded in Indonesia. Of the $26 million that U.S. firms won, $24 million (94 percent) was for consultant services and the remainder was for goods. In contrast, of the $7 million in contract dollars that Chinese firms won, $6.9 million (96 percent) was for goods. Indonesian firms won $2.54 billion (86 percent) of the World Bank’s total contract dollars, while Japanese, French, Korean, and Australian firms won a combined $267 million (9 percent). ADB. U.S. firms won a small share of ADB contracts in Indonesia in 2013 and 2014, while Chinese firms won no ADB contracts. During this period, U.S. firms won three ADB contracts for a combined $10 million of the $410 million in total contract dollars that ADB awarded in Indonesia. One of the three contracts was for a geothermal power project, and the other two were consulting contracts worth less than $0.5 million each. U.S. agencies and private sector representatives have cited multiple challenges to trading and investing in Indonesia. Restrictive regulatory environment. According to officials from the Office of the U.S. Trade Representative (USTR), Indonesia’s regulatory environment constitutes the biggest market access barrier for U.S. firms. In 2014 and 2015, USTR reported that Indonesia’s trade and investment climate was characterized by, among other things, growing protectionism toward local business interests. According to the USTR reports, in recent years, Indonesia has enacted numerous regulations on imports, such as those relating to local content and domestic manufacturing requirements, which have increased the burden for U.S. exporters. In 2013, the United States initiated a WTO dispute settlement process with Indonesia because of Indonesia’s import licensing restrictions on horticulture and meat products. A representative of one U.S. firm with whom we spoke in Indonesia said that the firm had stopped importing soybeans into Indonesia for about a year because of Indonesian quotas, rising import taxes, and local origination requirements. Moreover, according to an official representing an American regional trade association, regulations may appear without advance notice or consultations with affected industries and may not be uniformly enforced. In addition, USDA’s 2014 Country Strategy Statement for Indonesia states that market access challenges for U.S. exports to Indonesia, such as Indonesia’s import licensing requirements, have dominated the U.S.-Indonesia bilateral relationship. The World Bank’s 2015 ease of doing business ranking of 189 economies, where a ranking of 1 indicates the most business-friendly regulations relative to other countries in the rankings, ranked Indonesia at 114. 
Indonesia ranked least favorably in enforcing contracts (172) and most favorably in ensuring protections for minority investors (43). In assigning the ranking, the World Bank said that Indonesia implemented reforms that reduced the tax burden on companies and made it easier for them to start a business and obtain access to electricity. Corruption. Although the Indonesian government investigates and prosecutes high-profile corruption cases, many investors consider corruption a significant barrier to doing business in Indonesia, according to USTR’s 2015 report on foreign trade barriers. A representative of one U.S. firm told us that after paying taxes to the Indonesian government, the firm may be asked to pay additional fines. U.S. firms and representatives of American regional trade associations also noted that while U.S. firms are bound by U.S. law not to engage in corrupt practices, some of the firms’ competitors do not face similar restrictions. Transparency International’s 2014 Corruption Perceptions Index ranked Indonesia at 107 of 175 countries and territories, where a ranking of 1 indicates the lowest perceived level of public sector corruption relative to other countries in the index. Weak infrastructure. Indonesia has weak and underdeveloped public infrastructure, such as ports, rail, and land transport, which increases transaction costs and inefficiencies and hampers exporters and investors, according to a report by Commerce and State. A representative of a private sector consulting firm operating in Indonesia said that Indonesia has poor infrastructure for transporting goods from factories to port. According to a State official, Indonesia’s economic growth is not likely to increase without significant investment in infrastructure. Violations of intellectual property rights. In 2015, USTR reported that Indonesia was one of 13 countries designated as a Priority Watch List country because of particular problems with respect to intellectual property rights protection, enforcement, or market access for persons relying on such rights. According to the report, the United States is concerned that, among other things, Indonesia’s efforts to enforce intellectual property rights have not been effective in addressing rampant piracy and counterfeiting. Limited access to land. An absence of clear Indonesian laws regarding the acquisition and use of land by investors has slowed infrastructure development projects, according to a State document. For example, the document stated that construction on a hydroelectric dam in West Java, although nearly complete as of January 2015, had been delayed because of land use disputes. A new regulation on land use is scheduled to go into effect in 2015, but a State document noted that this law is untested and that implementation may be erratic, especially in its initial years. Although the United States is engaging economically with Indonesia, the two countries have no free trade agreement (FTA), while China has both trade and investment agreements with Indonesia through its agreements with ASEAN countries. Also, the United States is not a party to any existing or proposed regional trade agreement that includes Indonesia, whereas China is engaging Indonesia through a proposed regional trade agreement. Both the United States and China support their domestic firms in Indonesia through financing and other means, although U.S. agencies estimate that Chinese financing has greatly exceeded U.S. financing. 
The United States and China also have provided support for economic development, with U.S. efforts focused on capacity building and Chinese efforts focused on physical infrastructure development. The United States has not established an FTA with Indonesia, although the two countries have a limited trade framework agreement to facilitate trade relations. The United States–Indonesia Trade and Investment Framework Agreement (TIFA) is intended to facilitate discussions of trade and investment issues. In contrast to FTAs, TIFAs are short agreements that provide strategic frameworks and structure for dialogue on trade and investment issues and prepare countries for eventual accession to high-standard trade agreements. The United States–Indonesia TIFA was signed in 1996 by USTR and Indonesia’s Ministry of Trade. According to USTR, U.S. officials meet regularly with Indonesian officials in both formal TIFA meetings and informal meetings to address bilateral trade and investment issues. The last two formal meetings that U.S. and Indonesian officials held under the TIFA occurred in September 2015 and June 2013, according to USTR. In the September 2015 meeting, officials discussed a range of issues, such as policies related to the information and communications technology sector and Indonesia’s Economic Policy Package. In addition, in June 2015, Congress reauthorized the Generalized System of Preferences (GSP), which provides duty-free treatment for 3,500 tariff lines from many developing countries, including Indonesia, through the end of 2017. According to a report by the Congressional Research Service, in 2012—the last full year of GSP implementation—Indonesia ranked fourth of 127 beneficiary countries in the value of U.S. imports that entered duty free through GSP. According to data in the report, of the $18 billion in U.S. imports from Indonesia in 2012, about 12 percent, or $2.2 billion, entered the United States duty free through GSP. In contrast, China has trade and investment agreements with Indonesia through the China-ASEAN Framework Agreement on Comprehensive Economic Cooperation, which comprises a series of agreements on trade and investment to expand access to each other’s markets. The China-ASEAN Trade in Goods Agreement, which entered into force in 2005, is intended to give China and Indonesia, as well as other ASEAN countries, tariff-free access to each other’s markets for many goods; it reduced most duties on Indonesia’s trade in goods with China to zero by 2012. According to a study by the ADB, in 2010, the average tariff on exports from six ASEAN countries, including Indonesia, to China was 0.1 percent, while the average tariff on Chinese exports to Indonesia was 0.6 percent. The China-ASEAN Trade in Services Agreement, which entered into force in 2007, is intended to provide foreign companies and firms with market access in agreed-on sectors of China and Indonesia, as well as other ASEAN countries, equivalent to the market access of domestic service providers in their own countries. The China-ASEAN Investment Agreement, which entered into force in 2010, committed China and Indonesia, as well as other ASEAN countries, to treat each other’s investors as equal to their domestic investors. Selected studies have projected that the China-ASEAN Trade in Goods Agreement generally increases trade for China and Indonesia and improves Indonesia’s economy. 
All but one of these studies also estimated that the agreement improves China’s economy. In addition, one study estimated that the agreement increases investment in China and Indonesia. In August 2014, China and Indonesia, as well as the other ASEAN countries, announced discussions to upgrade these agreements. In August 2015, China’s Commerce Minister announced that China and ASEAN had agreed to the goal of finalizing negotiations to upgrade these agreements by the end of 2015. Although the United States has concluded negotiations for a regional trade agreement known as the Trans-Pacific Partnership (TPP), Indonesia was not a party to these negotiations. In contrast, China and Indonesia are both parties to ongoing negotiations for the Regional Comprehensive Economic Partnership Agreement (RCEP), which negotiating parties have said they hope to complete in 2015. Indonesia’s trade with China and with the 14 other countries negotiating RCEP represented 66 percent of its total trade in goods in 2013. RCEP negotiating parties seek to expand access to trade and investment among the parties by combining their existing FTAs into a single comprehensive agreement. The United States is not a party to the RCEP negotiations. Our analysis of U.S. agency data showed that in fiscal years 2009 through 2014, the Export-Import Bank of the United States (Ex-Im) and the Overseas Private Investment Corporation (OPIC) provided about $2.5 billion in financing to support U.S. exports to, and investment in, Indonesia (see table 2). Although China does not publish data on its financing in Indonesia, our analysis of State data found that China has financed at least $36.4 billion in investment projects in Indonesia since 2009. Our analysis of Ex-Im and OPIC information for fiscal years 2009 through 2014 found the following. Ex-Im authorized about $2.4 billion in loans, loan guarantees, and insurance to support U.S. exports to Indonesia during this period. Ex-Im’s authorizations in Indonesia consisted mostly of loan guarantees. Ex-Im authorized its two largest loan guarantees in fiscal years 2011 and 2013, when it authorized more than $1.6 billion in guarantees for the purchase of commercial aircraft. OPIC committed about $86 million in financing to U.S. investment projects in Indonesia during this period. OPIC’s largest commitment in Indonesia consisted of a $50 million investment guarantee in fiscal year 2013 for a facility to help expand lending to small and medium-sized enterprises investing in Indonesia. China does not publish data on its financing for exports, imports, and investment in Indonesia by private and state-owned enterprises, but State reported that China has made available at least $36.4 billion in financing for investment projects in Indonesia since 2009. According to State, Chinese financing is generally offered in the form of soft loans by China’s Development Bank and Export-Import Bank. For example, State reported that in 2013, China’s Export-Import Bank financed a $6 billion coal mining infrastructure and transportation project in Papua and Central Kalimantan. In April 2015, China’s President reiterated China’s commitment to provide financing in support of Indonesia’s infrastructure and connectivity development. State, Commerce, and USDA maintain staff in Indonesia to provide export promotion services and to advocate for policies favorable to U.S. firms operating in Indonesia. State. State maintains an Economic and Environment Section at the U.S. 
Embassy in Jakarta that is organized into three focus areas: environment, science, technology, and health; trade and investment; and macroeconomics and finance. According to State officials, improving economic relations with Indonesia to facilitate greater U.S. trade and investment is a key priority of the section. Commerce. According to a senior Commerce official in Indonesia, Commerce personnel based in Indonesia work to help U.S. firms find local partners, obtain the appropriate licenses and registrations for conducting business in Indonesia, and interpret existing or new laws and regulations, among other duties. The official said that Commerce personnel also advocate for U.S. firms and lead or support trade missions. For example, Commerce officials led a trade mission focused on clean energy business practices in 2010 and led a trade mission focused on education in 2011. USDA. USDA personnel in Indonesia offer U.S. firms assistance with market access and market development issues, according to a USDA official. For example, according to the official, when Indonesia restricted imports of all U.S. live and processed poultry in response to an avian flu outbreak in Washington and Oregon in late 2014, USDA personnel worked with Indonesia to lift the restriction for U.S. poultry not affected by the outbreak. USDA also cooperates with industry commodity groups and provides market intelligence reports to U.S. firms, according to the official. The Chinese government has pursued agreements with Indonesia to support Chinese firms that do business there. For example: Special economic zones. China's Ministry of Commerce has worked with Indonesia to establish at least one special economic zone to facilitate cross-border trade and investment, according to Chinese embassy websites. According to the Chinese Ministry of Commerce, the government of China supports Chinese firms that establish and invest in a zone by offering financing and facilitating movement of materials, equipment, labor, and foreign exchange between China and the zone. In establishing these zones, China negotiates with Indonesia and other host governments in the areas of tax, land, and labor policies to support firms that choose to invest in the zones. Currency swaps. China has facilitated cross-border trade in local currencies in Indonesia through the establishment and renewal of a bilateral currency swap arrangement totaling 100 billion Chinese yuan, according to the Central Bank of Indonesia's website. The bank's website states that the arrangement promotes bilateral trade and direct investment for economic development between the two countries and helps stabilize financial markets by ensuring the availability of short-term liquidity. The People's Bank of China and the Central Bank of Indonesia established the arrangement in March 2009 and renewed it in October 2013 for 3 more years. The United States has fostered economic development in Indonesia through assistance aimed at strengthening governance and supporting energy development. In fiscal years 2009 through 2013, U.S. agencies provided about $373 million in trade capacity building assistance—that is, development assistance intended to improve a country's ability to benefit from international trade—to Indonesia. U.S.
trade capacity building assistance to Indonesia has supported initiatives aimed at, among other things, providing economic policy advisory services to the Indonesian government; strengthening key trade and investment institutions; improving Indonesia's competitiveness in global supply chains; and strengthening the capacity of the government of Indonesia to analyze, negotiate, and implement bilateral and multilateral trade agreements. The majority of U.S. trade capacity building assistance provided to Indonesia during this period—about 90 percent—was committed as part of a 5-year, $600 million Millennium Challenge Corporation (MCC) compact with Indonesia for a project that is designed to help the government of Indonesia to, among other things, increase productivity and reduce reliance on fossil fuels. (For more information about U.S. trade capacity building assistance to Indonesia, see app. IV.) The United States has also sought to ensure affordable, secure, and cleaner energy supplies in Indonesia and across the Asia-Pacific region through the U.S.-Asia Pacific Comprehensive Energy Partnership with Indonesia, which, according to State, was launched in 2012. China has assisted economic development in Indonesia by supporting Indonesia's connectivity and infrastructure development as well as its role in regional initiatives. According to a joint statement issued by Chinese President Xi Jinping and Indonesia's President Widodo in April 2015, China plans to support Indonesia's infrastructure and connectivity development by providing financing for railways, highways, ports, docks, dams, airports, and bridges, among other things. According to a speech by a senior Chinese official posted on a Chinese embassy website, the power plants built by Chinese firms make up one-quarter of Indonesia's power supply, and Chinese firms have built Indonesia's longest cross-sea bridge to facilitate the transport and flow of commerce between the Java and Madura Islands. State reported that between 2006 and 2015, China undertook six power plant projects, including two coal-fired power plants and a $17 billion, 7,000-megawatt hydropower plant; three rail projects; and a coal mining infrastructure and transportation project. China's Foreign Minister has publicly stated that Indonesia is the most important partner in its 21st Century Maritime Silk Road Initiative, which, according to a document released by the Chinese government in March 2015, aims to improve maritime cooperation and regional connectivity. In November 2014, China announced the creation of a $40 billion Silk Road Fund to help implement this initiative. In addition, Indonesia is one of 57 prospective founding members of China's proposed Asian Infrastructure Investment Bank, an international institution to finance infrastructure projects throughout the Asia-Pacific region. Under the bank's initial agreement, the bank's authorized capital is $100 billion, of which China has pledged $29.8 billion and Indonesia has pledged $3.4 billion. Bank documents indicate that the bank anticipates beginning operations before the end of 2015. The value of China's total trade in goods with Vietnam surpassed that of the United States in 2007 and was more than double the value of the United States' total trade in goods with Vietnam in 2014. However, U.S. imports from Vietnam exceed Chinese imports, while China's exports to Vietnam exceed the United States'. The United States is Vietnam's fourth-largest trading partner, and China is Vietnam's largest trading partner. Available data on U.S.
and Chinese FDI, although limited, indicate that Chinese FDI in Vietnam from 2007 through 2012 was more than double U.S. FDI in Vietnam during this time. The value of China's total trade in goods with Vietnam surpassed the United States' in 2007, and the gap has continued to grow. In 2014, China's total goods trade with Vietnam was $83.6 billion, while the United States' was $36.3 billion (see fig. 6). According to Vietnamese and U.S. government officials, an unknown amount of Chinese-Vietnamese trade occurs across the countries' porous border and outside official channels. Figure 6 illustrates the following: From 1994 through 2014, the United States' imports from Vietnam exceeded China's every year except 1994, 1995, and 2000. Chinese exports grew faster than U.S. exports from 1994 through 2014. The United States had an annual trade deficit with Vietnam from 1997 through 2014, while China had an annual trade surplus with Vietnam from 1994 through 2014. Both the U.S. deficit and the Chinese surplus have grown rapidly in recent years. From 2000 through 2014, the composition of U.S. and Chinese total trade in goods with Vietnam shifted from predominantly raw commodities to manufactured goods. In 2014, textiles represented the largest share of U.S. imports from Vietnam (31 percent) and machinery represented the largest share of Chinese imports from Vietnam (47 percent). Animals, plants, and food represented the largest share of U.S. exports to Vietnam (36 percent) in 2014, while machinery represented the largest share of Chinese exports to Vietnam (31 percent). In 2014, the majority of U.S. imports from Vietnam consisted of goods for consumer use, such as wooden bedroom furniture. The majority of U.S. exports to Vietnam and of Chinese imports from, and exports to, Vietnam in 2014 consisted of goods for industrial use, which are used in the production of other goods, such as microchips. See appendix III for more information about the composition and use of the United States' and China's trade in goods with Vietnam. China and the United States are Vietnam's largest and fourth-largest trading partners, respectively, in terms of their combined exports and imports of goods. Other ASEAN countries and the EU are Vietnam's second- and third-largest trading partners. Exports. In 2013, Vietnam exported $24 billion in goods to the United States and $13 billion in goods to China. After the EU, the United States was the second-largest market for Vietnamese goods exports, while China was the fifth-largest market for Vietnamese goods exports in 2013. In both 2004 and 2013, the United States' share of Vietnam's exports was around 18 to 19 percent. China's share of Vietnam's exports was around 10 percent in both 2004 and 2013. Imports. Vietnam imported $5 billion in goods from the United States, its seventh-largest source of imports, and $37 billion in goods from China, its largest source of imports, in 2013. Other ASEAN countries, South Korea, Japan, Taiwan, and the EU represented Vietnam's second-, third-, fourth-, fifth-, and sixth-largest sources of imported goods, respectively, in 2013. In both 2004 and 2013, the United States' share of Vietnam's imports was around 3 to 4 percent. China's share of Vietnam's imports increased significantly during the same period, from 14 percent in 2004 to 28 percent in 2013. Figure 7 shows Vietnam's exports and imports by trading partner in 2004, 2008, and 2013.
Vietnam is a larger export market for China than for the United States but a larger source of imported goods for the United States than for China. Vietnam was China's seventh-largest export market by value in 2014 but the United States' 44th-largest. In 2014, China exported $63.7 billion in goods to Vietnam, which accounted for 2.7 percent of China's global goods exports. In the same year, the United States exported $5.7 billion in goods to Vietnam, which accounted for 0.4 percent of total U.S. global goods exports. Vietnam was China's 26th-largest source of imported goods by value in 2014 and was the United States' 15th-largest. In 2014, China imported $19.9 billion in goods from Vietnam, which accounted for 1.0 percent of China's global goods imports. In the same year, the United States imported $30.6 billion in goods from Vietnam, which accounted for 1.3 percent of total U.S. goods imports from the world. The United States' role relative to China's in Vietnam's trade of goods as well as services may be greater when the amount of intermediate U.S. inputs to the traded goods and services is taken into account. Because of the nature of global supply chains, for example, a consumer phone from a U.S. company might be assembled in China but include components manufactured by Germany, Japan, South Korea, and other countries. Data from the UN Commodity Trade database, which attributes the full value of an export to the exporting country, showed that China exported $29.1 billion in goods to Vietnam in 2011, almost seven times the $4.3 billion in goods that the United States exported to Vietnam that year. However, data from the OECD and the WTO, which attempt to account for the value added to a finished export by each contributing country, show that China exported only about 2.5 times more in value-added goods and services to Vietnam than the United States did. The OECD-WTO data suggest that Chinese exports to Vietnam contained a higher proportion of components produced elsewhere than did U.S. exports. Our analysis of data from BEA and other sources on U.S. trade in services in Vietnam provides broad estimates rather than precise values. However, our calculations indicate that U.S. total trade in services with Vietnam totaled approximately $3.1 billion in 2012. Our analysis shows that the United States exported approximately $1.7 billion in services to Vietnam in 2012, with (1) business, professional, and technical services and (2) education as the largest and second-largest service categories by value, and imported approximately $1.4 billion in services from Vietnam in 2012, with (1) travel and passenger fares and (2) transportation services as the largest and second-largest service categories by value. In 2012, the value of U.S.-Vietnamese services trade was about 12 percent of the value of U.S.-Vietnamese goods trade. China does not publish data on its trade in services with Vietnam. Data on FDI in Vietnam from the United States and China have limitations, in that these data may not accurately reflect the countries to which U.S. and Chinese FDI ultimately flows. For example, U.S. and Chinese firms may set up subsidiaries in other countries, which are then used to make investments in Vietnam. Such investments would not be captured by U.S. and Chinese data on FDI in Vietnam. Conversely, U.S. and Chinese firms can set up subsidiaries in Vietnam, which can be used to make investments in other countries.
Given these limitations, available data show that from 2007 through 2012, China's reported FDI flows to Vietnam totaled approximately $1.2 billion, more than twice the U.S. FDI flows of approximately $500 million. During this period, China's reported annual FDI flows to Vietnam fluctuated but continued to exceed U.S. FDI flows every year except 2009 (see fig. 8). Although BEA does not publicly report data on U.S. FDI flows to Vietnam by type of investment, information that BEA provided to us indicates that from 2003 through 2013, on average, one-third of total U.S. FDI stock in Vietnam was in mining and manufacturing. Mining increased from 22 percent of U.S. FDI stock in Vietnam in 2003 to more than 50 percent in 2013, while manufacturing's share of total U.S. FDI stock in Vietnam fell from a high of 60 percent in 2006 to 28 percent in 2013. According to officials from Vietnam's Ministry of Agriculture and Rural Development, Chinese investment projects are mostly in the industrial, manufacturing, and construction sectors. Data on U.S. and Chinese goods exports to Vietnam indicate that since 2008, U.S. exports of goods to Vietnam have been more similar to Japanese and EU exports than to Chinese exports, suggesting that the United States is more likely to compete directly with Japan and EU countries than with China. Figure 9 presents a commonly used index for assessing the similarity of the United States' goods exports to Vietnam to those of China and other countries. Data from Commerce's Advocacy Center, the World Bank, and the ADB provide some information about Vietnamese government contracts that U.S. and Chinese firms competed for or won. Although these data represent a small share of U.S. and Chinese economic activity in Vietnam, they offer insights into the degree of competition between U.S. and Chinese firms for the projects represented. These data indicate that U.S. firms in Vietnam have competed more often with firms from other countries than with Chinese firms and have tended to win contracts in different sectors. Commerce's Advocacy Center. Data from Commerce's Advocacy Center show that U.S. firms that the center supported in fiscal years 2009 through 2014 competed for Vietnamese government contracts more often, and for higher total contract value, with firms from Japan, South Korea, and several other countries than with Chinese firms (see table 3). According to the center's data, Chinese firms competed with U.S. firms for 3 of 29 contracts, in the areas of energy and power, infrastructure, and services. These 3 contracts' total value was $92 million—less than 1 percent of the $28.8 billion in total contract value for which the U.S. firms competed. In contrast, Japanese and South Korean firms competed against U.S. firms for 10 and 6 contracts, respectively, with a combined value of more than $11 billion for each country. World Bank. From 2000 through 2014, U.S. and Chinese firms generally won World Bank-financed contracts in Vietnam in different sectors. Vietnamese firms received about $4.3 billion (70 percent) of the $6.1 billion in total contract value. Among firms from other countries, Chinese firms won the highest total contract value—$531 million—almost 9 percent of the total World Bank-financed contract value. U.S. firms won $133 million, about 2 percent of the total World Bank-financed contract value. Most of the contract dollars won by Chinese firms were for civil works (71 percent) and goods (28 percent). In contrast, most of the contract dollars won by U.S.
firms—$118 million (89 percent)—were for consultant services. Electrical equipment was the only category of procurement in which both U.S. and Chinese firms won more than $2 million in contract value. Chinese firms won $140 million, and U.S. firms won $14 million, in contract value for electrical equipment for World Bank projects in Vietnam. ADB. U.S. firms won one ADB contract in Vietnam in 2013 and 2014—a $130,000 contract for consulting services related to water conservation. During this period, Chinese firms won 15 contracts valued at more than $250 million. The Chinese firms' contracts included about $207 million for the construction of roads and a hydropower plant, with the remainder for goods for electricity transmission, distribution, and renewable energy. U.S. agencies and private sector representatives have articulated multiple challenges to trading and investing in Vietnam. Restrictive regulatory environment. A lack of transparency in the Vietnamese government's policies and decisions and slowness of government action are creating challenges for U.S. firms, according to State and Commerce. In addition, one U.S. business owner we spoke with in Vietnam described the regulatory environment he dealt with as "arcane, corrupt, and labyrinthine." According to a State and Commerce report, Vietnam has established regulations that limit the operations of foreign companies in the Vietnamese market. For example, unless a foreign company has an investment license permitting it to directly distribute goods in Vietnam, the company must appoint a local authorized agent or distributor. USTR also reports that Vietnamese government restrictions on certain types of imports, such as used consumer goods, machinery and parts, and some agricultural commodities, affect U.S. firms' ability to operate in Vietnam. The World Bank's 2015 Ease of Doing Business Index ranked Vietnam at 78 of 189 economies, where a ranking of 1 indicates the most business-friendly regulations relative to those of other countries in the index. The 2015 index ranked Vietnam most favorably on dealing with construction permits (22) and least favorably on paying taxes (173). In 2015, according to the World Bank, Vietnam implemented reforms that made paying taxes less costly for companies and improved its credit information system. Corruption. Reports by USTR, Commerce, and State cite corruption as a significant barrier faced by U.S. and other foreign firms in Vietnam. In addition, the owner of one small U.S. enterprise whom we spoke with in Vietnam said that onerous audit requirements and paperwork, such as the thick dossier required for obtaining an investment license, created barriers to trading and investing in Vietnam as well as opportunities for corruption. Transparency International's 2014 Corruption Perceptions Index ranked Vietnam at 119 of 175 countries and territories, where a ranking of 1 indicates the lowest perceived level of public sector corruption relative to other countries in the index. Weak infrastructure. State and Commerce reports cite poorly developed infrastructure, such as electrical and Internet infrastructure, as a challenge for U.S. firms doing business in Vietnam. In 2015, State reported that Vietnam needs an estimated $170 billion in additional infrastructure development in areas such as power generation, roads, railways, and water treatment to meet growing economic demand. According to a representative of one U.S.
firm whom we spoke with in Vietnam, the capacity of Haiphong Harbor, a port near Hanoi, was so poor that the firm chose to ship goods to other Vietnamese ports and reload them onto smaller coastal vessels at an increased cost to avoid Haiphong. In addition, a representative of a U.S. clothing manufacturer in Vietnam noted that the capacity of Vietnam's electrical grid is weak. As a result, the Vietnamese government occasionally institutes controlled brownouts—generally on days when the garment manufacturing plants are not operating—to try to alleviate strain on the electrical grid. According to the clothing manufacturer's representative, any expansion of the garment industry could be limited without additional electrical capacity. Violations of intellectual property rights. In 2015, USTR reported that Vietnam remained designated as a Watch List country because of concerns about intellectual property rights violations and theft. According to USTR, online piracy and sales of counterfeit goods are common; in addition, Vietnamese firms manufacture counterfeit goods. Moreover, Vietnam's capacity to enforce criminal penalties against counterfeiters is limited. Commerce similarly cited ineffective protection of intellectual property as a significant challenge. In addition, a representative of a technology company whom we spoke with in Vietnam stated that only 1 in 20 users of the company's software were paying for its use and that Vietnamese consumers knowingly purchase counterfeits. Predominance of state-owned enterprises. According to a Commerce and State report about Vietnam's business environment, state-owned enterprises dominate some sectors of the Vietnamese economy and receive some trade advantages over foreign firms. For example, according to the report, state-owned enterprises dominate the oil and gas, electricity, mining, and banking sectors, among others. The top three telecommunications companies in Vietnam are also state-owned enterprises and control nearly 95 percent of Vietnam's telecommunications market. Similarly, a private sector representative we spoke with in Vietnam stated that the Vietnamese government controls approximately 80 percent of Vietnam's insurance market. Moreover, according to the 2015 USTR National Trade Estimate Report on Foreign Trade Barriers, Vietnam's state-owned trading enterprises have been given the exclusive right to import certain products, including tobacco products; crude oil; newspapers, journals, and periodicals; and recorded media. In addition, since U.S. and other foreign firms are restricted from majority ownership in some sectors, including telecommunications and banking, they must partner with a domestic firm—generally a state-owned enterprise—to conduct business in these sectors. However, Commerce and State have reported that few Vietnamese firms, including state-owned enterprises, are audited against international standards and, as a result, U.S. firms have difficulty verifying the financial information of prospective partners. Shortages of skilled labor. Commerce and State reporting cited shortages of skilled labor as constraints to U.S. firms. In addition, a representative of one firm whom we interviewed in Vietnam noted that a lack of skilled labor in engineering limited the firm's ability to support the modernization of factory equipment. The United States has no FTA with Vietnam, but both countries are participants in the proposed regional TPP agreement, along with other countries.
In contrast, China has free trade and investment agreements with Vietnam through its agreements with ASEAN countries and is negotiating the proposed RCEP agreement with Vietnam and other countries. Both countries support their domestic firms in Vietnam through financing and other means, but U.S. agencies estimate that China has provided a larger amount of financing than the United States. In addition, the United States and China have each supported economic development in Vietnam, with U.S. efforts focused on capacity building to improve Vietnam's economic governance and Chinese efforts focused on improving physical infrastructure and connectivity. While the United States does not have an FTA with Vietnam, the two countries have a bilateral trade agreement (BTA) to facilitate their trade relations. The United States–Vietnam BTA, which was signed in 2000, enabled the establishment of normal trade relations with Vietnam—significantly reducing tariffs for many Vietnamese exports—and incorporated elements modeled on WTO agreements. As a result of the BTA, according to a 2014 study, the average U.S. tariff for Vietnamese manufacturing exports, such as textiles, fell from 33.8 percent to 3.4 percent. According to the U.S.-Vietnam Trade Council, under the BTA, Vietnam agreed to reduce tariffs, typically by one-third to one-half, on a broad range of products of interest to U.S. businesses, including toiletries, film, mobile phones, tomatoes, and grapes. USTR officials stated that the BTA remains in effect and contains some provisions beyond those required by the WTO. Since Vietnam joined the WTO, the majority of U.S. exports of manufactured and agricultural goods have faced Vietnamese tariffs of 15 percent or less, according to a USTR Trade Fact Sheet. However, according to a report by Commerce and State, U.S. businesses have noted that eliminating high tariffs on certain agricultural and manufactured goods, including fresh food, fresh and frozen meats, and materials and machinery, would create significant new opportunities. In contrast, China has free trade and investment agreements with Vietnam through the China-ASEAN Framework Agreement on Comprehensive Economic Cooperation, which comprises a series of agreements, on trade in goods, trade in services, and investment, to expand China's and ASEAN countries' access to each other's markets. The China-ASEAN Trade in Goods Agreement, which entered into force in 2005, is intended to give China and Vietnam, as well as other ASEAN countries, tariff-free access to each other's markets for many goods and will reduce most duties for Vietnam's trade in goods with China to zero by 2018. According to a study by the ADB, the average tariff on ASEAN countries' exports to China was 0.1 percent in 2010, and 90 percent of Chinese exports are expected to face no tariffs in Vietnam by 2015. In January 2015, Vietnam's Ministry of Finance stated that it had implemented the commitments it had made in the agreement to reduce tariffs. The China-ASEAN Trade in Services Agreement, which entered into force in 2007, is intended to provide market access in agreed-on sectors of China and Vietnam, as well as other ASEAN countries, to foreign companies and firms located in participant countries that is equivalent to domestic service providers' market access in their own countries.
The China-ASEAN Investment Agreement, which entered into force in 2010, is intended to commit China and Vietnam, as well as other ASEAN countries, to treat each other's investors as equal to domestic investors. Selected studies have projected that the China-ASEAN Trade in Goods Agreement generally increases trade for China and Vietnam. All but two of these studies also estimated that the agreement improves the economies of both China and Vietnam. In addition, one study estimated that the agreement increases investment in China and Vietnam. In August 2014, China and Vietnam, as well as the other ASEAN countries, announced discussions to upgrade these agreements. The second round of discussions, held in February 2015, focused on investment, economic cooperation, and other areas. In August 2015, China's Commerce Minister announced that China and ASEAN had agreed to the goal of finalizing negotiations on the upgrade by the end of 2015. The United States and Vietnam are participants in the proposed TPP, while China and Vietnam are participants in the ongoing RCEP negotiations. TPP. The United States, Vietnam, and 10 other countries have negotiated the TPP, with an agreement announced in October 2015. TPP negotiating parties agreed in 2011 that the TPP would address ensuring a competitive business environment and protecting the environment, labor rights, and intellectual property rights, among other issues. China is not a party to the TPP negotiations. RCEP. China, Vietnam, and 14 other countries are parties to the RCEP negotiations, which negotiating partners have said they hope to complete in 2015. Vietnam's trade with the other countries negotiating RCEP, including China, represented 58 percent of its total trade in goods for 2013. RCEP negotiating parties seek to expand access to trade and investment among the parties by combining their existing FTAs into a single comprehensive agreement. The United States is not a party to the RCEP negotiations. Vietnam has embraced TPP as part of its overall efforts to increase trade and access to foreign markets, particularly in the United States, according to State officials. State officials noted that Vietnam will need to overcome several challenges to meet TPP requirements; in particular, TPP's labor and alternative dispute resolution requirements may be difficult for Vietnam to implement. However, State officials noted that Vietnam has shown a commitment to improving its economic governance. According to U.S. officials, the dispute between Vietnam and China over China's placement of an oil rig near the disputed Paracel Islands in May through July 2014 briefly disrupted Chinese and Vietnamese trade. The officials noted that the incident also highlighted for Vietnamese officials the importance of their economic relationship with China and the need to diversify Vietnam's trade. According to State officials, China responded to Vietnamese riots and attacks on Chinese firms and individuals by slowing customs procedures and tightening controls at the typically porous China-Vietnam border. According to U.S. officials, after the riots, Vietnam reviewed its economic relationship with China but found that it could not afford to reduce its reliance on China. For example, according to the U.S. officials, Vietnamese officials had not known exactly how intertwined Vietnam's economy was with China's because of the amount of undocumented cross-border trade.
According to testimony before the U.S.-China Economic and Security Review Commission in May 2015, Vietnam relies on China for a number of intermediate goods as inputs for its exports; therefore, any disruptions to trade flows could spread throughout the Vietnamese economy. Our analysis of U.S. agency data showed that in fiscal years 2009 through 2014, Ex-Im and OPIC provided approximately $205 million in financing for exports to, and investment in, Vietnam (see table 4). Although China does not publish data on its financing in Vietnam, our analysis of State-reported data found that China has financed at least $4.5 billion in investment projects in Vietnam since 2008. Our analysis of Ex-Im and OPIC information for fiscal years 2009 through 2014 found the following. Ex-Im authorized about $148.9 million in loans, loan guarantees, and insurance to support U.S. exports to Vietnam. In fiscal year 2012, Ex-Im's largest authorization in Vietnam consisted of a $118 million direct loan to the government of Vietnam to purchase a telecommunications satellite. In fiscal year 2013, Ex-Im authorized $16.7 million for a long-term loan to Vietnam's National Power Transmission Corporation to purchase electricity transmission equipment. OPIC committed about $55.6 million in financing to U.S. investment projects in Vietnam. In 2014, OPIC committed to provide an investment guarantee of up to $50 million for the Mekong Renewable Resources Fund, which will invest in the environmental services and infrastructure sector, the renewable energy sector, and the energy efficiency sector in Vietnam, Cambodia, and Laos. China does not publish data on its financing for exports, imports, and investment in Vietnam by private and state-owned enterprises. However, according to information provided by the U.S. Embassy in Hanoi, China made available approximately $4.5 billion in financing from 2008 to 2013 for coal-fired power plants and for part of the Hanoi rail transit system, all constructed by Chinese firms. China's Export-Import Bank has also published brief summaries of major projects for some countries, such as Vietnam. One such summary indicates that the bank provided a concessional loan in 2013 to support the construction of a chemical plant in Vietnam to manufacture fertilizer. In addition, China provides financing and labor in support of projects in Vietnam. According to State officials, Vietnam's importation of Chinese labor for technical positions enhances China's role in the Vietnamese economy because the Vietnamese labor market lacks the capacity to fill midlevel technical positions. However, according to testimony before the U.S.-China Economic and Security Review Commission in May 2015, local Vietnamese have sometimes resented the importation of Chinese labor. According to State officials, such resentment contributed to the riots and violence in Vietnam after China placed the oil rig in the disputed Paracel waters. State, Commerce, and USDA maintain staff in Vietnam to provide export promotion services and policy advocacy for U.S. firms operating in Vietnam. For example: State. State's Economic Section at the U.S. Embassy in Hanoi advocates for U.S. investors and for trade and investment policies favored by the United States, according to a senior State official. The official said that the section also supports the negotiation of U.S. trade agreements, such as TPP, and other types of economic agreements, including a United States–Vietnam agreement related to taxation. Commerce.
According to Commerce officials in Vietnam, Commerce personnel based in the country assist U.S. firms by, among other things, matching them with local partners, organizing trade missions, and providing advocacy. For example, the Commerce officials said that they organized a trade mission and provided advocacy for U.S. civil nuclear firms. Another Commerce official told us that Commerce officials had worked with the Vietnamese government to remove an illegal duty on goods that a U.S. company was importing into Vietnam. USDA. USDA personnel help address market access and development issues in Vietnam for U.S. agricultural products, according to a USDA official in Vietnam. For example, according to the official, USDA personnel track Vietnamese government regulations that would affect U.S. agricultural products and provide comments to the Vietnamese government as needed. The official noted that USDA personnel also work directly with the Vietnamese government to help U.S. firms retrieve stranded cargo, particularly perishable goods, from Vietnamese customs. For instance, one firm's product was delayed in customs because it lacked a plant quarantine certificate that is not required in the United States. The Chinese government has also acted to support Chinese firms that do business in Vietnam. For example, according to China's Ministry of Foreign Affairs, China and Vietnam have established two economic cooperation zones in Vietnam, near Ho Chi Minh City and in Haiphong City, to facilitate trade and investment by offering tax and other advantages for Chinese firms that invest in the zones. U.S. agencies have assisted Vietnam in increasing economic openness and integration and improving economic governance. In fiscal years 2009 through 2013, U.S. agencies provided a total of $32 million in trade capacity building assistance—that is, development assistance intended to improve a country's ability to benefit from international trade—to Vietnam. U.S. trade capacity building assistance to Vietnam has supported initiatives aimed at, among other things, modernizing Vietnam's commercial laws and legal system, providing assistance to Vietnam relevant to its trade agreement commitments, improving the country's customs and border control, and supporting potential U.S. investment opportunities. The majority of U.S. trade capacity building assistance to Vietnam during this period—about 64 percent—was provided by the U.S. Agency for International Development (USAID) to, for example, improve Vietnam's regulatory environment to support economic growth and a better business and trade environment. For more information about U.S. trade capacity building assistance to Vietnam, see appendix IV. China has assisted Vietnam's economic development through infrastructure construction as well as efforts to develop connectivity between China and Southeast Asian countries. According to the U.S. Embassy in Hanoi, China provided about $4.5 billion of approximately $10.8 billion in large infrastructure construction projects awarded to Chinese firms in Vietnam from 2008 to 2014. These infrastructure projects included power plants, processing plants, and a railway (see fig. 10). The embassy reported that the remaining funding for infrastructure construction was provided by Australia, ADB, and the World Bank and through joint ventures. In addition, according to the U.S. Embassy in Hanoi, as of 2014, Chinese firms had won contracts to build 15 of 24 new thermal power plants in Vietnam.
In late 2013, China and Vietnam agreed to the implementation of the Shenzhen-Haiphong trade corridor to link the Vietnamese port city of Haiphong to Shenzhen in China. According to testimony before the U.S.-China Economic and Security Review Commission in May 2015, China has also announced that it will help upgrade the Haiphong port to accommodate large container ships. In addition, through the ADB-supported Greater Mekong Subregion (GMS) Economic Cooperation program, Vietnam and China are participating in a plan to connect Vietnam and other mainland Southeast Asian countries with each other and with China through a series of economic corridors that include improving transportation infrastructure. ADB's GMS Strategic Framework identifies corridors, including an eastern corridor running north-to-south and connecting China and Vietnam; an east-west corridor connecting Burma, Thailand, Laos, and central Vietnam; and a southern corridor connecting Burma, Thailand, Cambodia, and southern Vietnam. For example, according to Chinese government reporting, the $952 million Hanoi to Lao Cai freeway, which a Chinese contractor is building, is part of the GMS strategic framework. Similarly, the Master Plan on ASEAN Connectivity envisions a rail link through Vietnam connecting the interior of China with Singapore and connecting the capital cities in Vietnam, Cambodia, and Thailand with a spur line to the capital of Laos. This rail link would complement the various transport corridors under the GMS and other existing transport networks, with the aim of creating an integrated transport network throughout Southeast Asia and Asia as a whole. The railway running from China to Ho Chi Minh City in the south of Vietnam is already complete. The Master Plan on ASEAN Connectivity also calls for a network of highways meeting certain quality standards and connecting Vietnam with all of its neighbors, including China. Vietnam has constructed its portions of the highway network. Vietnam is one of 57 prospective founding members of China's proposed Asian Infrastructure Investment Bank, an international institution to finance infrastructure projects throughout the Asia-Pacific region. Under the bank's initial agreement, the bank's authorized capital is $100 billion, of which China has pledged $29.8 billion and Vietnam has pledged $663 million. Bank documents indicate that the bank anticipates beginning operations before the end of 2015. We provided a draft of this report for review and comment to the Departments of Agriculture, Commerce, State, and the Treasury and to MCC, OPIC, USAID, Ex-Im, the U.S. Trade and Development Agency, and USTR. We received technical comments from Commerce, State, Treasury, MCC, OPIC, Ex-Im, and USTR, which we incorporated as appropriate. We are sending copies of this report to the Secretaries of Agriculture, Commerce, State, and the Treasury; the Chairman of Ex-Im; the Administrator of USAID; the U.S. Trade Representative; the Director of the U.S. Trade and Development Agency; the Chief Executive Officers of OPIC and MCC; and other interested parties. In addition, the report is available at no charge on the GAO website at www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Public Affairs and Congressional Relations may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix V. We examined available information about U.S.
and Chinese trade and investment, competition, and actions to further economic engagement in Indonesia and Vietnam. This report is a public version of a sensitive but unclassified report that we are issuing concurrently. This report addresses the same objectives, and employs the same methodology, as the sensitive report. We conducted fieldwork in Jakarta, Indonesia, and in Hanoi and Ho Chi Minh City, Vietnam. We based our selection of these two countries, among the 10 members of the Association of Southeast Asian Nations (ASEAN), on the amounts of U.S. and Chinese exports to, and imports from, each country; foreign direct investment (FDI) in each country; and development assistance in each country. We also considered whether (1) a country participated in U.S. and Chinese trade agreements or was a negotiating partner in the Trans-Pacific Partnership, (2) any regional institutions were located in the country, (3) the country was an emerging partner based on gross domestic product, and (4) the country was a South China Sea claimant. To describe U.S. and Chinese trade and investment in Indonesia and Vietnam, we analyzed data on U.S. and Chinese trade in goods, trade in services, and FDI. To assess the reliability of these data, we cross-checked the data on trade in goods and FDI for internal consistency, and consulted with U.S. officials on the data on trade in goods and the U.S. data on trade in services and FDI. Because of the limited availability of data and the differing contexts for the data sets we report, the time period for each of these data sets varied. We determined that the data were sufficiently reliable for the purposes of our report and have noted caveats, where appropriate, to indicate limitations in the data. To obtain data on U.S. and Chinese trade in goods from 1994 through 2014, we accessed the United Nations' Commodity Trade Statistics (UN Comtrade) database through the U.S. Department of Commerce's (Commerce) Trade Policy Information System. The UN Comtrade database provides data for comparable categories of exports and imports of goods for the United States and China. Because, according to a Commerce official, the goods exports data that China reports to the UN Comtrade database do not distinguish total exports from re-exports (i.e., goods that are first imported and then exported in substantially the same condition), we used data on total goods exports, which include re-exports, to ensure the comparability of U.S. and Chinese data on goods exports. The data on goods exports from the UN Comtrade database show the free-on-board prices of the goods, which exclude transportation and insurance charges. For imports, we used data on general imports, which include goods that clear customs as well as goods that enter bonded warehouses or foreign trade zones. The data on goods imports show the prices paid for the goods, including the cost of freight and insurance. We determined that the UN Comtrade data on trade in goods for the United States and China were generally reliable for comparing trends over time and the composition of trade. To categorize the goods traded by the United States and China, we assigned each good recorded in the UN Comtrade database to one of the UN's three Broad Economic Categories—capital, intermediate, or consumer. For goods that the UN does not classify as capital, intermediate, or consumer, we created an unclassified category. For example, the UN does not classify passenger motorcars as capital or consumer goods.
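The categorization step described above amounts to a table lookup with a fallback for goods that lack a single end use. The following is a minimal sketch in Python of that logic; the HS codes, BEC codes, and trade values shown are illustrative placeholders, not the UN's actual concordance tables.

```python
# Minimal sketch of the end-use categorization step, assuming a simplified
# concordance. Codes and values below are illustrative, not the UN's tables.

# Hypothetical mapping from Broad Economic Category (BEC) code to end use.
BEC_END_USE = {
    "41": "capital",       # e.g., machinery other than transport equipment
    "22": "intermediate",  # e.g., processed industrial supplies
    "62": "consumer",      # e.g., semi-durable consumer goods
}

def categorize(trade_records, hs_to_bec):
    """Sum trade value by end use; goods with no single end-use
    classification (such as passenger motorcars) fall into 'unclassified'."""
    totals = {"capital": 0.0, "intermediate": 0.0,
              "consumer": 0.0, "unclassified": 0.0}
    for hs_code, value in trade_records:
        bec_code = hs_to_bec.get(hs_code)
        totals[BEC_END_USE.get(bec_code, "unclassified")] += value
    return totals

# Illustrative records: (HS code, trade value in dollars).
records = [("8471", 1_000_000), ("5201", 250_000), ("8703", 500_000)]
hs_to_bec = {"8471": "41", "5201": "22"}  # "8703" (motorcars) is unmapped
print(categorize(records, hs_to_bec))
```

In this sketch, any good whose BEC code does not map to a single end use is routed to the unclassified category, mirroring the treatment of goods such as passenger motorcars described above.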
To examine each country’s trade in goods with its trading partners over time, we analyzed data from the ASEANstats database for 2003, 2008, and 2013 for Indonesia and 2004, 2008, and 2013 for Vietnam. Because some of Indonesia’s and Vietnam’s trading partners do not report data to the UN Comtrade database, we used data from the ASEANstats database as a comprehensive set of data on trade in goods for all of Indonesia’s and Vietnam’s trading partners. We compared trade data from the ASEANstats and the UN Comtrade databases and found some differences in values of bilateral trade between Indonesia and Vietnam and their trading partners. Reasons for the differences include differences in the valuation of goods, differences in data quality, and the omission of some Indonesia and Vietnam trading partners from UN Comtrade data. We determined that the data from the ASEANstats database for Indonesia and Vietnam were generally reliable for comparing each country’s trade in goods with its trading partners over time. We determined that the data from the ASEANstats database for Indonesia and Vietnam were generally reliable for comparing each country’s trade in goods with its trading partners over time. To illustrate the importance of accounting of a country’s exports that originate in other countries, we analyzed data from the Organisation for Economic Co-operation and Development (OECD) and the World Trade Organization (WTO) on trade in value-added goods and services. For U.S. trade in services with Indonesia, we used publicly available data from Commerce’s Bureau of Economic Analysis (BEA). BEA’s data on trade in services with Vietnam for several categories—travel and passenger fares, transportation, education, and “other” private services— are based on data from various sources. According to BEA, its survey data are from mandatory surveys of primarily U.S. businesses with services trade that exceeds certain thresholds. BEA does not survey a random sample of U.S. businesses and therefore does not report the data with margins of error. We calculated the value of U.S. trade in services with Vietnam for 2012 based on tabulations prepared for us by BEA and other sources, including the U.S. Census Bureau. Our estimates of U.S. trade in services with Vietnam represent broad estimates rather than precise values. We extrapolated values for certain services at the country level from broader data (e.g., we calculated values for travel services by multiplying the number of travelers for Vietnam by the average traveler expenditure for the region). We calculated values for other services (e.g., business, professional, and technical services) from a range of estimates based on survey data. When the volume of trade for a service was presented as a range, we used the midpoint value to estimate the volume of trade for that service. When the volume of trade for a service was presented as a range and described by BEA as trending upward, we used the lowest value for the earlier years and the highest value for the later years. For data on U.S. firms’ investments in Indonesia and Vietnam from 2007 through 2012, we used data that we obtained directly from BEA. For Chinese firms’ investments, we used data from the UN Conference on Trade and Development as reported by China’s Ministry of Commerce. To identify patterns in, and to compare, U.S. and Chinese FDI, we used U.S. and Chinese data on FDI and noted in our report the following limitations. As we have previously reported, both U.S. 
and Chinese FDI may be underreported, and experts have expressed particular concern regarding China's data. U.S. and Chinese firms set up subsidiaries in places such as the Netherlands and the British Virgin Islands, which can be used to make investments that are not captured by U.S. and Chinese data on FDI. Experts state that this could be a significant source of underreporting of China's FDI. According to BEA, data on U.S. FDI are based on quarterly, annual, and benchmark surveys. BEA's benchmark survey is the most comprehensive survey of such investment and covers the universe of U.S. FDI. BEA notes that its quarterly and annual surveys cover samples of businesses with FDI that exceed certain thresholds. Because BEA does not survey a random sample of businesses, and therefore does not report the data with margins of error, our report does not include margins of error for BEA data. China does not provide a definition of FDI when reporting FDI data. However, the types of data included in Chinese FDI data (e.g., equity investment data and reinvested earnings data) appear similar to data reported for U.S. FDI, for which the United States uses OECD's definition. Despite the limitations of China's FDI data, various reports, including those published by international organizations such as the International Monetary Fund (IMF), government agencies, academic experts, and other research institutions, use China's reported investment data to describe China's FDI activities. In addition, despite some potential underreporting of FDI data, we determined that the FDI data were reliable for reporting general patterns when limitations are noted. Because of challenges in determining appropriate deflators for some data, we used nominal rather than inflation-adjusted values for U.S. and Chinese trade and investments in Indonesia and Vietnam. However, we first tested the impact of deflating these values and found a limited impact on descriptions of the overall trends. For example, using the U.S. gross domestic product deflator to remove inflation in the goods trade values included in this report would cause total Chinese trade in goods with Indonesia to surpass total U.S. trade in goods in 2005, similar to trends shown for nominal trade values. U.S. total trade in goods with Indonesia increased by a factor of 2.8 from 1994 through 2014 if not adjusted for inflation and by a factor of 1.9 if adjusted for inflation. Over the same period, Chinese total trade in goods with Indonesia increased by a factor of 24.1 if not adjusted for inflation and by a factor of 16.3 if adjusted for inflation. To assess the extent of competition between exporters from the United States, China, and other countries, we calculated an export similarity index to compare U.S., Chinese, and other countries' exports to Indonesia and Vietnam in 2006 through 2014. The export similarity index is a measure of the similarity of exports from two countries to a third country. For example, to calculate the index for U.S. and Chinese exports to Indonesia and Vietnam, we first calculated, for each type of good that the United States and China export, the share of that good in the United States' and China's total exports to Indonesia and Vietnam. We then took the minimum of the United States' and China's shares. The index is the sum of the minimum shares for all types of goods that the United States and China export to Indonesia and Vietnam.
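Expressed as a formula, for two exporters a and b and a common destination, the index is the sum over all goods g of min(share_a(g), share_b(g)), where share_x(g) is good g's share of exporter x's total exports to the destination; a value of 1 indicates identical export composition and 0 indicates no overlap. The following is a minimal sketch in Python of this calculation; the four-digit product codes and dollar values are hypothetical and for illustration only.

```python
# Minimal sketch of the export similarity index described above.
# Product codes and values are illustrative, not actual trade data.

def export_similarity(exports_a, exports_b):
    """exports_a, exports_b: dicts mapping product code -> export value
    to a common destination. Returns the index, between 0 and 1."""
    total_a = sum(exports_a.values())
    total_b = sum(exports_b.values())
    index = 0.0
    for good in set(exports_a) | set(exports_b):
        share_a = exports_a.get(good, 0.0) / total_a
        share_b = exports_b.get(good, 0.0) / total_b
        index += min(share_a, share_b)
    return index

# Hypothetical four-digit HS values (millions of dollars).
us_exports = {"8802": 500, "1005": 300, "8471": 200}     # aircraft, corn, computers
china_exports = {"8517": 600, "8471": 300, "6203": 100}  # phones, computers, apparel
print(round(export_similarity(us_exports, china_exports), 3))  # 0.2
```

In this hypothetical example, the two export baskets overlap only in product 8471, so the index is min(200/1000, 300/1000) = 0.2, indicating limited similarity.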
We used data on goods exports from the UN Commodity Trade database at the four-digit level and calculated each country's export of a particular good as a share of that country's total exports to Indonesia and Vietnam. We also analyzed data from Commerce's Advocacy Center on host-government contracts and data for contracts funded by the Asian Development Bank (ADB) and World Bank. Although these data represent a small share of activity in Indonesia and Vietnam, they provide insights into the degree of competition between U.S. and Chinese firms for the projects represented. Commerce's Advocacy Center data comprised cases where U.S. firms requested the agency's assistance in bidding for host-government contracts in either Indonesia or Vietnam from 2009 through 2014. Because these data included the nationality of other firms bidding on a host-government contract, we used this information to determine the extent to which Chinese firms or firms of other nations were competing with U.S. firms for these contracts. We counted the numbers of contracts and summed the value of contracts for which each foreign country's firms competed against U.S. firms. For Vietnam, we excluded five contracts for which the nationalities of competitors were not identified. In cases where foreign competitors comprised a consortium of firms from different countries, we counted the whole value of the contract in each competing nationality's total. We also used the Advocacy Center's classification of contracts by sector to determine the sectors in which Chinese firms competed for the highest proportion of contracts. To determine the reliability of these data, we manually checked the data for missing values and also reviewed information about the data's collection. In addition, we interviewed Advocacy Center staff about the data. Advocacy Center staff told us that data from before 2010, when the center began using a new database, may be incomplete because data for some contracts that were closed before 2010 may not have been transferred to the new database. Overall, we found the Advocacy Center data to be reliable for reporting on competition between U.S. and other firms, including Chinese firms, in Indonesia and Vietnam. The World Bank publishes data on the value, sector, and suppliers of its contracts in Indonesia and Vietnam. We used the World Bank's classification of contracts into procurement categories (goods, civil works, consultant services, and nonconsultant services) to compare the value and types of contracts that U.S. and Chinese firms won from 2001 through 2014. However, we combined the consultant services and nonconsultant services categories into one category, "consultant and other services." The World Bank data include contracts that were reviewed by World Bank staff before they were awarded. To determine the reliability of these data, we electronically checked the data for missing values and possible errors. We also contacted World Bank personnel to learn how the data were collected and identify any limitations of the data. We found that the data for contracts funded by the World Bank were generally reliable for the purpose of demonstrating U.S. and Chinese competition in Indonesia and Vietnam over time. We used ADB's published data on the value, sector, and recipient of its contracts for consulting services, goods, and civil works provided as technical assistance or funded by loans and grants to Indonesia and Vietnam in 2013 and 2014 to compare the value and types of contracts won by U.S.
and Chinese firms. ADB only publishes data for consulting contracts over $0.1 million in value and other contracts over $1.0 million, so our analysis of ADB contracts does not include some smaller ADB contracts. In addition, a portion of the ADB data did not have the contracts classified according to the nature of the contract (construction, consulting services, goods, turnkey, and others). Therefore, we classified contracts won by U.S. and Chinese firms that were missing these categories according to those used in the rest of the data. To determine the reliability of these data, we checked the data for missing values and other types of discrepancies. We found that the ADB data were generally reliable for our purpose of reporting on U.S. and Chinese competition in Indonesia and Vietnam in 2013 and 2014. To identify the challenges that U.S. firms face when conducting business in Indonesia and Vietnam, we reviewed the Office of the United States Trade Representative's (USTR) 2014 and 2015 National Trade Estimate Reports on Foreign Trade Barriers and its 2015 Special 301 Report on intellectual property rights protections. We reviewed the U.S. Department of Agriculture's (USDA) country strategies for Indonesia and Vietnam, Department of State (State) cables, and Commerce and State's 2014 reports on doing business in Indonesia and Vietnam. We also interviewed representatives of 12 U.S. firms in Indonesia and Vietnam, in sectors such as agriculture and manufacturing, as well as representatives of five private sector and research organizations, such as the American Chamber of Commerce-Vietnam and the Center for Strategic and International Studies. The views expressed in these interviews are not generalizable. To examine the actions that the U.S. and Chinese governments have taken to further economic engagement in Indonesia and Vietnam, we reviewed regional and country studies and U.S. and Chinese agency documents and interviewed U.S. and third-country officials, officials from private sector business associations, and experts from research institutes. We tried to arrange visits with Chinese government officials in Indonesia and Vietnam and in Washington, D.C.; however, they were unable to accommodate our requests for a meeting. U.S. agencies included in the scope of our study were USDA, Commerce, State, the Department of the Treasury, USTR, the Millennium Challenge Corporation, the U.S. Agency for International Development (USAID), the Export-Import Bank of the United States (Ex-Im), the Overseas Private Investment Corporation (OPIC), and the U.S. Trade and Development Agency. To obtain information about U.S. and Chinese trade agreements with Indonesia and Vietnam, we reviewed the trade agreements; U.S. and Chinese government documents; studies from research institutions; prior GAO reports; and documents from multilateral organizations, such as WTO. We identified studies assessing the effect of the China-ASEAN free trade agreement on China's, Indonesia's, and Vietnam's economies by searching the ProQuest database (which includes the EconLit database) and the studies of international organizations such as ADB, and we selected and reviewed studies that estimated the impact of the agreement on these three economies. We also interviewed U.S. officials in Indonesia and Vietnam, officials from private sector business associations, and experts from research institutes.
To calculate the percentage of Indonesia’s and Vietnam’s total goods trade represented by their trade with the participants in the Regional Comprehensive Economic Partnership Agreement, we used data on trade in goods from the ASEANstats database. To determine the reliability of these data, we compared trade data from the ASEANstats and the UN Comtrade databases and found some differences in values of bilateral trade between ASEAN countries and their trading partners. Reasons for the differences include differences in the valuation of goods, differences in data quality, and the omission of some ASEAN trading partners from UN Comtrade data. We determined that the data from the ASEANstats database for Indonesia and Vietnam were generally reliable for comparing each country’s trade in goods with its trading partners. To obtain information about U.S. financing in Indonesia and Vietnam, we compiled Ex-Im and OPIC data from these agencies’ annual reports and congressional budget justifications and interviewed agency officials to provide additional context and to clarify elements of the data. Where relevant, we note that additional Ex-Im insurance may include Indonesia and Vietnam but do not include these data in our totals. To determine the reliability of these data, we interviewed agency officials and checked their published annual reports against agency-provided summary data to determine any limitations or discrepancies in the data. We determined that data from Ex-Im and OPIC were generally reliable for presenting trends and aggregate amounts by year. To document U.S. efforts to provide export promotion services in Indonesia and Vietnam, we reviewed information on State’s Economic Sections at the U.S. embassies in Indonesia and Vietnam and interviewed State, Commerce, and USDA officials in Washington, D.C., and in Vietnam and Indonesia. To describe Chinese financing in Indonesia and Vietnam, we used information reported by State and China’s Export-Import Bank. We also interviewed private sector and research institute representatives. To document Chinese support for firms in Indonesia and Vietnam, we used publicly available information from a variety of sources, including Chinese embassy websites; the Bank of Indonesia’s website; China’s Ministry of Commerce; and Xinhua, China’s state press agency. To document U.S. support for economic development and integration in Indonesia and Vietnam, we used the USAID trade capacity building database to capture U.S. development assistance efforts related to trade in Indonesia and Vietnam. USAID collects data to identify and quantify the U.S. government’s trade capacity building activities in developing countries through an annual survey of agencies on behalf of USTR. We also reviewed agency project summaries and interviewed agency officials in Washington, D.C., and in Indonesia and Vietnam. To determine the reliability of these data, we interviewed agency officials regarding their methods for compiling and reviewing the data. We determined that data from USAID’s trade capacity building database were sufficiently reliable for our purposes. To describe China’s support for regional integration in Indonesia, we assessed public statements from Chinese and Indonesian officials and information reported by U.S. agencies, including State, and we interviewed U.S. and Indonesian officials. To describe China’s support for regional integration in Vietnam, we assessed information reported by U.S. agencies, including State and USAID, and interviewed U.S.
and Vietnamese officials. We also reviewed publicly available information on the Asian Infrastructure Investment Bank’s website. We conducted this performance audit from April 2014 to October 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. From 2000 through 2014, the composition of U.S. and Chinese trade in goods with Indonesia, in terms of value, remained relatively stable except for a significant increase in China’s mineral imports (see figs. 11 and 12). Textiles have represented the largest share of U.S. imports from Indonesia since 2005. China’s mineral imports increased from 25 percent of its total imports from Indonesia in 2000 to a peak of 58 percent in 2013 before declining to 42 percent in 2014. Animals, plants, and food generally represented the largest share of U.S. exports to Indonesia from 2005 through 2014, and machinery represented the largest share of Chinese exports to Indonesia from 2000 through 2014. In 2014, almost half of the United States’ and most of China’s goods trade with Indonesia consisted of goods for industrial use, most of which are intermediate goods (see fig. 13). Among the industrial goods that the United States traded with Indonesia, rubber was the top U.S. industrial import and cotton was the top U.S. industrial export in 2014. Among the industrial goods that China traded with Indonesia in 2014, coal was the top Chinese industrial import and phones for cellular and other networks were the top Chinese industrial export. In 2014, the United States exported $1.9 billion of civilian aircraft, aircraft engines, and aircraft parts—the overall top U.S. export to Indonesia, which represents 23 percent of U.S. exports to Indonesia and includes capital, intermediate, and consumer goods. From 2000 through 2014, the composition of U.S. and Chinese trade in goods with Vietnam generally shifted, in terms of value, from predominantly raw commodities to manufactured goods (see figs. 14 and 15). In 2000, the largest share of U.S. imports from Vietnam consisted of animals, plants, and food, while the largest share of Chinese imports from Vietnam consisted of minerals. However, by 2014, the largest share of U.S. imports from Vietnam consisted of textiles, which rose from 6 percent of U.S. imports in 2000 to 31 percent in 2014, while the largest share of Chinese imports consisted of machinery, which rose from 1 percent in 2000 to 47 percent in 2014. From 2000 through 2014, animals, plants, and food grew to represent the largest share of U.S. exports to Vietnam, while machinery grew to represent the largest share of Chinese exports to Vietnam. In 2014, the majority of U.S. imports from Vietnam consisted of goods for consumer use, while the majority of U.S. exports to Vietnam—as well as Chinese imports from, and exports to, Vietnam—consisted of goods for industrial use (see fig. 16). Among the consumer goods that the United States and China traded with Vietnam, wooden bedroom furniture was the top U.S. import and nuts were the top U.S. export, while cameras were the top Chinese import and women’s and girls’ cotton jackets and blazers were the top Chinese export.
Among the industrial goods that the United States and China traded with Vietnam, portable digital automatic data processing machines were the top U.S. import and cotton was the top U.S. export, while microchips were the top Chinese import and phone-set parts were the top Chinese export. U.S. agencies have identified certain official development assistance to Indonesia and Vietnam as trade capacity building assistance. This assistance addresses, for example, the countries’ regulatory environment for business, trade, and investment; constraints such as low capacity for production and entrepreneurship; and inadequate physical infrastructure, such as poor transport and storage facilities. In fiscal years 2009 through 2013, U.S. agencies provided about $373 million in trade capacity building assistance to Indonesia (see table 5). As table 5 shows, three agencies—the Millennium Challenge Corporation (MCC), the U.S. Agency for International Development (USAID), and the Department of Labor (Labor)—provided the largest amounts of U.S. trade capacity building assistance to Indonesia in fiscal years 2009 through 2013. MCC provided about $333 million—about 90 percent of U.S. trade capacity building assistance to Indonesia during this period—as part of a 5-year, $600 million compact with Indonesia. One of the compact’s three projects, the Green Prosperity Project, provides technical and financial assistance for projects in renewable energy and natural resource management to help raise rural household incomes. A second project, the Procurement Modernization Project, is designed to help the government of Indonesia develop a more efficient and effective process for the procurement of goods and services. MCC obligates compact funds when a compact enters into force, disbursing the funds over the 5 years of the compact. As of March 2015, MCC had expended $2.3 million of its $333 million commitment for the Green Prosperity Project and $6 million of its $50 million commitment for the Procurement Modernization Project. USAID provided about $19 million in trade capacity building assistance, among other things, to provide economic policy advisory services to the Indonesian government; strengthen key trade and investment institutions by contributing to a World Bank Fund; and strengthen the Indonesian Ministry of Trade’s capacity to analyze, negotiate, and implement bilateral and multilateral trade agreements. In addition, USAID officials told us that they are working to build and sustain a culture of accountability in Indonesia at the national and subnational levels by, for example, working with the U.S. Department of Justice to train investigators to support Indonesia’s Corruption Eradication Commission. However, according to agency officials, after consultations with Indonesian officials and others knowledgeable about the Indonesian economy, USAID stopped providing direct support for economic and trade policy issues. USAID officials also said that the Indonesian government did not view the support as a priority. Labor provided about $11 million in trade capacity building assistance to improve Indonesia’s compliance with labor standards and its competitiveness in global supply chains, to combat child labor, and to build the capacity of domestic labor organizations. In fiscal years 2009 through 2013, U.S. agencies provided about $32 million in trade capacity building assistance to Vietnam (see table 6).
As table 6 shows, four agencies—USAID, the Departments of the Treasury (Treasury) and State (State), and the U.S. Trade and Development Agency (USTDA)—provided the majority of U.S. trade capacity building assistance to Vietnam in fiscal years 2009 through 2013. USAID provided approximately $20.4 million—about 64 percent of U.S. trade capacity building assistance to Vietnam during this period—to enhance the country’s economic governance. From 2001 through 2010, USAID’s Support for Trade Acceleration projects sought to modernize Vietnam’s commercial laws and legal system to help the country meet its bilateral trade agreement commitments and prepare it to join the World Trade Organization. In addition, the Vietnam Competitiveness Initiative, which began in 2003 and ended in 2013, sought to strengthen Vietnam’s regulatory system and regulatory framework and models for infrastructure development. The Provincial Competitiveness Index, which began in 2013 and is scheduled to end in 2016, assesses and reports on barriers to economic development and doing business in Vietnam. Moreover, USAID’s Governance for Inclusive Growth project—which began in 2013 and is scheduled to end in 2018—seeks to provide assistance relevant to Vietnam’s Trans-Pacific Partnership commitments, among other things. Finally, the Lower Mekong Initiative, encompassing Thailand, Cambodia, Laos, Burma, and Vietnam, supports, among many development efforts, the reduction of the development gap between the more economically developed Association of Southeast Asian Nations countries and less developed countries, such as Vietnam, and also supports regional efforts toward economic integration. Treasury provided, through its Office of Technical Assistance (OTA), about $5.9 million in trade capacity building assistance for several projects to improve Vietnam’s government operations. For example, OTA is currently assisting Vietnam with implementation of International Public Sector Accounting Standards. Previously, OTA provided assistance in the areas of banking supervision, strengthening of tax administration, and debt management. State provided about $2.6 million in trade capacity building assistance, primarily for improving Vietnam’s customs and border control. State’s Export and Border Security Assistance program promotes border security and customs operations by providing training, equipment, vehicles, spare parts, infrastructure, travel to workshops and conferences, translations of key documents such as control lists, and other exchanges. State has provided equipment and training to Vietnamese officials in support of these efforts. USTDA provided about $2 million in U.S. trade capacity building assistance for projects to support potential U.S. investment opportunities. In 2014, USTDA provided $900,000 for a feasibility study—the largest USTDA-funded project in Vietnam that year—for an integrated telecommunications control center for the Ho Chi Minh City urban rail system. In August 2014, Vietnam became the second country to sign a memorandum of understanding with USTDA, under which USTDA will provide training and technical assistance to public procurement officials to implement Vietnam’s revised procurement law. In July 2015, USTDA signed two additional grant agreements with Vietnam for (1) technical assistance and training in support of Vietnam’s efforts to meet civil aviation safety standards and (2) a feasibility study to support the efforts of a Vietnamese private firm to develop an offshore wind power project.
In addition to the contact named above, Emil Friberg (Assistant Director), Charles Culverwell, Fang He, Kira Self, Michael Simon, and Eddie W. Uyekawa made key contributions to this report. Benjamin A. Bolitzer, Lynn A. Cothern, Mark B. Dowling, Justin Fisher, Michael E. Hoffman, Reid Lowe, and Oziel A. Trevino provided technical assistance. | The United States and China have each sought to increase their economic engagement in Southeast Asia. U.S. agencies have identified Indonesia and Vietnam as important emerging U.S. partners that contribute to regional stability and prosperity. Indonesia has the world's 10th largest economy in terms of purchasing power, and Vietnam is one of the most dynamic economies in East Asia. Both the United States and China have established comprehensive partnerships with each country that are designed to enhance their bilateral cooperation in key areas. GAO was asked to examine the United States' and China's economic engagement in Southeast Asia. GAO issued a report on 10 Southeast Asian countries in August 2015. In this report, GAO presents case studies for two of these countries, Indonesia and Vietnam, providing greater detail about the United States' and China's trade and investment, competition, and actions to further economic engagement in the two countries. GAO analyzed publicly available economic data and documentation from 10 U.S. agencies and the Chinese government. The data that GAO reports have varying time periods because of the data sets' limited availability and differing contexts. GAO interviewed U.S., Indonesian, and Vietnamese officials and private sector representatives. This is the public version of a sensitive but unclassified report that is being issued concurrently. GAO is not making any recommendations in this report. Indonesia. In 2014, China's imports from, and exports to, Indonesia exceeded the United States' (see figure). The United States and China compete more often with other countries than with each other in goods exported to Indonesia and win contracts in different sectors. In contrast to the United States, which is not involved in a free trade agreement (FTA) with Indonesia, China is a party to a regional FTA that includes Indonesia and is negotiating the Regional Comprehensive Economic Partnership (RCEP) with Indonesia and 14 other countries. In fiscal years 2009 through 2014, U.S. agencies' financing for exports to, and investment in, Indonesia totaled about $2.5 billion, compared with at least $34 billion in Chinese financing, according to the Department of State. In 2007 through 2012, U.S. foreign direct investment (FDI) of $9.6 billion exceeded China's reported $2.7 billion, according to available data. Vietnam. In 2014, U.S. imports from Vietnam exceeded China's, while Chinese exports to Vietnam exceeded U.S. exports (see figure). As in Indonesia, the United States and China compete more often with other countries than with each other in goods exported to Vietnam and win contracts in different sectors. The United States and Vietnam are both participants in the proposed regional Trans-Pacific Partnership, while China and Vietnam are both parties to a regional FTA and the RCEP negotiations. In fiscal years 2009 through 2014, U.S. agencies' financing for exports to, and investment in, Vietnam totaled about $205 million, compared with at least $4.5 billion in Chinese financing, according to the Department of State. 
In 2007 through 2012, China's reported FDI of $1.2 billion was more than twice the United States' reported FDI of $472 million, according to available data. |
Real-estate taxes in the United States are levied by a number of different taxing authorities, including state and local governments, but mostly by local governments. Local governments, such as counties, can levy and collect taxes on behalf of smaller jurisdictions within their boundaries. For example, a county could collect real-estate taxes on behalf of a city within the county. In 2006, local-government property tax revenue was about $347 billion, compared to about $12 billion for state-government property tax revenue. Local governments can use property tax revenues to fund local services, such as road maintenance and law enforcement. In 2006, property taxes made up an average of 45 percent of general own-source revenue for local governments nationwide. According to the Congressional Research Service, the real-estate tax deduction was the itemized federal income tax deduction most frequently claimed by individual taxpayers from 1998 through 2006; the deduction was claimed on approximately 31 percent of all individual tax returns, and on about 87 percent of all returns with itemized deductions. The real-estate tax deduction provides a benefit to homeowners and also provides an indirect federal subsidy to local governments that levy this and other deductible taxes, since it decreases the net cost of the tax to taxpayers. Deductible real-estate taxes also may encourage local governments to impose higher taxes, which may allow them to provide more services than they otherwise would without the deduction. In 2006, individual taxpayers claimed about $156 billion in real-estate taxes as an itemized deduction. By allowing taxpayers to deduct qualified real-estate taxes, the federal government forfeits tax revenues that it could otherwise collect. Taxpayers can claim paid real-estate taxes as an itemized deduction on Schedule A of the federal income tax return for individuals. In addition, the Housing and Economic Recovery Act of 2008, signed July 30, 2008, included a provision that allowed non-itemizers to deduct up to $500 ($1,000 for joint filers) in real-estate taxes paid for tax year 2008. Taxpayers can also deduct paid real-estate taxes on other parts of the tax return, including as part of a deduction for a home office or in calculating net income from rental properties. For purposes of this report, references to the real-estate tax deduction mean the itemized deduction on Schedule A. Taxpayers may deduct state, local, and foreign real-property taxes from their federal tax returns if certain conditions are met. Taxpayers may only deduct real-estate property taxes paid or accrued in the taxable year. To be deductible, real-estate taxes must be imposed on an interest in real property. Taxes based on the value of property are known as ad valorem. Further, real-estate taxes are only deductible when they are levied for the general public welfare by the proper taxing authority at a like rate against all property in the jurisdiction. Real-estate-related charges for services are not deductible. Examples of such charges for services include unit fees for water usage or trash collection. In addition, taxpayers may not deduct taxes assessed against local benefits of a kind tending to increase the value of their property. Such local benefit taxes include assessments for streets, sidewalks, and similar improvements. However, local benefit taxes can be deductible if they are for the purpose of maintenance and repair of such benefits or related interest charges.
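The indirect subsidy described above works through each itemizing taxpayer’s marginal tax rate: a deducted dollar of real-estate tax reduces federal tax by roughly the marginal rate, lowering the tax’s net cost. The following minimal sketch illustrates the arithmetic with hypothetical figures; it deliberately ignores complications such as the alternative minimum tax and deduction phase-outs.

```python
# Illustrative net-cost arithmetic for an itemizing taxpayer; the tax amount
# and marginal rate below are hypothetical, and interactions such as the
# alternative minimum tax are ignored.
def net_cost_of_tax(real_estate_tax: float, marginal_rate: float) -> float:
    """Out-of-pocket cost after the federal deduction's tax savings."""
    federal_tax_savings = real_estate_tax * marginal_rate
    return real_estate_tax - federal_tax_savings

# A $4,000 deductible real-estate tax costs a 25%-bracket itemizer $3,000 net.
print(net_cost_of_tax(4_000.00, 0.25))  # 3000.0
```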
IRS estimates that on income tax returns for 2001, all overstated deductions taken together resulted in $14 billion in tax loss. IRS estimated the amount of misreporting of deductions, but did not estimate the resulting tax loss for each deduction. However, according to data from IRS’s National Research Program, which is designed to measure individual taxpayer reporting compliance, in 2001 about 5.5 million taxpayers overstated their real-estate tax deductions, which resulted in a total overstatement of about $5.0 billion. The median overstatement was $436, or about 23 percent of the median claimed deduction amount of $1,915. We estimate that 38.8 million taxpayers claimed this deduction in 2001. While about 5.5 million taxpayers overstated their deductions, about 3.3 million understated their deductions. Taken as a whole, these roughly 8.8 million taxpayers misreported their deductions in amounts that produced a net total overstatement of about $2.5 billion—an average of about $285 per misreporting taxpayer. Taxpayers can overstate or understate their real-estate tax deductions in a number of ways. For example, they can overstate their deduction by not meeting such eligibility requirements as property ownership and payment during the tax year, or by inappropriately deducting the same taxes on multiple parts of the income tax return. Taxpayers can also overstate by claiming such real-estate tax-related amounts as local benefit taxes and itemized charges for services, which, as noted earlier, are not deductible. Taxpayers can also understate their real-estate tax deduction. For example, first-time homeowners may understate this deduction because they are not aware that they are entitled to claim it. Similarly, taxpayers who buy and sell a home in the same year could understate this deduction out of confusion over how much in taxes they can deduct for the old and new homes. Our 1993 report found that a majority of the local real-estate tax bills that we reviewed included nondeductible items, such as service charges, in addition to deductible real-estate taxes. Our report also indicated that local governments had increased their use of service charges in reaction to events that had reduced their revenues, such as local laws that restricted growth in real-estate taxes. By increasing user fees to finance services, local governments could keep their tax rates lower. We also reported that some local jurisdictions did not clearly indicate nondeductible items on real-estate tax bills and combined all types of payments (e.g., deductible real-estate taxes and nondeductible charges) into a total amount, which may lead taxpayers to claim this total amount on the bill as deductible and thereby overstate their deduction. Most taxpayers rely upon either paid preparers or tax software to file their tax returns. Recent estimates indicate that about 75 percent of taxpayers used either a paid preparer (59 percent) or tax software (16 percent) to file their 2007 taxes. Any evaluation of the factors that contribute to taxpayers overstating the real-estate tax deduction would need to take paid preparers and tax software into consideration. To describe factors that contribute to the inclusion of nondeductible charges in real-estate tax deductions, we conducted a number of analyses and spoke with various external stakeholders, as follows. To determine what information local governments report on real-estate tax bills relating to federal deductibility, we surveyed a generalizable sample of over 1,700 local governments.
We also reviewed about 500 local-government real-estate tax bills provided to us by survey respondents. We also interviewed officials with organizations representing local governments, including the National Association of Counties; the National Association of County Collectors, Treasurers, and Financial Officers; and the Government Finance Officers Association. To determine what mortgage servicers report on mortgage documents, we interviewed representatives from the mortgage industry from the Consumer Mortgage Coalition, the Mortgage Bankers Association, and the three largest mortgage servicing companies in 2007. We reviewed three IRS publications for tax year 2007 that provide guidance to individual taxpayers claiming the real-estate tax deduction as an itemized deduction on their federal income tax returns: the instructions for IRS Form 1040, Schedule A, the form and schedule where taxpayers can deduct real-estate taxes and other items from their taxable income; IRS Publication 17, which provides information for individuals on general rules for filing a federal income tax return; and IRS Publication 530, a guide for homeowners. We checked whether each of these publications explained the factors that taxpayers need to consider in determining deductibility and guided taxpayers on where they could obtain additional information necessary for determining deductibility. To determine the extent that tax-preparation software and paid professional tax preparers assisted taxpayers in only claiming deductible real-estate taxes, we reviewed online software versions of the three largest tax-preparation software programs in 2008—TaxAct, TaxCut, and TurboTax—and interviewed representatives from those three companies and representatives from the National Association of Enrolled Agents. We used the results of our survey of over 1,700 governments to determine the extent to which local governments send real-estate tax bills with certain generally nondeductible charges. To get an indication of the extent to which taxpayers may be overstating their real-estate tax deductions by including such nondeductible charges, we conducted case studies on five large local governments, collecting and analyzing tax data from them and IRS. Specifically, we worked with IRS to determine which charges on the five local governments’ tax bills were likely deductible. While conducting these five case studies of taxpayer noncompliance in claiming the real-estate tax deduction, we identified challenges in determining what charges qualify as deductible real-estate taxes. Then, to the extent possible, for two jurisdictions we compared the amounts that were likely deductible to the amounts the taxpayers claimed as deductions on Schedule A of their 2006 federal tax returns. Appendix III provides details on the methodology for these case studies, including jurisdiction selection. To describe the extent that IRS examinations of the real-estate tax deduction focus on potential overstatements due to taxpayer inclusion of nondeductible charges, we reviewed IRS guidance for examiners related to the real-estate tax deduction, and interviewed IRS examiners about the standard procedures and methods they use for auditing this deduction. We reviewed guidance in the Internal Revenue Manual, which serves as the handbook for IRS examiners, to determine how clearly it instructs examiners to verify the deductibility of charges on real-estate bills when auditing this deduction.
Our interviews with IRS examiners focused on the extent to which examiners determine the deductibility of charges on real-estate bills when auditing this deduction, challenges faced by examiners auditing this deduction, and whether examiners have information about local jurisdictions with large nondeductible charges on their real-estate tax bills. The examiners we interviewed included examiners and managers based in IRS offices across the United States. To assess possible options for improving voluntary taxpayer compliance with the real-estate tax deduction, we interviewed members of organizations representing local governments, as well as IRS officials, about potential options. We also identified potential options along with their benefits and trade-offs based on our other work for this report. We conducted this performance audit from October 2007 through May 2009, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Local governments generally do not inform taxpayers what charges on real-estate tax bills qualify as deductible real-estate taxes, which creates a challenge for taxpayers attempting to determine what they can deduct. Groups representing local governments told us that local governments do not identify on real-estate tax bills which charges are deductible, and our review of almost 500 real-estate tax bills supplied by local governments supports this. In our review, we found no instances where the local government indicated on the bill what amounts were deductible as real-estate taxes for federal tax purposes. Furthermore, while IRS requires various entities to provide information about relevant federal tax items to both taxpayers and IRS on statements known as information returns, local governments are not required to provide information returns on real-estate taxes paid. Local government groups told us that local governments do not identify what taxes are deductible because they cannot easily determine whether their charges meet federal deductibility requirements. They said that local government tax collectors do not have the background or expertise to determine what items are deductible according to federal income-tax law and may lack information necessary for making such determinations for charges billed on behalf of another taxing jurisdiction. As a result, local governments did not want to make such determinations. Taxpayers with mortgages may also receive information about real-estate tax bill charges paid on their behalf by mortgage servicers, but this information generally does not identify what taxes can be deducted. To protect a mortgage holder’s interest in a mortgaged property, mortgage servicers often collect funds from property owners whose mortgages they service (borrowers) and hold them in escrow accounts. They then draw from the funds to pay real-estate taxes and related charges on the properties as they are due. Mortgage servicers provide borrowers with annual statements summarizing these and other deposits and withdrawals of escrow account funds. In addition, mortgage servicers have the option of reporting such escrow payments on information returns relating to paid mortgage interest, but can choose to report other information instead.
Mortgage industry representatives we spoke with stated that when reporting escrow payments, mortgage servicers usually report the total amount paid at any given time to local governments from escrow accounts and do not itemize the specific types of charges paid for, regardless of the statement used. As a result, any nondeductible charges paid for would be embedded in this payment total and reported as “property taxes” or “real-estate taxes” on mortgage servicer documents, including IRS forms. According to mortgage industry representatives, mortgage servicers only report a total because most only track and receive information on the total payment amount due. Mortgage servicers are interested in total amounts because local governments can place a lien on a mortgaged property if all billed charges are not paid. In addition, not all mortgage servicers receive detailed information about charges. Our survey of local governments on real-estate tax billing practices showed that an estimated 43 percent of local governments provide mortgage companies with only total amounts owed for requested properties. That annual mortgage statements report only totals is significant because not all property owners receive tax bills. Based on responses to our local government survey, an estimated 25 percent of local governments do not send property owners a copy of their tax bill if the taxpayer escrows their taxes through a mortgage company. Even though real-estate tax bills do not indicate what charges are deductible, tax bills can contain information on the types of charges assessed on a property, which is a starting point for taxpayers in determining what they can deduct. In the absence of information identifying deductible real-estate taxes, determining whether certain amounts on the tax bills are deductible can be complex and require significant effort. Taxpayers generally cannot be assured that their real-estate tax bill has enough information to determine which of the charges listed are deductible for federal purposes. Deductible real-estate taxes are any state, local, or foreign taxes on real property levied for the general public welfare by the proper taxing authority at a like rate against all property in the jurisdiction. Charges for services and charges for improvements tending to increase the value of one’s property are generally not deductible. However, even if a real-estate tax bill labels a charge as a “tax” or “for services,” the designation given by a local government does not determine whether a charge on a real-estate tax bill is deductible. For example, a charge that is labeled a tax on a local real-estate tax bill, but is not used for public or governmental purposes such as police or fire protection, likely would not be deductible; whereas a charge that is labeled a fee could be considered a deductible tax if the charge is imposed at a uniform rate based on the value of the real estate and is used for the general public welfare. Complicating the matter is that local benefit taxes, which are generally not deductible, can be deductible if the revenue raised is used to maintain or repair existing improvements. Figure 1 depicts some of the questions that taxpayers need to be able to answer for each real-estate-tax-related charge they wish to deduct. Taxpayers who are unsure how to answer these questions (as well as others) with respect to a given charge cannot be assured of the charge’s deductibility.
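Although figure 1 itself is not reproduced here, the screening questions it depicts can be summarized as simple decision logic. The sketch below is a simplified paraphrase of those questions, not a complete statement of the law; the field names are hypothetical, and a real determination can turn on facts this sketch omits.

```python
# Simplified sketch of the figure 1 screening questions; field names are
# hypothetical and the logic intentionally omits edge cases.
from dataclasses import dataclass

@dataclass
class Charge:
    levied_by_taxing_authority: bool   # imposed by the proper taxing authority?
    ad_valorem: bool                   # based on the property's assessed value?
    like_rate_on_all_property: bool    # levied at a like rate jurisdiction-wide?
    for_general_public_welfare: bool   # proceeds fund general government services?
    charge_for_service: bool           # e.g., per-unit trash or water fees
    local_benefit_assessment: bool     # e.g., assessment for new streets or sidewalks
    for_maintenance_or_repair: bool    # exception that restores deductibility

def likely_deductible(c: Charge) -> bool:
    if c.charge_for_service:
        return False  # charges for services are not deductible
    if c.local_benefit_assessment and not c.for_maintenance_or_repair:
        return False  # local benefit taxes are generally not deductible
    return (c.levied_by_taxing_authority and c.ad_valorem
            and c.like_rate_on_all_property and c.for_general_public_welfare)

# A flat per-household trash fee fails immediately, whatever its label on the bill.
trash_fee = Charge(True, False, False, False, True, False, False)
print(likely_deductible(trash_fee))  # False
```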
Because determining what qualifies as deductible can be complex, we asked IRS’s Office of Chief Counsel to help us determine the deductibility of amounts on tax bills in five large local governments as part of case studies on taxpayer compliance with the real-estate tax deduction. We asked attorneys in IRS’s Office of Chief Counsel what information they would need to determine whether charges that appear on real-estate tax bills in the jurisdictions were deductible. IRS’s Office of Chief Counsel indicated that it would need information on the questions indicated in figure 2. To provide IRS with this information, we searched local government Web sites for information on each charge that appeared on tax bills. We also interviewed local government officials, collected and analyzed additional documentation related to the charges, and identified sections of local statutes that provided the authority to impose the charges on the local tax bills. We summarized this information in summary documents that totaled over 120 pages across the five selected local governments. Despite this level of effort, the information was not sufficient to allow IRS to make a judgment as to the deductibility of all of the charges in the five selected jurisdictions. While local government officials we spoke with provided us with significant support in our research, some of the information we asked for was either unavailable or impractical to obtain due to format or volume. The main challenge we faced was that each of the five local governments had over 100 taxing districts—cities, townships, school districts, special districts, etc.—and gathering detailed information for each district, such as how each district calculates the rate it charges, was difficult and time-consuming. As a result, IRS attorneys were not able to make determinations on some charges in three of the five jurisdictions. Because individual real-estate tax bills in these jurisdictions would likely include only a subset of the amounts we researched, taxpayers in these jurisdictions would not necessarily need to apply the same total level of effort that we did. However, they would still face similar challenges in determining whether the amounts on their tax bills qualified as deductible. For example, one county official told us that not all charges are itemized on their tax bills and as a result, it is nearly impossible for a taxpayer in her county to find out the nature and purpose of those charges for which they are assessed. IRS instructions and guidance for taxpayers on claiming the real-estate tax deduction explain generally what taxpayers can deduct, but lack more specific information on how to comply. IRS instructions for claiming the real-estate tax deduction on the federal income-tax return for individuals explain that real-estate taxes are deductible if they are based on the value of property, they are assessed uniformly on property throughout the jurisdiction, and proceeds are used for general governmental purposes. The instructions also indicate that per-unit charges for services and charges for improvements that tend to increase the value of one’s property are generally not deductible. The IRS general guide for individuals filing an income tax return and the IRS guide for first-time homeowners similarly explain what taxpayers can deduct, and also provide examples of nondeductible charges for services and local benefit taxes. 
However, these three IRS publications do not inform taxpayers that they should check both real-estate tax bills and available local government resources with information about the nature and purpose of specific charges. While the two IRS guides alert taxpayers that they should check real-estate tax bills, IRS’s instructions for deducting real-estate taxes are silent on what taxpayers need to check. None of the publications inform taxpayers that they may also need to consult local government Web sites, pamphlets, or other available documents with information about the nature and purpose of specific charges to determine what amounts qualify as deductible real-estate taxes. Without specific instruction to do otherwise, taxpayers could believe that they are getting deductible amounts from their mortgage servicer. Searching for more information may not be conclusive for all charges, but may be sufficient for determining the deductibility of many charges, as we found while examining charges in five local governments with IRS. Similarly, even though some taxpayers may be unable to determine the deductibility of a few charges on their tax bills after consulting available local government resources, they likely need such information on other charges to comply with requirements of the real-estate tax deduction. Taxpayers need to know that they may need to consult available local government resources because more information may be required before they can determine which charges they can deduct from their tax bill. Tax-preparation software and assistance provided by paid tax preparers may not be sufficient to help ensure that taxpayers only deduct qualified real-estate taxes. At the time of our review, two of the three most frequently used tax-preparation software programs for 2008—TaxAct, TaxCut, and TurboTax—did not alert taxpayers to the fact that not all charges on real-estate tax bills may qualify as deductible real-estate taxes. The sections of these two programs where users entered real-estate taxes paid lacked an alert informing users that not all charges that appear on a real-estate tax bill may qualify as deductible real-estate taxes. While all three of the programs contained information about what qualified as deductible real-estate taxes in various screens, users had to proactively click on buttons to access these sections to learn that some charges on their tax bill may not have been deductible. One software-program representative indicated that alerts need to be carefully tailored to have the intended effect. He cautioned that too much information can actually have undesirable effects that do not lead to improved compliance. Specifically, to the extent that they are not relevant to taxpayers whose bills do not contain nondeductible items, overly broad or irrelevant alerts can result in taxpayers reading less, thereby creating confusion, causing errors to be made, and unnecessarily increasing taxpayer burden by increasing the time and complexity involved in taxpayers preparing their returns. Nevertheless, software-program representatives we spoke with were receptive to potential improvements that could be made to their software programs. Prior to our review, none of the three largest software programs contained an alert informing users that not all items on real-estate tax bills may be deductible.
In addition, one of the three programs did not discuss the fact that deductible real-estate taxes are based on the assessed value of property and that charges for services and local benefit taxes are generally not deductible. In response to our discussions with them on these issues, all three tax software programs made changes to their programs. One program added an alert to users indicating that not all charges on real-estate tax bills may be deductible and the other two programs added information about what qualifies as real-estate taxes or made such information more prominent in the guidance accessible from their sections on real-estate taxes. Paid preparers we spoke with indicated that they invested only limited time and energy making sure that taxpayers included only qualified real-estate taxes in their deductions. Most taxpayers do not understand that some charges assessed against a property may not be deductible, and often only provide preparers with mortgage interest statements or cancelled checks to local governments that contain only total payment amounts, making it difficult for the preparers to identify potentially nondeductible charges. Some preparers indicated that from their experience such charges are relatively small, and may have negligible impacts on a taxpayer’s tax liability, especially after other parts of the tax return are considered. As a result, even if they thought that clients may be claiming nondeductible charges, they often did not consider identifying such charges to be worth the effort. The paid preparers that we spoke with also indicated that more information from local governments or IRS on what taxes are deductible would be helpful in improving taxpayer compliance with the deduction. As mentioned earlier, deductible real-estate taxes are generally ad valorem, or based on the assessed value of property. We used the ad-valorem/non-ad-valorem distinction as a rough proxy to indicate potential deductibility in our survey of local governments’ real-estate billing practices. The ad-valorem/non-ad-valorem distinction is not a perfect indicator of deductibility, since, under certain circumstances, some ad-valorem charges could be nondeductible and some non-ad-valorem charges could be deductible. However, based on the information we provided, IRS’s Office of Chief Counsel determined that all non-ad-valorem charges in our case study jurisdictions were not deductible. We estimate that almost half of local governments nationwide included charges on their real-estate tax bills that were generally not deductible, based on responses to our survey. We surveyed a sample of over 1,700 local governments identified as collecting real-estate taxes and asked them whether their real-estate tax bills included non-ad-valorem charges, that is, charges that are not based on the value of property and therefore generally not deductible. Examples of such charges include fees for trash and garbage pickup. Based on responses, we estimate that 45 percent of local governments nationwide included such charges on their tax bills. The property taxes collected by local governments with non-ad-valorem charges on their bills represented an estimated 72 percent of the property taxes collected by local governments nationwide. Of the local governments surveyed that included non-ad-valorem charges on their bills, only 22 percent reported that they label such charges as non-ad-valorem.
As a result, even if taxpayers owning real estate in the other 78 percent of these locations review their tax bills, they may not be able to identify which charges, if any, are non-ad-valorem and likely nondeductible. In identifying how much taxpayers may have overstated real-estate tax deductions by claiming nondeductible charges, we encountered data limitations that constrained our analysis and made it impossible to develop nationwide estimates of these overstatements. Some of the main limitations follow: The jurisdictions we selected did not maintain their tax data in a way that allowed us to itemize all of the charges on individuals’ tax bills. They also did not always maintain information on those charges necessary for IRS and us to determine deductibility. As a result, we were not able to account for all potentially nondeductible ad-valorem charges. Similar to the approach we took in our survey of local governments, we categorized all ad-valorem charges as deductible and all non-ad-valorem charges as nondeductible in identifying how much taxpayers overstated their real-estate tax deductions. The selected jurisdictions also did not track the real-estate tax liabilities and payments by individuals’ Social Security number (SSN), which is the unique identifier used in the IRS tax return data for each taxpayer. Consequently, we used available information—name, address, and zip code—to calculate for each taxpayer the total amount billed by the local government and compare the amount billed to the amount claimed as a real-estate tax deduction on Schedule A of the taxpayer’s return. This process was very time- and resource-intensive. We could not explicitly account for other income tax deductions or adjustments to income that could influence the amount taxpayers are eligible to claim on the Schedule A, such as the home-office deduction and rental real-estate income. IRS did not have information readily available on how much taxpayers in our case-study jurisdictions claimed in real-estate taxes as a home-office deduction, nor did it have information on the locations of other rental real-estate properties owned by a taxpayer, which could have been in multiple jurisdictions. We aimed to mitigate these issues by only analyzing records where (1) the amount claimed in the IRS data was roughly equivalent to the total amount billed to the taxpayer in the local government data, or (2) the amount claimed was less than 15 percent greater than the total billed amount. Because of these limitations, we were able to match only 42 percent of the individuals (195,432 of 463,066) who itemized their real-estate tax deductions on their tax returns to the data we received from two counties, as table 1 shows (see app. III for a more detailed discussion of our methodology). The counties—Alameda County, California, and Hennepin County, Minnesota—were among the largest taxing jurisdictions in the United States that had non-ad-valorem charges, such as fees for services, special assessments, and special district charges, on their real-estate tax bills in 2006. Table 2 shows that of the 195,432 matched taxpayer records in the two counties, 56 percent, or 109,040 individuals, had non-ad-valorem charges on their local bills. However, over 99 percent of the Alameda County bills had non-ad-valorem charges compared to only about 10 percent of the Hennepin County bills.
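To make the matching and screening rules above concrete, the sketch below restates them in code. It is a simplified reconstruction, not the analysis we actually ran: the key construction and the tolerance for “roughly equivalent” are illustrative, while the 15 percent ceiling comes from the criteria described above.

```python
# Simplified reconstruction of the matching and overstatement screens described
# above; the key fields and the "roughly equivalent" tolerance are illustrative.
def match_key(record: dict) -> tuple:
    # The actual matching used name, address, and zip code in place of an SSN.
    return (record["name"].strip().lower(),
            record["address"].strip().lower(),
            record["zip"])

def overstatement_bucket(claimed: float, billed_total: float,
                         ad_valorem_total: float) -> str:
    """Classify one matched taxpayer; ad-valorem charges proxy for deductible."""
    if claimed <= ad_valorem_total:
        return "not overstated"
    if claimed <= billed_total * 1.001:  # roughly equivalent to the total billed
        return "very likely overstated"  # overstated by claimed - ad_valorem_total
    if claimed < billed_total * 1.15:    # up to 15 percent above the total billed
        return "likely overstated"
    return "excluded from analysis"      # outside the matching criteria

# A taxpayer who claims the whole bill, including $400 of non-ad-valorem charges:
print(overstatement_bucket(claimed=2_500.00, billed_total=2_500.00,
                           ad_valorem_total=2_100.00))  # very likely overstated
```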
Our analysis of the 109,040 individuals in the two counties who had non-ad-valorem charges on their bills that could be matched to IRS data indicates that almost 42,000 (38.3 percent) collectively overstated their real-estate tax deductions by at least $22.5 million (i.e., “very likely overstated”) for tax year 2006. When one includes over 37,000 taxpayers who had non-ad-valorem charges and overstated their deductions up to 15 percent greater than their total amounts billed in 2006 (i.e., “likely overstated”), the amount of potential overstatement increases to $46.2 million. Table 3 summarizes the results on overstated deductions from claiming nondeductible charges for the two counties. While 72.4 percent of taxpayers (78,916 of 109,040) with non-ad-valorem charges that we could match to tax returns overstated their real-estate tax deduction, these overstatements on average involved only amounts in the hundreds of dollars. According to our analysis, the median “very likely” overstatement was $414 in Alameda County and $241 in Hennepin County. The median “likely” overstatement was $493 for Alameda County and $179 for Hennepin County. It is important to recognize that these overstated deduction amounts are not the tax revenue loss. The tax revenue loss would be much less and depend, in part, on the marginal tax rates of the individuals who overstated their deductions as well as other factors that we did not have the data or resources to model appropriately. Those factors would include the amount of real-estate taxes and local-benefit taxes that should be allocated to other schedules on the tax return and other attributes such as the amount of refundable and nonrefundable credits. As a result, while many taxpayers are erring in claiming nondeductible charges, the small tax consequences of such overstatements may not justify the cost of IRS enforcement efforts to pursue just these claims. IRS’s guidance to examiners does not require them to check documentation to verify that the entire real-estate tax deduction amount claimed on Schedule A of Form 1040 is deductible. Such documentation would indicate whether taxpayers claim nondeductible charges. Rather, IRS’s guidance gives examiners discretion on which documentation to request from taxpayers to verify the real-estate tax deduction. Examiners are authorized to request copies of real-estate tax bills, verification of legal property ownership, copies of cancelled checks or receipts, copies of settlement statements, and verification and an explanation for any special assessments deducted. Because of the discretion in the guidance, examiners are not required to request or examine each form of documentation. The guidance also does not direct examiners to look for all potentially nondeductible charges in real-estate tax bills. Some IRS examiners we interviewed considered Form 1098 for mortgage interest paid to be appropriate documentation if the taxpayer failed to provide a real-estate tax bill because this form could demonstrate that the taxpayer paid the taxes through an escrow account set up with the mortgage company. However, as noted earlier, Form 1098 shows payments to local governments for all real-estate tax-related charges billed, including any nondeductible charges. In other words, Form 1098 does not conclusively demonstrate deductibility.
Rather than focusing on the nature of charges claimed, IRS examinations of real-estate tax deductions focus on other issues, such as evidence that the taxpayer actually owned the property and paid the real-estate taxes claimed during the year in question. IRS examiners told us that they focus on proof of ownership and payment because, in their experience, taxpayer noncompliance with these requirements could result in larger overstatements. For example, a taxpayer residing in the home owned by his or her parent(s) could incorrectly claim the real-estate tax deduction for the property. It is also common for first-time homebuyers to improperly claim the full amount of real-estate taxes paid for the tax year, even though the seller had paid a portion of these taxes. Examinations of the real-estate tax deduction usually take place as part of a broader examination of inconsistent claims across the individual tax return. In examining deductions on the Schedule A, IRS examiners have found cases in which some taxpayers incorrectly include real-estate taxes as personal-property taxes on Schedule A, sometimes deducting the same tax charges as both personal-property taxes and real-estate taxes. Furthermore, IRS examiners might find claims on other parts of the return that prompt them to check the real-estate tax claimed on Schedule A, or find overstated real-estate tax deductions on Schedule A that indicate noncompliance elsewhere on the return. For instance, a taxpayer might claim the real-estate tax deduction for multiple properties on Schedule A, but fail to report any rental income earned from these properties on the Schedule E form, which is used to report income or loss from rental real estate. Also, a taxpayer might claim the total amount of real-estate taxes paid on Schedule A, but improperly claim these taxes again as part of the business expense deductions on the Schedule E or Schedule C forms, or both. IRS guidance instructs taxpayers to deduct only real-estate taxes paid for their private residences on Schedule A, and to deduct any real-estate taxes paid on rental properties on Schedule E. If taxpayers use a part of their private residence as the principal place for conducting business, they should divide the total real-estate taxes paid on the property accordingly, with the portion of real-estate taxes paid for the business deducted on Schedule C. As noted earlier, the format and the level of detail about charges on local real-estate bills vary greatly across local governments. IRS examiners told us that they do not focus on the deductibility of most real-estate charges when auditing real-estate tax deductions because determining deductibility from looking at such bills can take significant time and effort. They also said that when they detect apparent nondeductible charges claimed in the real-estate tax deduction, the amounts are usually small. As a result, the examiners we interviewed generally contended that determining the deductibility of every charge on a bill could be an inefficient use of IRS resources. Examiners reasoned that the amount of nondeductible charges on a real-estate tax bill would have to be quite high to justify an examination and adjustment of tax liability. IRS does not have information about which local governments are likely to have large nondeductible charges on their real-estate tax bills.
IRS examiners also told us that if they had this information, they could use it to target any examination of the real-estate tax deduction toward large deductions claimed by taxpayers in these specific jurisdictions. Several examiners told us that they look for large nondeductible charges that are commonly claimed as real-estate taxes, but they only know about these nondeductible items from personal experience. For example, IRS examiners located in Florida and California indicated that some taxpayers attempt to improperly deduct large homeowners’ association fees as part of the real-estate tax deduction. Absent information about potentially nondeductible charges, some examiners told us that when they are examining a real-estate tax deduction, they might research taxpayer information accessible from the respective county assessor’s Web site, such as information about real-estate bill charges, or from other databases, such as how many properties a taxpayer owns and the amount of taxes paid for each property. Various options could help address one or more of the identified problems that make it hard for individual taxpayers to comply by only claiming deductible charges when computing their real-estate tax deduction, and improve IRS’s ability to check compliance. Given the general difficulty in determining deductibility, one option would be to change the tax code. Changing the tax code could affect both taxpayers who overstate and those who understate their deductions. Depending on the public policy goals envisioned for the real-estate tax deduction, policymakers may wish to consider changes that balance achieving those goals with making it simpler for individuals to determine how much of their total amount for local charges can be deducted. Changing the law to help taxpayers correctly claim the deduction could be done in different ways. However, assessing such changes to the law and their effects was beyond the scope of this review. Thus, we have not included, nor will we further discuss in this report, an option for changing the tax code. Assuming no statutory changes are made to clarify how much of local charges on real-estate tax bills can be deducted, table 4 lists some broad options under three areas involving improved information, guidance, and enforcement to address the problems. The options we discuss are concepts rather than proposals with details on implementation and likely effects. These options would likely affect both those taxpayers who overstate and those who understate their real-estate tax deductions. A combination of these options would be needed to address the four main problems. In considering the options, it is important to know how many individual taxpayers claim nondeductible charges from real-estate tax bills and how much federal revenue is lost. Such knowledge could signal how urgently solutions are needed. However, the extent of taxpayer noncompliance and related federal revenue loss is not known, and we could not estimate this with the resources available for our review. If many taxpayers overstate the deduction and the aggregate revenue loss is high enough, pursuing options to reduce noncompliance would be more important. Conversely, fewer taxpayers making errors and lower revenue losses might lead to a decision to pursue no options, or only those options that have minimal costs and burdens.
Ultimately, policymakers in concert with tax administrators will have to judge whether concerns about noncompliance justify the extent to which options, including those on which we make recommendations, should be pursued to help taxpayers comply. Compliance could be measured in different ways, which could yield better information at increasing cost. For example, IRS has research programs that are designed to measure compliance. One option is to modify IRS’s National Research Program (NRP) studies that IRS planned to launch in October 2007, which were designed to annually examine compliance on about 13,000 individual tax returns. NRP staff could begin to collect information through this annual study to compute how much of the overall amount of noncompliance with claiming the real-estate tax deduction is caused by taxpayers claiming nondeductible charges. If pursued, IRS would need to consider how much additional time and money to invest in its annual research to measure taxpayer compliance in claiming only deductible charges in the real-estate tax deduction. IRS also could consider focusing its compliance efforts on local governments that put large nondeductible charges on real-estate tax bills. Lacking information on the potential compliance gains compared to potential costs and burdens makes it difficult to assess whether most options are justified. Even so, some of these options could improve compliance with the real-estate tax deduction while generating lower costs and burdens for IRS and third parties. Although we did not measure the benefits and costs, the following discussion describes key trade-offs to be considered for each option, such as burdens on IRS, local governments, and other third parties, as well as implementation barriers. Taxpayers are responsible for determining which charges are deductible. The burden to be fully compliant can be significant, depending on how many charges are on the real-estate tax bill, how quickly information can be accessed on how the charge is computed and used, and how long it takes taxpayers to use that information to determine deductibility. In the absence of data, a simple illustration can provide context, recognizing that taxpayer experiences would vary widely. To illustrate, if we use an IRS estimate that roughly 43 million taxpayers claimed the real-estate tax deduction in 2006, and assume that each taxpayer spent only 1 hour to access and use information about charges on the bill to make determinations about deductibility, then a total of 43 million taxpayer hours would be used to calculate this deduction. If we further assume that the value of a taxpayer’s time averaged $30 per hour, which is the figure used by the Office of Management and Budget, the value of this compliance burden on taxpayers for the real-estate tax deduction would total $1.29 billion. The options for providing information about the local charges generally would lessen the burden on individual taxpayers while likely increasing compliance levels. However, depending on the option, the burden would shift to local governments. Although the local-government representatives we interviewed did not have data on the costs for any option and said that the costs and burdens could vary widely across local governments, they had views on the relative burdens for each option. Figure 3 provides a rough depiction of this burden shifting. 
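The illustration above is simple enough to verify directly. The following minimal sketch (in Python, chosen here purely for illustration) restates the arithmetic using only the stated assumptions: 43 million claimants, 1 hour each, and the $30-per-hour figure used by the Office of Management and Budget.

```python
# Back-of-the-envelope restatement of the compliance-burden illustration.
# All three inputs are the assumptions stated in the text, not measured data.
claimants = 43_000_000       # IRS estimate of deduction claimants, 2006
hours_each = 1               # assumed time per taxpayer to check deductibility
dollars_per_hour = 30        # OMB figure for the value of a taxpayer's hour

total_hours = claimants * hours_each         # 43 million taxpayer hours
total_cost = total_hours * dollars_per_hour  # $1,290,000,000
print(f"{total_hours:,} hours valued at ${total_cost / 1e9:.2f} billion")
```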
Given the complexity of determining the federal deductibility of local charges, a problem we found was that taxpayers are not told how much of the total amount of charges on the local bill can be deducted. Two options for reporting information on deductible charges are (1) information reporting, or (2) changing the local real-estate tax bills.
Information reporting on deductible amounts
Requiring information reporting in which local governments determine, in their judgment, which charges are federally deductible and report the deductible amount to their taxpayers and to IRS would provide very helpful information related to deductibility. A barrier to any information reporting is that 19 of the 20 local-government tax collectors that we interviewed did not maintain records by a unique taxpayer identifier, such as the SSN. For IRS to check compliance in claiming only deductible charges, IRS would need an unambiguous way of matching the local data to the federal data, which traditionally relies on the SSN. Local-government representatives said significant challenges could arise in collecting and providing SSNs to IRS, given concerns about privacy, and possible needed changes to state laws. Local-government representatives that we interviewed viewed information reporting as having the highest costs and burdens of the options that we discussed for providing additional information to taxpayers. One example of a potentially high cost that local governments would incur is the cost associated with computer reprogramming to enable them to report the information. One way to reduce the costs for many local governments would be to require information reporting for larger local governments only or for those that have nondeductible charges on their real-estate bills. Requiring information reporting only selectively would eliminate the cost for some local governments, but would not reduce the costs for those that still have to report to IRS and would not eliminate concerns about providing the SSN.
Reporting deductible amounts on local real-estate tax bills
Another option for providing taxpayers with information about deductibility would be to report the deductible amounts only on the local-government bills provided to taxpayers. This would eliminate the concerns about collecting and providing SSNs as well as the costs of reporting to IRS. Local-government representatives we interviewed said that their costs still could be high if major changes are required to local computer systems and bills. For example, they might have to regroup and to subtotal charges based on deductibility. Furthermore, not all local governments provide a copy of their bills to taxpayers who pay their real-estate taxes through mortgage escrow accounts. These taxpayers would need to receive an informational copy of their bills or be alerted to the nondeductible charges in some other manner. Whether providing information on deductibility through information reporting or changing local bills, a major concern for local governments was determining deductibility. Local-government representatives expressed concerns about local governments protecting themselves from legal challenges over what is deductible, given the judgment necessary to determine deductibility. Local-government representatives and officials told us that local governments do not want to become experts in the federal tax code and would oppose making any determination of deductibility without assistance. 
Given local governments’ concern about determining deductibility, local governments could provide information to IRS about the types of charges on their bills and IRS could use that information to help local governments determine deductibility, reducing their burden and concern somewhat while increasing costs to IRS. Even if IRS took on the responsibility of determining the federal deductibility of local-government real-estate charges, local governments probably would still need to be involved. The IRS officials that we spoke with for this review did not have extensive knowledge about charges on local tax bills. Local-government representatives indicated that local governments’ willingness to work with IRS would greatly depend on IRS’s approach. After determining deductibility, IRS and local governments could pursue cost-effective strategies for making information on deductibility available to taxpayers, such as posting this information on their respective Web sites. IRS’s processing costs could be large if tens of thousands of local governments reported on many types of specific charges. Even if IRS had some uniform format for local governments to use in reporting, the amount of information to be processed likely would be voluminous and diverse given variation in local charges. IRS also would incur costs to analyze the information and work with local governments that appear to have nondeductible charges. These IRS costs would vary with the breadth and depth of involvement with the selected local governments. IRS could mitigate costs if it could identify jurisdictions with significant dollar amounts of nondeductible charges, and work only with those jurisdictions. In addition to not being given information on which local charges were deductible, another problem we found was that taxpayers do not receive enough information about the charges on real-estate tax bills to help them determine how much to deduct. Knowing about the basis for the charges, how the charges were used, and whether they applied across the locality are key pieces of information that could help taxpayers determine deductibility. We found that some local governments provided some of this information on their real-estate tax bills but many did not. An alternative for informing taxpayers about local charges would be for local governments to identify which charges on their tax bills are ad valorem and non-ad valorem. Our work with IRS attorneys on the charges on tax bills in five large counties indicated to us that non-ad-valorem charges usually would be nondeductible because they generally are not applied at a uniform rate across a locality. Similarly, many ad-valorem charges would be deductible but with exceptions, such as when charges were not applied at a uniform rate across the locality or when they generated “local benefits” for the taxpayer. Because not all ad-valorem charges are deductible and not all non-ad-valorem charges are nondeductible, taxpayers still would be required to make the determinations. If taxpayers claimed only the ad-valorem charges listed on their bills, compliance would likely improve for those who otherwise would deduct the full bill amount that includes nondeductible charges. Local governments that do not currently differentiate ad valorem from non-ad valorem charges would incur costs that would vary with how much the bill needs to change and the space available to report the information. 
However, representatives of local governments with whom we spoke saw this option as less burdensome than determining and reporting the deductible amounts. A final option involving information on local tax bills could generate the lowest costs but would provide less information for taxpayers than other options related to changing local tax bills. That option is for local governments to place disclaimers on real-estate tax bills to alert taxpayers that some charges may not be deductible for federal income tax purposes. Local-government representatives said that the direct costs would be minimal to the extent that the disclaimer was brief and that space was available on the bill. Adding pages or inserts to the bill would increase printing, handling, and mailing costs. Because the disclaimers would not provide any information to taxpayers to help them determine deductibility, some taxpayers would likely seek that information by calling the local governments. Handling a large volume of calls could be costly for local governments. Even if taxpayers were to receive more information about the local charges on their real-estate bills, we found that taxpayers may not receive enough guidance from IRS and third parties to help them determine how much to deduct and to alert them to the presence of nondeductible charges. For example, although IRS’s guidance to taxpayers discusses what qualifies as deductible real-estate taxes, we found a few areas in which it was incomplete given that determining deductibility can be complex. Furthermore, third parties in the mortgage and tax-preparation industries did not regularly alert taxpayers through disclaimers and other information that not all charges may be deductible. Options for helping taxpayers to apply information in order to determine which local charges are deductible include (a) enhancing IRS’s existing guidance to individual taxpayers, and (b) having IRS engage in outreach to the mortgage-servicer and tax-preparation industries about nondeductible charges and about any enhanced IRS guidance. Although IRS’s guidance publications provided basic information to taxpayers about what could be deducted as a real-estate tax and the types of charges that could not be deducted, we found areas that, if improved, might help some taxpayers to comply. Those include (1) placing a stronger disclaimer early in the guidance to alert taxpayers about the need to check whether all charges on their real-estate tax bill are deductible (across the IRS publications we reviewed, such an explicit disclaimer either was made near the end of the guidance or not at all); (2) clarifying that a real-estate tax bill may not be sufficient evidence of deductibility if the bill includes nondeductible charges that are not clearly stated (our work showed that some bills could not be relied upon to prove deductibility, but we found nothing that explicitly told taxpayers that they could not always rely on the bills as such evidence); and (3) providing information or a worksheet on possible steps to take to obtain information about whether bills include nondeductible charges and what those charges are (to the extent that taxpayers may not know where to find the information necessary to determine whether any charges on their local bills are nondeductible, the guidance could suggest steps to help taxpayers start to get the necessary information). 
The cost of IRS enhancing its guidance would vary based on the extent that IRS made changes in its written publications and electronic media, but these changes would not necessarily be costly to make. Taxpayer compliance could improve for those who have nondeductible charges on their local bills but who are not aware of the nondeductible charges and how to find them. Taxpayers also could spend some time and effort to discover whether any of the local charges are nondeductible, but that time and effort would largely be a onetime investment unless the local government changes the charges on the real-estate bills from year to year. IRS could conduct outreach to two types of third parties that provide information or offer assistance to individual taxpayers about the real-estate tax deduction. First, IRS could engage mortgage servicers in how they might alert taxpayers that real-estate payments made through escrow accounts could include nondeductible charges, including those reported on IRS forms. The trade-offs discussed for putting disclaimers on local real-estate tax bills would apply here as well. Mortgage servicers would likely use a generic disclaimer on all escrow statements because currently the servicers do not receive information about nondeductible local charges that appear on the bills and usually only receive total amounts to be paid. However, if mortgage servicers happen to receive itemized information about local charges from local governments, they could report these details on escrow statements to inform taxpayers who may not receive a copy of their local real-estate bill because their local charges are paid through the escrow. Doing so would generate some computing costs for the servicers. Also, IRS could reach out to the tax-preparation industry—those who develop tax-preparation software as well as those who help individuals prepare their tax returns. The goals would be to ensure that those who provide guidance to taxpayers are alerted to the potential presence of nondeductible charges on real-estate tax bills and to ensure that they understand IRS’s guidance, particularly if it is enhanced. IRS also could solicit ideas on ways to improve guidance to help individual taxpayers. The tax-preparation software companies could incur some costs if conversations with IRS result in revisions to their software. Other types of tax preparers, such as enrolled agents, would likely not incur many monetary costs but may experience resistance from individual taxpayers who do not wish to comply. If the implementation barriers to information reporting on this deduction were resolved and local governments were required to report information on real-estate taxes to IRS, IRS could expand its existing computer-matching system to include the real-estate tax deduction. If this option were chosen, IRS would incur the costs of processing and checking the adequacy of the local data, developing matching criteria, generating notices to taxpayers when significant matching discrepancies arise, and providing resources to interact with taxpayers who respond to the notices. However, such matching programs have proven to be effective tools for addressing compliance. IRS already conducts tens of thousands of examinations annually that check compliance in claiming the real-estate tax deduction. IRS could do more examinations of this deduction. 
However, the costs involved may not be justified given the current lack of information about the extent of noncompliance caused by claiming nondeductible charges and the associated tax loss. Given that IRS is already doing so many examinations that audit the real-estate tax deduction, an option that could be less burdensome for IRS would be to ensure that its examiners know about this issue of nondeductible local charges whenever they are assigned to audit the deduction. Specifically, IRS could require its examiners to verify the deductibility of real-estate charges claimed whenever the examiners are examining a real-estate tax deduction with potentially large, unusual, or questionable nondeductible items. Currently, examiners have the discretion to request evidence on the deductibility of real-estate charges, but are not required to request it. Furthermore, the guidance to examiners lists cancelled checks, mortgage escrow statements, Forms 1098 on mortgage interest amounts, and local government real-estate tax bills as acceptable types of evidence of deductibility. However, none of these documents necessarily confirm whether all local charges can be deducted. Since IRS is already examining the deduction, the marginal cost to IRS would stem from the fact that some examinations might take slightly longer if examiners take the time to ask taxpayers to provide the correct type of evidence to substantiate their real-estate tax deduction. However, this cost could be justified to ensure compliance with the existing law. IRS also may incur some costs to expand its existing training if examiners are not adequately informed about the deduction. We identified one option that cuts across the problems facing both taxpayers and IRS and targets actions in the three areas of improving information, guidance, and enforcement. As discussed earlier, local governments could provide IRS a list of the types of charges on local real-estate tax bills that IRS could then use to help local governments determine deductibility if some charges appear to be nondeductible. However, that would impose reporting costs on all local governments and could inundate IRS with a lot of information to process, analyze, and use. In this crosscutting option, IRS would limit its data collection to larger local governments that have apparent larger nondeductible charges on their real-estate tax bills. Our work initially focused on 41 of the largest local governments because they were most likely to have large property tax revenue and because smaller local governments would have a harder time compiling the information. IRS could choose from a number of ways to identify larger local governments that appear to have larger nondeductible charges on their bills. A starting point could be the Census data we used to identify those local governments that collect the most property tax (see app. III of this report). Using these data, IRS could identify the larger local governments on which IRS could focus its data-collection efforts. 
For example, as an alternative to, or in addition to, requiring local governments to report the types of charges listed on their local bills, as discussed earlier, IRS could send a survey to selected local governments; collect the data through its annual NRP research on individual tax compliance for a sample of tax returns; choose to do a separate research project; collect data from annual operational examinations that touch on the real-estate tax deduction; or query its employees on the types of charges on their own local tax bills. Having received information from local governments, IRS could identify local governments whose bills have nondeductible charges that are large and unusual enough to make noncompliance and larger tax revenue losses likely to occur. Knowing which local governments have large nondeductible charges, IRS could also consider whether and how to use the data in a targeted fashion. IRS’s costs would vary with the uses pursued and the number of local governments involved. IRS could use these data to design compliance-measurement studies for those localities; begin outreach with these local governments to help determine deductible charges and help affected taxpayers correctly compute the deduction; target guidance such as mailings or public service announcements to direct taxpayers to a list of nondeductible charges, or create a tool to help taxpayers determine a deductible amount for a locality; conduct outreach with other third parties such as tax preparers and mortgage servicers to help them better inform and guide taxpayers; and check the real-estate tax deduction for individual tax returns that have been selected for examination from taxpayers in those localities or, at a minimum, use the information when considering whether to examine one of these returns. To fully comply with the current federal law on deducting local real-estate taxes, many individual taxpayers would need to apply significant effort to determine whether all charges on a real-estate tax bill are federally deductible. However, it is likely that some taxpayers do not invest sufficient time or energy in trying to comply with federal law for determining deductibility, or may not understand how to comply, or both. Nevertheless, the total compliance burden taxpayers would bear to properly comply is one useful reference point for judging the merits of alternative means of increasing compliance. Taxpayers are responsible for determining which charges are deductible, and the burden to be fully compliant can be significant. This burden to properly comply with current federal law could be shifted from taxpayers to local governments, IRS, or third parties, or some combination of each. Along a continuum, this burden shifting could be major, such as through information reporting, or fairly minor, such as through providing taxpayers with better information or guidance to help them determine deductibility. In either case, taxpayer compliance is likely to improve and the overall compliance burden to society could possibly be lower to the extent that IRS, local governments, and other third parties can reduce the costs of overall compliance through economies of scale. Because the extent of the compliance problem is not known and some of the options we identified could significantly increase local-government or IRS burdens in order to achieve significant compliance gains, a sensible starting point is options that impose less burden shifting. 
Providing taxpayers better guidance on how to comply, including the information sources they need to consider, is among the least burdensome and costly means to address noncompliance with the real-estate tax deduction. Because taxpayers still would have to exercise considerable effort to comply fully, improved guidance may not materially reduce noncompliance. Providing taxpayers somewhat better information, such as real-estate bills that clearly identify ad-valorem and non-ad-valorem charges would shift more burden to local governments, but likely would have a larger effect on reducing noncompliance. Providing taxpayers traditional information reports, that is, documents that clearly identify federally tax deductible charges, would shift considerable burden to local governments and possibly IRS, but also would considerably reduce taxpayers’ compliance burden and likely result in significant compliance gains. If local governments, possibly with IRS assistance, could determine deductibility for less cost than the sum of each taxpayer’s costs in doing so, the net compliance burden for society may go down even as compliance increases. Significant reductions in noncompliance might also be achieved with minimum shifting of burdens through targeted use of the identified options for addressing noncompliance. Targeting, however, requires information about localities where there are significant risks of taxpayers claiming large nondeductible charges. If IRS learned which jurisdictions have the largest dollar amounts of nondeductible charges on their bills, it could take a number of targeted actions, such as outreach to the local governments to help them determine deductible charges, targeted outreach to taxpayers in those jurisdictions to help them correctly compute the deduction, targeted outreach to the tax-preparation and mortgage-servicer industries, and targeted examinations of the real-estate tax deduction in these localities. Low-cost options are available to obtain this information, such as collecting tax bills as part of examinations of the real-estate tax deduction that already occur annually. In terms of IRS’s examinations, IRS could send a more useful signal to taxpayers of the importance of ensuring that only deductible real-estate taxes are claimed if IRS examinations more frequently covered which charges are deductible. At a minimum, IRS can take steps to ensure that its examiners know about the problems with nondeductible charges and how to address the noncompliance. We are making 10 recommendations to the Commissioner of Internal Revenue: To enhance IRS’s guidance to help individual taxpayers comply in claiming the correct real-estate tax deduction, we recommend that the Commissioner of Internal Revenue place a stronger disclaimer early in the guidance to alert taxpayers to the need to check whether all charges on their real-estate tax bill are deductible; clarify that real-estate tax bills may be insufficient evidence of deductibility when bills include nondeductible charges that are not clearly stated; and provide information or a worksheet on steps to take to get information about whether bills include nondeductible charges and about what those charges are. 
To help ensure that individual taxpayers are getting the best information and assistance possible from third parties on how to comply with the real- estate tax deduction, we recommend that the Commissioner of Internal Revenue reach out to local governments to explore options for clarifying charges on the local tax bills or adding disclaimers to these bills that some charges may not be deductible; mortgage servicers to discuss adding disclaimers to their annual statements that some charges may not be deductible; and tax-preparation software firms and other tax preparers to ensure that they are alerting taxpayers that some local charges are not deductible and that they are aware of any enhancements to IRS’s guidance. To improve IRS’s guidance to its examiners auditing the real-estate tax deduction, we recommend that the Commissioner of Internal Revenue revise the guidance to indicate that evidence of deductibility should not rely on mortgage escrow statements, Forms 1098, and cancelled checks (which can be evidence of payment), and may require more than reliance on a real-estate tax bill; and require examiners to ask taxpayers to substantiate the deductibility of the amounts claimed whenever they are examining the real-estate tax deduction and they have reason to believe that taxpayers have claimed nondeductible charges that are large, unusual, or questionable. To learn more about where tax noncompliance is most likely, we recommend that the Commissioner of Internal Revenue identify a cost-effective means of obtaining information about charges that appear on real-estate tax bills in order to identify local governments with potentially large nondeductible charges on their bills; and if such local governments are identified, obtain and use the information, including uses such as compliance research focused on nondeductible charges; outreach to such local governments to help them determine which charges are deductible charges and help affected taxpayers correctly compute the deduction; targeted outreach to the tax-preparation and mortgage-servicer industries, and targeted examinations of the real- estate tax deduction in the localities. On April 22, 2009, IRS provided written comments on a draft of this report (see app. IV). IRS noted that the report accurately reflects the difficulty that many taxpayers face when local jurisdictions include nondeductible charges on real-estate tax bills, particularly when these charges can vary and are not described in detail. IRS also noted that determining deductibility can be complex and that neither the local real-estate tax bills nor mortgage service documents tell taxpayers what amounts are deductible. IRS agreed with 7 of our 10 recommendations and identified actions to implement them. Specifically, IRS agreed with 2 recommendations on enhancing guidance to taxpayers, saying it would change various publications to (1) highlight an alert to taxpayers to check for nondeductible charges on their real-estate tax bills and (2) caution that the bills may be insufficient evidence of deductibility. IRS also agreed with three recommendations on outreach to third parties to ensure that taxpayers are getting the best information possible to comply in claiming the real-estate tax deduction. IRS agreed to contact local governments, mortgage servicers, and tax software firms to explore options to alert taxpayers that some charges might not be deductible. IRS also said it would work with local governments to clarify charges on their real-estate tax bills. 
Further, IRS agreed with two recommendations on learning more about where noncompliance in claiming nondeductible charges is most likely and then taking action to improve compliance. IRS agreed to identify a cost-effective way to identify local governments that have potentially large nondeductible charges on their real-estate tax bills. After identifying these local governments, IRS also agreed to reach out to them to help determine the deductibility of their charges and help the affected taxpayers correctly claim the deduction. As part of this set of actions, IRS agreed to reach out to the tax preparation and mortgage servicing industries with customers in these localities. IRS disagreed with three recommendations. However, for one of the recommendations, IRS did agree to take action consistent with the intent of the recommendation. We recommended that IRS enhance its guidance to taxpayers by providing information or a worksheet on steps taxpayers could take to find out if any charges on a real-estate tax bill are nondeductible. IRS said its Publication 17 already had a chart providing guidance on which real-estate taxes can be deducted but agreed to add a caution advising taxpayers that they must contact the taxing authority if more information is needed on any charge. We believe such an action will enhance IRS’s current education efforts related to this issue and may help improve taxpayer compliance, especially if the addition provides guidance on situations in which a taxpayer may need to contact the taxing authority. The other two recommendations IRS disagreed with related to improving IRS’s guidance to its staff who audit the real-estate tax deduction. IRS did not agree to revise the guidance to clarify that mortgage escrow statements, cancelled checks, Forms 1098, and real-estate tax bills may not be sufficient evidence of deductibility. IRS also did not agree that examiners should ask taxpayers for evidence of deductibility whenever they are auditing the deduction and believe that the taxpayers have claimed nondeductible charges that are large, unusual, or questionable. IRS said that the guidance for examiners is sufficient and that examiners are to use their judgment and consider all available evidence in coming to a determination. We appreciate that examiners must exercise judgment about the scope of an audit. However, in reviewing over 100 examination files and in talking with examiners, we found that not all examiners focus on the deductibility of the real-estate charges or ask the taxpayer for adequate evidence of deductibility, even in situations where deductibility may be in question. Therefore, when examiners have reason to believe that taxpayers claimed nondeductible charges that are large, unusual, or questionable, we continue to believe they should ask taxpayers for adequate support. We also continue to believe that the guidance to examiners should clearly state that real-estate bills should be examined and that other information on the nature and purpose of tax bill charges may also be needed. This improved guidance may be especially pertinent when IRS has implemented our recommendations to identify local governments with large nondeductible charges on their bills and to take related actions to help taxpayers comply. 
If IRS does targeted examinations of taxpayers in those localities, the IRS examiners will need to clearly understand what evidence is required to determine the deductibility of the various charges on the real-estate bills to ensure that taxpayers are correctly claiming the real-estate tax deduction. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies to the Chairman and Ranking Member, Senate Committee on Finance; Chairman and Ranking Member, House Committee on Ways and Means; the Secretary of the Treasury; the Commissioner of Internal Revenue; and other interested parties. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions, please contact me at (202) 512-9110 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are found in app. V. To learn about real-estate tax billing practices and the proportion of local government entities with potentially nondeductible charges on their real-estate tax bills, we conducted a mail-based sample survey of 1,732 local governments primarily responsible for collecting real-estate taxes due on residential properties. In designing the sample for our survey, we used the survey population of the U.S. Census Bureau’s Quarterly Property Tax Survey (QPTS) as our sample frame. The QPTS is a mail survey the Governments Division of the U.S. Census Bureau conducts quarterly to obtain information on property taxes collected at the local governmental level. The QPTS is part of a larger data-collection effort that the Census Bureau conducts in order to make estimates of state and local tax revenue. According to QPTS data, 14,314 local governments bill for property taxes. The QPTS itself uses a stratified, one-stage cluster sample of local governments in 606 county areas with 16 strata. In designing a sample based on the QPTS for our survey, we also used a stratified, one-stage cluster design. Specifically, of the 606 county areas included in the QPTS sample, we selected 192 county areas representing 18 strata. Our subsample consists of a random selection of approximately 30 percent of the county areas in the 18 GAO strata, with a minimum of 5 county areas selected in each stratum. All of the local governments within the selected county areas are included in the sample. The total number of local governments included in the sample was 1,732. Before constructing our sample, we checked to make sure that QPTS sample data provided to us by the Census Bureau were internally consistent and reliable for our purposes. In our survey, we asked the local governments whether they included non-ad-valorem charges on their real-estate tax bills, how they differentiated non-ad-valorem charges from ad-valorem charges, and whether and how they alerted taxpayers to the presence of non-ad-valorem charges on the bills. We also asked the local governments for a sample residential real-estate tax bill that included information about all possible charges for which property owners in that jurisdiction could be billed. 
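To make the sample design concrete, here is a minimal sketch of the stratified, one-stage cluster selection described above, written in Python. The data structures, function name, and random draw are illustrative assumptions; GAO's actual frame, stratum definitions, and selection procedure are not reproduced here.

```python
# A sketch of a stratified, one-stage cluster draw: select ~30 percent of the
# county areas (clusters) per stratum, with a minimum of 5 per stratum, then
# include every local government in each selected county area.
import math
import random

def draw_sample(county_areas, governments, rate=0.30, minimum=5, seed=1):
    # county_areas: {county_area: stratum}; governments: {county_area: [gov, ...]}
    rng = random.Random(seed)

    # Group the county areas (the clusters) by stratum.
    strata = {}
    for area, stratum in county_areas.items():
        strata.setdefault(stratum, []).append(area)

    selected_areas = []
    for areas in strata.values():
        # Roughly 30 percent of the county areas, but at least 5 per stratum
        # (and never more than the stratum contains).
        n = min(len(areas), max(minimum, math.ceil(rate * len(areas))))
        selected_areas.extend(rng.sample(areas, n))

    # One-stage cluster design: all local governments in a selected
    # county area enter the sample.
    return [g for area in selected_areas for g in governments.get(area, [])]
```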
We conducted two pretests of our draft survey instrument with officials from Alexandria, Virginia, and Montgomery County, Maryland, to ensure that (1) the survey did not place an undue burden on the respondent’s time, (2) the questions and terminology were clear and unambiguous, (3) the respondents were able to obtain data necessary to answer the survey questions, and (4) our method for requesting sample bills matched any preferences offered by the respondents. In late April 2008, we mailed questionnaires to our survey sample population using addresses of the local government entities provided to us from the Census Bureau’s Governments Division. At the end of May, we sent a reminder letter with an additional copy of the questionnaire to all governments in our survey from which we had not yet received a response. If a survey respondent’s answers required clarification (e.g., if a respondent did not follow the directions given in the survey), a follow-up call was conducted. Survey answers were then edited to reflect the additional information obtained in the calls. Of the 1,732 surveys sent, we received 1,450 responses for an unweighted response rate of 84 percent. Response rates for the jurisdictions in each of our 18 strata ranged from 67 percent to 100 percent. All percentage estimates from our survey are surrounded by 95 percent confidence intervals. In addition to sampling error, the practical difficulties of conducting any survey may introduce errors commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or were analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, a social science survey specialist helped us design the questionnaire. Then, as stated earlier, the draft questionnaire was pretested with two local jurisdictions. Data entry was conducted by a data entry contractor, and a sample of the entered data was verified. Finally, when the data were analyzed, independent analysts checked all computer programs. One of the objectives of this report was to describe factors that contribute to the inclusion of nondeductible items in real-estate tax deductions. In our 1993 report, we determined that one cause of taxpayers overstating their deductions was confusing real-estate tax bills that do not clearly distinguish taxes from user fees. To update our previous work and to determine the extent to which real-estate tax bills currently distinguish between taxes on real property and user fees, we reviewed a sample of real-estate tax bills from local governments across the United States. This appendix outlines the methodology that we used to review these bills. The sample of real-estate tax bills that we reviewed was a subset of the responses to our mailed survey of local governments, which was a stratified, random sample of 1,732 localities (see app. I). A question in our survey asked whether local governments included non-ad-valorem items in their bills, which are generally nondeductible. In another part of our survey, we asked respondents to attach a sample of a real-estate tax bill to their completed survey. We received a total of 1,450 responses to our survey. 
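As a rough check on the figures above, the snippet below recomputes the unweighted response rate and shows a textbook 95 percent confidence interval for a proportion. This simple-random-sampling approximation is only illustrative; GAO's actual intervals reflect the stratified cluster design and survey weights.

```python
# Illustrative only: naive response-rate check, ignoring the survey design.
import math

sent, returned = 1732, 1450
rate = returned / sent  # about 0.84, matching the 84 percent stated above

def ci_95(p, n):
    half = 1.96 * math.sqrt(p * (1 - p) / n)  # normal approximation
    return (p - half, p + half)

low, high = ci_95(rate, sent)
print(f"response rate {rate:.0%}, naive 95% CI ({low:.3f}, {high:.3f})")
```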
We did not generalize the results of this bill review because not all survey respondents provided bills as requested, and because we did not know how the bills that were submitted had been selected by the respective responding governments. We received over 643 bills from governments that included nondeductible charges on their bills. Of these bills, we deemed 486 to be usable. We performed two reviews of the usable bills. First, we used three criteria to determine if a real-estate tax bill clearly distinguished taxes from user fees: 1. Does the bill differentiate ad-valorem from non-ad-valorem charges? 2. Are all the charges in the bill clearly identified and explained? 3. Does the bill contain a disclaimer warning that some of the charges included in the real-estate tax bill may not be deductible for federal tax purposes? A bill met our first criterion if either of the following applied: The bill differentiated by labeling each item as ad valorem or non-ad valorem. The bill provided millage rates for items. A bill met our second criterion if all of the line items were individually broken out AND either of the following applied: Line-item descriptions were spelled out and clearly identified. Additional information or explanations regarding line items were available in paper form or electronically. A bill met our third criterion if either of the following applied: The bill contained a disclaimer stating that all items appearing on the bill may not be deductible. The bill contained a disclaimer stating that taxpayers should consult IRS code and publications or their tax advisor for assistance in determining deductibility. Through our review, we found that about 60 percent of the bills satisfied our first criterion, with almost all of these using millage rates to differentiate ad-valorem from non-ad-valorem charges. Only about 30 percent of bills satisfied our second criterion. The main reason bills did not meet our second criterion was that line-item descriptions were not easily identifiable (e.g., a taxpayer could not determine the respective charge’s use based solely on the information on the bill). None of the bills satisfied our third criterion. In our second bill review, we determined whether the real-estate tax bills provided taxpayers with either of the following: A total for the charges that are deductible for federal income tax purposes. A warning that some of the charges on the bill may be nondeductible for federal income tax purposes. Of the 486 usable bills we reviewed, none satisfied either of these two criteria. Although our sample of real-estate tax bills is not representative of local governments nationally, the results of our review illustrate that many taxpayers would face challenges in determining what is deductible if they were to rely solely on the information provided on their real-estate tax bills. This appendix describes the methodology, including sample selection, we used to (1) determine the deductibility of charges on tax bills in five counties: Alameda County, California; Franklin County, Ohio; Hennepin County, Minnesota; Hillsborough County, Florida; King County, Washington; and (2) calculate the extent of overstated deductions in two of those counties—Alameda County, California and Hennepin County, Minnesota—for tax year 2006. We derived our list of local governments that collect property taxes from the survey population of the U.S. Census Bureau’s Quarterly Property Tax Survey (QPTS). 
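The two bill reviews described above amount to a set of boolean checks. The sketch below encodes the three criteria from the first review; all field names are hypothetical, and each bill would first have to be coded by a human reviewer before this logic could be applied.

```python
# Hypothetical encoding of the three bill-review criteria as boolean logic.
from dataclasses import dataclass

@dataclass
class BillReview:
    labels_ad_valorem: bool         # each item labeled ad valorem / non-ad valorem
    shows_millage_rates: bool       # millage rates given for items
    items_broken_out: bool          # every line item individually listed
    descriptions_clear: bool        # line items spelled out and identifiable
    extra_info_available: bool      # explanations available on paper or online
    disclaimer_deductibility: bool  # warns some charges may be nondeductible
    disclaimer_consult_irs: bool    # directs taxpayer to IRS guidance or an advisor

    def criterion_1(self) -> bool:  # differentiates ad valorem from non-ad valorem
        return self.labels_ad_valorem or self.shows_millage_rates

    def criterion_2(self) -> bool:  # charges clearly identified and explained
        return self.items_broken_out and (
            self.descriptions_clear or self.extra_info_available)

    def criterion_3(self) -> bool:  # deductibility disclaimer present
        return self.disclaimer_deductibility or self.disclaimer_consult_irs
```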
The QPTS sample consists of local governments in 606 county areas with 312 of those counties selected with certainty. The 312 counties had a population of at least 200,000 people and annual property taxes of at least $100 million in 1997. We decided that large counties would be best for this study because they were more likely to have large property tax revenue and to maintain property tax data in electronic formats that we could more easily obtain and manipulate than paper records. We started with the 41 largest counties based on property tax revenue. We randomly sorted these 41 large collectors and picked the first 5 from the sorted list that fit the team’s inclusion criteria: (1) presence of user fees, special assessments, special district taxes, or other non-ad-valorem items on real-estate tax bills for most or all residential property owners; (2) willingness of the local government to participate; and (3) usability and reliability of the data. Using these criteria, we selected Alameda County, California; Franklin County, Ohio; Hennepin County, Minnesota; Hillsborough County, Florida; and King County, Washington for our initial analyses. We collaborated with officials from the Internal Revenue Service’s (IRS) Office of Chief Counsel to determine the deductibility of charges on the five counties’ real-estate tax bills. IRS agreed to review information we provided about the charges on these tax bills in order to provide an opinion on the deductibility of the charges. IRS did not seek additional information from the counties regarding the charges, and IRS based its determinations solely on the materials we submitted. Additional information could result in conclusions different from those IRS reached as a result of the data we provided IRS. Prior to assembling information for IRS’s review, we interviewed officials from IRS’s Office of Chief Counsel to gain a better understanding of what information IRS needed to make the determinations. IRS officials provided a list of the types of information they would need to determine whether a particular assessment levied by a taxing jurisdiction was a deductible real- property tax. Specifically, IRS asked us to provide information related to the following for each charge: (1) Is the tax imposed by a State, possession, or political subdivision thereof, against interests in real property located in the jurisdiction for the general public welfare? (2) Is the assessment an enforced contribution, exacted pursuant to legislative authority in the exercise of the taxing power? Is payment optional or avoidable? (3) The purpose of the charge. Is it collected for the purpose of raising revenue to be used for public or governmental purposes? (4) Is the tax assessed against all property within the jurisdiction? (5) Is the tax assessed at a uniform rate? (6) Whether the payer of the assessment is entitled to any privilege or service as a result of the payment. Is the assessment imposed as a payment for some special privilege granted or service rendered? Is there any relationship between the assessment and any services provided or special privilege granted? (7) Is use of the funds by the tax authority restricted in any way? Are the funds earmarked for any specific purpose? (8) Is the assessment for local benefits of a kind tending to increase the value of the property assessed? Does the assessment fund improvements to or benefiting certain properties or certain types of property? 
If so, is a portion of the assessment allocable to separately stated interest or maintenance charges? IRS officials also indicated that the following materials would be helpful in making their determinations: (1) A copy of the statute imposing the tax. (2) Materials published by the local government or tax-collecting authority describing the levy, including taxpayer guides, publications, or manuals describing the tax. (3) The forms and instructions relating to the tax. (4) A printed copy of the Web pages maintained by the jurisdictions related to the tax. To collect this information, we interviewed county officials and reviewed documentation either provided by county officials or found on county Web sites. Most of the selected counties’ Web sites provided tax rate tables or a list of the taxing authorities for the ad-valorem charges found on the tax bills; some also had information for the non-ad-valorem charges. For each of the year 2006 tax bill charges, we searched the counties’ Web sites and used online search engines to collect supporting documentation. We also searched state constitutions and statutes to identify the legal authority for each charge on real-estate tax bills; to a varying degree, county officials provided citations to the specific statutes that provided the legislative authorities for the charges. In addition to the real-estate tax information found online, we interviewed local tax officials in each of the five local counties to gather the requested information. Based on the materials we submitted, IRS concluded that some charges were deductible, some were nondeductible, and others required additional information for IRS to determine their deductibility. Table 1 below summarizes the results of IRS’s determinations. Using IRS data on real-estate tax deductions claimed by taxpayers in the selected counties and county data on real-estate taxes billed to property owners, we identified how much taxpayers likely overstated their real-estate tax deductions by claiming nondeductible charges in two counties—Alameda County, California, and Hennepin County, Minnesota—for tax year 2006. We restricted our analysis to these two counties due to limitations in resources. While taxpayers can claim deductions for real-estate taxes paid on multiple IRS schedules, we limited our analysis to the amount claimed on IRS Form 1040, Schedule A, which generally does not include deductions for real estate used for business purposes. We used the SAS SQL procedure (PROC SQL) to merge the IRS data to the tax-roll data we received from our two selected counties. To conduct the match, we parsed the last name, first name, street address, city, state, and zip code from the IRS data and the local data. We conditioned the PROC SQL merge to include in the output data set only those records in which the parsed first names, last names, and zip codes matched. Prior to the match, we controlled for taxpayers who own multiple properties within each of our selected jurisdictions by using a unique identifier for each taxpayer and subtotaling the taxpayers’ ad-valorem and non-ad-valorem charges by the unique identifier. To the extent we were able, we used existing numerical identifiers in the data—such as property numbers and account numbers—to produce a subtotal for each taxpayer. When numeric identifiers were not available in the data, we used the parsed name and address fields to create a unique identifier. 
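The match itself was done in SAS with PROC SQL. A rough pandas equivalent is sketched below; the column names are hypothetical, and it assumes the name and address fields have already been parsed as described above.

```python
# A hypothetical pandas translation of the conditioned PROC SQL match.
import pandas as pd

def match_irs_to_county(irs: pd.DataFrame, county: pd.DataFrame) -> pd.DataFrame:
    # Control for owners of multiple properties: subtotal the county's
    # ad valorem and non-ad valorem charges by a unique taxpayer identifier.
    county_totals = (
        county.groupby(["owner_id", "first_name", "last_name", "zip"],
                       as_index=False)[["ad_valorem", "non_ad_valorem"]]
              .sum()
    )
    # Mirror the conditioned join: keep only records in which the parsed
    # first name, last name, and ZIP code all match.
    return irs.merge(county_totals,
                     on=["first_name", "last_name", "zip"],
                     how="inner")
```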
After the PROC SQL merge, we controlled for duplicate records by keeping only those records where the last name, first name, street address, city, state, and zip codes matched. It is still possible that some duplicates exist in the data, since the names and address fields were recorded in disparate ways in the data we received from the counties. We used programming logic to parse the names; due to the inconsistencies in the names and address fields in the data, the name and address information may not have parsed the same way for all taxpayers. For each taxpayer that we were able to match to the county data, we compared the amount the taxpayer claimed as a real-estate tax deduction on the Schedule A return to the total ad-valorem amount each taxpayer was billed by the county and which was due in 2006. We then calculated the difference between the amount claimed on Schedule A and the ad-valorem portion of the amount billed by the county for each taxpayer. As indicated above, we worked with IRS to determine which charges billed by the county were deductible under federal tax law. The counties we selected for analysis did not maintain their tax data in a way that would allow us to itemize all of the charges, particularly the ad-valorem charges, on individuals’ tax bills. As a result, we were not able to take into account ad-valorem charges that may not be deductible in our lower-bound computation of overstated real-estate tax deductions. Instead, we used the ad-valorem portion of the amount billed as a proxy for the deductible amount. While the proxy is imperfect, it is our understanding that the non-ad-valorem charges in our selected counties were not imposed at a uniform rate and thus did not appear to be deductible as taxes under Section 164 of the Internal Revenue Code. Given the limitations of the data, this approach allowed us to take into account those charges that are least likely to be deductible. Also, the approach produced a lower-bound computation of potential noncompliance in our two counties. We can only produce a lower-bound computation due to uncertainty of noncompliance for those taxpayers where we could not match IRS and local records. To develop the lower-bound computations of potential noncompliance, we excluded those taxpayers whose claimed deduction was greater than 1.15 times the total amount billed; this was chosen as a cutoff point to account for taxpayers who may own multiple properties and therefore deduct on their federal tax return a higher amount than is shown on the local tax bills. We also excluded taxpayers whose claimed deduction was less than the ad-valorem portion of the amount billed by the county (within a small margin of error), since we did not have conclusive data to determine whether the taxpayers held only a partial ownership in the real estate covered by the local bill. We then summed the difference between the claimed Schedule A deduction and the ad-valorem portion of the amount billed by the county to develop a lower-bound computation of noncompliance for the population of taxpayers in each county that we were able to match to the county data. For the purposes of our analysis, we created two separate categories for those taxpayers who claimed a deduction that was approximately equal to the billed amount up to 1.15 times the total amount billed. 
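Continuing the hypothetical pandas sketch above, the lower-bound computation and its two exclusions could be expressed as follows. The 1.15 multiple comes from the text; the $1 allowance stands in for the "small margin of error" mentioned above and is consistent with the category definitions that follow.

```python
# Sketch of the lower-bound overstatement computation; column names are
# the same hypothetical ones used in the matching sketch above.
import pandas as pd

def lower_bound_overstatement(matched: pd.DataFrame) -> float:
    # matched: one row per taxpayer successfully linked to county records.
    total_billed = matched["ad_valorem"] + matched["non_ad_valorem"]
    claim = matched["schedule_a_claim"]

    in_scope = (
        (claim <= 1.15 * total_billed)            # screen out likely multi-property owners
        & (claim >= matched["ad_valorem"] - 1.0)  # screen out possible partial ownership
    )
    # Sum of (claimed deduction minus the ad valorem proxy for the deductible
    # amount); small negatives within the margin are clipped to zero so they
    # do not offset overstatements.
    diff = claim[in_scope] - matched.loc[in_scope, "ad_valorem"]
    return float(diff.clip(lower=0).sum())
```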
We defined those taxpayers who claimed a deduction within $2 of the full amount billed, when the bill contained non-ad-valorem amounts, as “very likely overstated.” We defined those taxpayers who claimed a deduction that was greater than the total ad-valorem amount billed (allowing a $1 margin of error) but less than 1.15 times the total billed amount as “likely overstated.” In addition to the contact named above, Tom Short (Assistant Director), Paula Braun, Jessica Bryant-Bertail, Tara Carter, Hayley Crabb, Sara Daleski, Melanie Helser, Mollie Lemon, and Albert Sim made contributions to this report. Stuart Kauffman, John Mingus, Karen O’Conor, and Andrew Stephens also provided key assistance. | The Joint Committee on Taxation identified improved taxpayer compliance with the real-estate tax deduction as a way to reduce the federal tax gap--the difference between taxes owed and taxes voluntarily and timely paid. Regarding the deduction, GAO was asked to examine (1) factors that contribute to taxpayers including nondeductible charges, (2) the extent that taxpayers may be claiming such charges, (3) the extent that Internal Revenue Service (IRS) examinations focus on the inclusion of such charges, and (4) possible options for improving taxpayer compliance. GAO surveyed a generalizable sample of local governments, studied taxpayer compliance in two jurisdictions that met selection criteria, reviewed IRS documents, and interviewed government officials and others. Addressing the complexity of current tax law on real-estate tax deductions was outside the scope of this review. Taxpayers who itemize federal income-tax deductions and whose local real-estate tax bills include nondeductible charges face challenges determining what real-estate taxes they can deduct on their federal income tax returns. Neither local-government tax bills nor mortgage-servicer documents identify what taxpayers can properly deduct. Without such information, determining deductibility can be complex and involve significant effort. While IRS guidance for taxpayers discusses what qualifies as deductible, it does not indicate that taxpayers may need to check both tax bills and other information sources to make the determination. In addition, tax software and paid preparers may not ensure that taxpayers only deduct qualified amounts. There are no reliable estimates for the extent of noncompliance caused by taxpayers claiming nondeductible charges, or the associated federal tax loss. However, GAO estimates that almost half of local governments nationwide included generally nondeductible charges on their bills. While the full extent of overstatement is unknown due to data limitations, GAO estimates that taxpayers in two counties collectively overstated their deductions by at least $23 million (or $46 million using broader matching criteria). IRS examinations of real-estate tax deductions focus more on whether the taxpayer owned the property and paid the taxes than whether the taxpayer claimed only deductible amounts, primarily because nondeductible charges are generally small. IRS guidance does not require examiners to request proof of deductibility or direct them to look for nondeductible charges on tax bills. Various options could improve compliance with the real-estate tax deduction, such as providing taxpayers with better guidance and more information, and increasing IRS enforcement. However, the lack of information regarding the extent of noncompliance and the associated tax loss makes it difficult to evaluate these options. 
If IRS obtained information on real-estate tax bill charges, it could find areas with potentially significant noncompliance and use targeted methods to reduce noncompliance in those areas. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Tongass National Forest covers about 16.8 million acres in southeast Alaska and is the largest national forest in the United States, equal to an area about the size of West Virginia. The U.S. Department of Agriculture’s Forest Service manages the Tongass for multiple uses, such as timber production, outdoor recreation, and fish and wildlife. The Forest Service’s Alaska Region, headquartered in Juneau, Alaska, carries out the management responsibilities. Because of its magnitude, the Tongass is divided into three administrative areas—Chatham, Stikine, and Ketchikan—each having an area office headed by a forest supervisor. Each area office has between two and four ranger districts, headed by a district ranger, to carry out daily operations. In the 1950s, the Forest Service awarded 50-year (long-term) contracts to the Ketchikan Pulp Company (KPC)—now a wholly owned subsidiary of the Louisiana Pacific Corporation—and the Alaska Pulp Corporation (APC)—a Japanese-owned firm—to harvest Tongass timber. As stipulated in their contracts, each company built a pulp mill to process the harvested timber—KPC near Ketchikan and APC in Sitka. In return, the Forest Service guaranteed a 50-year timber supply totaling about 13.3 billion board feet for both contracts. KPC’s contract expires in 2004. APC’s contract was to expire in 2011, but the Forest Service terminated it for breach of contract on April 14, 1994, because APC shut down its pulp mill in September 1993. The Forest Service also sells Tongass timber to companies other than APC and KPC. These companies, referred to as independent short-term contractors, purchase timber under contracts usually lasting 3 to 5 years. Since 1980, about 30 percent of all Tongass timber sales have been made under independent short-term contracts. Although some of these short-term contracts have been awarded to APC and KPC, most have been awarded to other contractors. Since the early 1980s, the Congress has expressed concern about the adverse impacts of the long-term contracts on competition for timber in southeast Alaska and on the Forest Service’s ability to effectively manage the Tongass. Part of the concern centered on the perceived competitive advantages to APC and KPC that resulted from differences between certain provisions of the long-term and short-term independent contracts. Another part of the concern centered on the relationship of the long-term contracts to the overall management of the Tongass National Forest and, more specifically, to issues related to other forest resources such as fish and wildlife. In response, the Congress passed the Tongass Timber Reform Act in 1990, stating that: “. . . it is in the national interest to modify the contracts in order to assure that valuable public resources in the Tongass National Forest are protected and wisely managed. Modification of the long-term timber sale contracts will enhance the balanced use of resources on the forest and promote fair competition within the southeast Alaska timber industry.” Among other things, the act directed the Secretary of Agriculture to unilaterally revise the long-term contracts in order to reflect nine specific modifications (see app. I for a complete list). A number of these modifications called for making long-term contracts consistent with short-term contracts in such respects as timber sale planning, environmental assessment, and the administration of road credits. Other provisions of the act added new environmental requirements, such as leaving timber buffers at least 100 feet in width along designated streams.
Four months after the act was passed, and pursuant to one of the act’s requirements, we issued a report to the Senate Committee on Energy and Natural Resources and the House Committee on Interior and Insular Affairs. That report described the Forest Service’s revisions to the long-term contracts for each of the nine modifications and discussed whether the changes reflected the modifications specified in section 301(c) of the act. We concluded that, with the exception of dealing with the administration of road credits, the contract changes complied with the act’s requirements. We also concluded that more time would be needed to determine how these changes were actually carried out. You requested that we review the Forest Service’s implementation of certain contract modifications and other provisions of the Tongass Timber Reform Act. As agreed with your office, we focused this report mainly on two issues—road credits and timber buffers. More specifically, we determined whether credits that timber harvesters receive for building harvest-related roads are used consistently between long-term and short-term timber sale contracts and whether buffers of standing timber have been left along designated streams as the act requires, and how the Forest Service monitors the buffers’ effectiveness. During our review, we also noted inconsistencies in the Forest Service’s documentation of the environmental significance of changes to timber harvest unit boundaries after environmental impact statements had been prepared. As agreed with your office, we included an analysis of this issue in this report. To address the first objective, we analyzed the use of road credits by short-term contractors in fiscal years 1990-93 and compared this usage with road credits used by long-term contractors. Using Forest Service accounting data, we also determined the extent to which the long-term contractors had applied road credits against the cost of purchasing Tongass timber since the inception of the long-term contracts through the end of fiscal year 1993. To address the second objective, we reviewed and analyzed the results of buffer monitoring conducted in 1992 and 1993 by the Forest Service and the Alaska Department of Fish and Game, reviewed the monitoring reports for 1991-93 from the Forest Service’s Alaska Region and visited the Craig and Thorne Bay Ranger Districts within the Tongass National Forest to observe stream buffers. We also reviewed changes made in buffer-related policies and procedures by the Forest Service’s Alaska Region in 1993-94. To address the third objective, we reviewed and compared the planned harvest unit boundary maps included in the environmental impact statements with maps of the actual harvest boundaries. On the basis of discussions with the Forest Service, the state of Alaska’s Department of Environmental Conservation, and a private conservation group, we selected 19 APC timber harvest units and 41 KPC harvest units where the boundary changes may have been significant enough to require further environmental analyses. Our sample constituted about 33 percent of the APC units and 18 percent of the KPC units in which harvests had occurred outside the original boundaries. To determine the adequacy of documentation, we reviewed and analyzed harvest unit files. 
More specifically, we determined whether the files contained evidence that the forest supervisor had determined that the proposed boundary changes would not significantly change the effects discussed in the environmental impact statement or that the change was significant and would require a supplement to the environmental impact statement. In conducting our work, we also obtained additional information and comments from the Forest Service, the state of Alaska, timber industry officials, and representatives of conservation groups. Within the Forest Service, we performed work at the headquarters in Washington, D.C.; the Alaska Regional Office in Juneau, Alaska; the Ketchikan Area Office in Ketchikan, Alaska; and the Thorne Bay Ranger District in Thorne Bay, Alaska, and the Craig Ranger District in Craig, Alaska. Our work with Forest Service officials was focused on the timber management and wildlife and fisheries staffs. In September 1993, while our review was under way, APC closed its pulp mill, charging that it was losing money because the prices it paid for timber as a result of the long-term contract modifications were too high. The Forest Service responded that closure of the pulp mill constituted a breach of contract, and in April 1994 the Forest Service terminated APC’s long-term contract. Although the APC contract is not active, we elected to retain certain data on APC in this report for illustrative purposes, and also because the courts have not yet ruled on the Forest Service’s action in terminating the contract. We conducted our review between September 1992 and October 1994 in accordance with generally accepted government auditing standards. As requested, we did not obtain official agency comments on a draft of this report. However, the information in this report was discussed with timber management officials, including the Director, Timber Management Staff, at Forest Service headquarters, the Director’s counterpart in the Alaska Region, and officials in the Department of Agriculture’s Office of General Counsel. As chapter 2 will discuss, these officials disagreed with our conclusions about purchaser road credits. In other respects, however, they agreed that the information presented was accurate. We have incorporated their suggested changes where appropriate. Purchasers of timber in the Tongass National Forest often pay for part of the timber they purchase with credits they have received for building harvest-related roads. The Tongass Timber Reform Act required modifications to KPC’s and APC’s long-term contracts to ensure that credits KPC and APC received for building such roads would be provided in a manner consistent with procedures used in providing road credits to short-term contractors. This provision was aimed at eliminating KPC’s and APC’s competitive advantage of being able to maintain certain road credits for much longer periods of time than short-term contractors. As we pointed out in our March 1991 report, the Forest Service did not modify the APC and KPC contracts to address this provision of the act. Forest Service officials continue to believe this contract modification is not required. They maintain that consistency already exists because road credits are canceled at the end of all timber sale contracts, whether long-term or short-term. However, this approach leaves the long-term contractors’ competitive advantage intact and is not consistent with congressional direction that the contracts be modified.
Harvesting timber often requires that the company harvesting the timber build roads to move logging equipment in and out of the harvest area and transport harvested logs. As compensation to the timber purchaser, the Forest Service gives road credits equal to the estimated cost of building the roads. Timber purchasers can use these credits instead of cash to pay for timber. Certain limitations apply to road credits used to pay for harvested timber. When the Forest Service prepares a timber sale, it establishes a base value for the timber. This base value must be paid in cash. For example, if a timber sale has a base value of $400,000 and is sold under competitive bid for $900,000, the purchaser must pay the base value ($400,000) in cash. The remaining $500,000 can be paid in whole or in part with road credits. Because timber purchasers cannot use road credits to pay the entire cost of the timber, situations may arise in which they cannot use all the road credits they have earned. To continue the example above, if the purchaser earned road credits worth $700,000, the purchaser could apply only $500,000 in credits against the cost of the timber, because the difference between the purchase price and the base value is only $500,000. Those road credits that can be applied against the cost of timber are called “effective”; those road credits left over are called “ineffective.” In this example, the timber purchaser has $500,000 of effective credits and $200,000 of ineffective credits. Under Forest Service contracts, a timber purchaser retains ineffective road credits until the expiration of the timber sale contract in which the credits are earned. Although such credits may appear valueless, for long-term contractors they can become effective—and therefore acquire value—if the timber’s purchase price is adjusted upwards to reflect higher current market values for timber. Again using the earlier example, a subsequent adjustment in the purchase price from the original $900,000 to $1 million would also mean that $100,000 of ineffective road credits would be made effective. This additional amount could be used to offset the increased purchase price. APC and KPC have made extensive use of road credits as a means of paying for timber. Each used road credits to pay for about three-fourths of the value of timber harvested under its long-term contract. Through the end of fiscal year 1993, the value of timber sold to the two companies since the inception of the long-term contracts has been about $268 million (in constant 1993 dollars). The two companies used road credits to pay for 75 percent, or $201 million, of the total price of timber. KPC used road credits to pay for 73 percent of its timber; APC used road credits to pay for 79 percent. (See table 2.1.) The Forest Service did not revise the provision on the use of road credits in its long-term contracts to make them similar to the provision in its short-term contracts, as required by the reform act. Because this modification was not made, APC and KPC have been able to use ineffective road credits from timber offering to timber offering throughout the remaining life of their long-term contracts. By contrast, ineffective road credits for short-term contracts are canceled at the end of the contracts. We pointed out this inconsistency in our March 1991 report and recommended that action be taken. The Forest Service, however, has not acted on our recommendation. 
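The split between effective and ineffective credits in the example above is straightforward arithmetic; the following minimal sketch (function name hypothetical) reproduces it, including the conversion of ineffective credits when the purchase price is later adjusted upward.

```python
def split_road_credits(purchase_price: float, base_value: float,
                       credits_earned: float) -> tuple[float, float]:
    """Return (effective, ineffective) road credits for a timber sale.

    The base value must be paid in cash, so credits can offset at most the
    difference between the purchase price and the base value.
    """
    usable = max(purchase_price - base_value, 0.0)
    effective = min(credits_earned, usable)
    return effective, credits_earned - effective

# The example from the text: $400,000 base value, $900,000 purchase price,
# and $700,000 in earned credits yield $500,000 effective / $200,000 ineffective.
assert split_road_credits(900_000, 400_000, 700_000) == (500_000, 200_000)

# An upward price adjustment to $1 million makes $100,000 of the formerly
# ineffective credits effective, as described for the long-term contracts.
assert split_road_credits(1_000_000, 400_000, 700_000) == (600_000, 100_000)
```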
The Forest Service maintained—and continues to do so—that for ineffective road credits, no modification was needed to make the treatment of road credits consistent between long-term and short-term contracts. The Forest Service believes that the treatment is consistent, in that ineffective road credits are terminated at the end of either type of contract. It maintains that the amount of time the long-term contractors could hold the credits is not relevant. Our concern about the Forest Service’s argument is that although ineffective credits are canceled at the end of both types of contracts, long-term contractors continue to hold a competitive advantage. Short-term contractors can use ineffective road credits only during the length of their contracts, which are considerably shorter than the 50-year long-term contracts—short-term contracts usually last 3 to 5 years. The long-term contractors are able to keep these credits available for possible use over a longer period by transferring them from timber offering to timber offering. Their competitive advantage is that they have greater ability to retain and use ineffective credits to offset timber payments if the price of timber rises during the life of their contracts. In our view, the language of the Tongass Timber Reform Act, as well as its legislative history, makes it clear that the Congress intended the Forest Service to make changes in road credits so that they would be treated substantially the same under both long- and short-term contracts. Comparisons between the two types of contracts show that this competitive advantage can be substantial. For example, as of March 1993, APC and KPC held $5.4 million in ineffective road credits; four short-term contractors held $3 million in ineffective road credits. The contracts held by the short-term contractors are scheduled to expire in 1995 and 1996, at which time any remaining ineffective credits will be canceled. By contrast, KPC retains the ability to convert or transfer its ineffective credits between offerings until the year 2004. APC would have been able to carry forward its ineffective credits to 2011 had its contract not been terminated. The following are more specific illustrations of how KPC has been able to use ineffective road credits in ways that short-term timber contract holders cannot: In March 1992, KPC transferred $7,510,248 in road credits it had received from five previous timber offerings back to the long-term contract’s main account for use in subsequent offerings. Of this amount, only $26,086 was effective road credits. Had the credits been treated consistently with those of short-term contracts, KPC would not have been able to transfer the $7,484,162 in ineffective credits. In January 1993, KPC paid cash in the amount of $407,747 instead of using road credits for timber that it had harvested. Had this been a short-term contract, the financial transaction would have been closed and the credits could not have been used. However, because it was under a long-term contract, KPC was able to transfer ineffective road credits from other offerings to this one, replace the cash with ineffective credits, and thus receive a refund of the cash it paid above the base rate. In our March 1991 report, we noted that the Forest Service did not modify the long-term timber sales contracts to comply with the requirements of the reform act that road credits be treated substantially the same under both long- and short-term contracts. 
We pointed out that the language of the Tongass Timber Reform Act, as well as its legislative history, makes it clear that the Congress intended the Forest Service to make changes in road credits so that they would be treated substantially the same under both long- and short-term contracts. In that report, we recommended that the Forest Service revise the contracts accordingly. We continue to believe that ineffective road credits resulting from each timber offering should be canceled under KPC’s long-term contract after each timber offering is completed. Unless the Forest Service revises KPC’s long-term contract to bring this change about, KPC will continue to have a competitive advantage over short-term timber contract holders. Our conclusions would also be applicable to APC if the Forest Service had not terminated APC’s long-term contract or if for some reason APC’s contract is reinstated in the future. In its response to our earlier report and in its discussions on a draft of this report, the Forest Service has continued to maintain that its current policy complies with the act and intends to take no action to modify the provision for road credits in long-term contracts. The Forest Service maintains that the treatment of road credits is consistent, in that ineffective road credits are terminated at the end of either type of contract. It maintains that the length of time that the long-term contractors can hold the road credits is not relevant. Our concern about the Forest Service’s argument is that although ineffective credits are canceled at the end of both types of contracts, long-term contractors continue to hold a competitive advantage. Their competitive advantage is that they have greater ability to retain and use ineffective credits to offset timber payments if the price of timber rises during the life of their contracts. In our view, the language of the Tongass Timber Reform Act, as well as its legislative history, makes it clear that the Congress intended the Forest Service to make changes in road credits so that they would be treated substantially the same under both long- and short-term contracts. In light of the Forest Service’s position that it needs to take no action to comply with the Tongass Timber Reform Act’s provision on road credits, the Congress may wish to consider directing the Secretary of Agriculture to modify the Ketchikan Pulp contract so that ineffective road credits generated during a timber offering would be canceled after the timber offering is completed. The Tongass Timber Reform Act directs the Forest Service to protect fish and wildlife habitat in streamside, or “riparian,” areas of harvest units by designating 100-foot buffers of timber to be left standing along the sides of many streams in timber harvest areas. During inspections of these buffers in 1992 and 1993, however, both the Forest Service and the state of Alaska found buffers that, at some point along their length, did not meet the minimum 100-foot width requirement. The Forest Service has since taken sufficient steps to ensure greater compliance with this requirement. The Forest Service’s management plan for the Tongass National Forest, as well as its agreement with the state of Alaska for managing water quality, calls for monitoring the effectiveness of buffers. We found that before 1994, the Forest Service’s monitoring efforts had been limited in scope and often did not include measurements against important criteria that could help determine how effectively buffers were working.
This situation was partly the result of the lack of specific monitoring guidance from the Alaska Regional Office. In fiscal year 1994, the Forest Service implemented a new program to monitor buffers’ effectiveness that, among other things, provides clearer direction for the types of information to be gathered. The reform act requires that timber harvesters leave 100-foot buffers of standing timber along two classes of streams in the Tongass National Forest—class I streams and class II streams that flow directly into class I streams: Class I streams are perennial or intermittent streams that (1) are direct sources of domestic-use water; (2) provide spawning, rearing, or migration habitat for migratory and resident fish; or (3) have a major effect on the water quality of another class I stream. Class II streams that flow directly into a class I stream are perennial or intermittent streams that (1) provide spawning and rearing habitat for resident fish or (2) have moderate influence on the water quality of other class I or class II streams. Such buffers are designed to protect riparian areas, which are important in such ways as providing fish and wildlife habitat, protecting stream channels and stream banks, and stabilizing floodplains. Whenever the stream lies within the harvest area, the act requires a 100-foot buffer on each side. Whenever the stream forms a boundary of the harvest area, the buffer must be at least 100 feet wide on the side where timber is to be harvested. The act required buffers for those timber harvest units from which timber was either sold or released for harvest on or after March 1, 1990. The Forest Service took two main steps to implement this provision of the act. First, it modified APC’s and KPC’s long-term contracts to require that buffers of at least 100 feet be established along class I and class II streams. Second, the Forest Service modified its regional Soil and Water Conservation Handbook in February 1991 to incorporate changes resulting from the act. The handbook now identifies the management practices needed to maintain and protect water quality and fisheries habitat and to minimize adverse effects on riparian areas from logging and other land-disturbing management activities. The handbook’s changes reinforce the importance of the buffers by calling for special attention to land and vegetation for 100 feet from the edges of all streams, lakes, and other bodies of water. Under an agreement with the Alaska Department of Environmental Conservation, the Forest Service is to monitor how well the buffers have been implemented. Among other things, the Forest Service is to determine whether established buffers comply with applicable standards and guidelines, including checking whether the buffers are at least 100 feet wide. In addition to the Forest Service’s monitoring, the Alaska Departments of Fish and Game and Environmental Conservation monitor buffer widths. On-site monitoring inspections during 1992 and 1993 by the Forest Service and the Department of Fish and Game of portions of KPC’s and APC’s buffers showed instances in which the 100-foot minimum requirement was not met. More specifically: In September 1992, the Department of Fish and Game reported that during an inspection of harvest units on northern Prince of Wales Island, at least 16 of the 20 buffer measurements taken did not meet the 100-foot requirement. The narrowest portions of the buffers measured were about 50 feet wide, and portions of 11 buffers were less than 75 feet wide. 
In October 1992, Thorne Bay Ranger District staff made 132 buffer measurements and found that portions of 38 buffers—almost 29 percent—were less than 100 feet wide; most were narrower by 10 to 20 feet. In July 1993, an interdisciplinary team from the Sitka Ranger District reviewed more than 120 timber harvest units and found that portions of the buffers in more than 100 of the units were less than 100 feet wide. However, these buffers were usually only narrower by a few feet. The inspectors noted that such factors as uneven terrain, dense vegetation, and meandering, multichannel stream courses can lead to errors in designating buffers and adhering to minimum widths across the many miles of riparian areas affected by timber harvests. Changes have been made to address the problems identified in the inspections of buffer widths by the Forest Service and the Alaska Department of Fish and Game. Each of the three area offices of the Tongass National Forest—Ketchikan, Stikine, and Chatham—recognized the need to take corrective action to attain a higher degree of conformity with the requirement and has taken actions to ensure greater compliance. The Ketchikan area office, where the greatest concentration of buffers exists, provides an example. In March 1993, in response to a December 1992 directive from the area office, the area’s three district rangers reported that corrective actions had either been taken or would be taken in the near future. For example, the rangers said that a certification statement on buffer widths had been added to the planning documents for all harvest units, cloth tapes and laser guns were being used to provide precise measurements of buffer widths, and district personnel received training on buffer measurements and other aspects of harvest unit layout. Similar steps have been taken or are under way in the Stikine and Chatham areas. We believe the steps taken at the area and district levels will help ensure that buffers with the appropriate widths are established. The Tongass Land Management Plan and the Forest Service’s agreement with the Alaska Department of Environmental Conservation specify that the Forest Service is to monitor the effectiveness of its projects, activities, and practices. As part of its monitoring effort, the Forest Service is to determine if buffers have been effective in minimizing the adverse effects that logging and other land-disturbing activities could have on riparian areas. We found that before 1994, the Forest Service did not have a regional program to monitor the buffers’ effectiveness. Each of the area offices had its own monitoring procedures. However, these procedures to monitor buffer effectiveness were limited in scope and often did not include measurements against important criteria (such as water quality) needed to determine how effectively buffers were working. For example, within the Stikine area, monitoring of the buffers’ effectiveness consisted of visual observations of the extent to which the buffers contained timber that had been blown down by wind. While these observations yielded insights into the relative lack of effectiveness of buffers with blown-down timber, the focus on this single characteristic left many questions about effectiveness unaddressed. Similarly, the Ketchikan area limited its monitoring efforts to steep, deeply cut drainages. Again, the efforts yielded useful information, but the effectiveness of buffers that did not fall into this one limited category went largely unaddressed.
According to Stikine area officials, the lack of sufficient funds, staff, and monitoring objectives was the primary reason why monitoring buffers’ effectiveness had been limited. In addition, Ketchikan area officials told us that more specific direction was needed from the Alaska Regional Office identifying the kinds of information needed to monitor buffers’ effectiveness. Alaska Regional Office officials said that they initiated a monitoring project in 1992 that would lead to establishing a regionwide program to monitor buffers’ effectiveness. The project reviewed the condition of buffers, evaluated their effectiveness at maintaining riparian habitat and water quality, and recommended improvements to buffers’ design. The project identified six types of information for use in assessing buffers’ effectiveness, including measuring the volume of large woody debris in a stream and determining the stability of stream banks. According to the regional office monitoring coordinator, the project was tested at eight sites in the Chatham area in 1993. For example, in June 1993 the Forest Service and the Alaska Department of Environmental Conservation jointly monitored the effectiveness of two buffers along a class II stream. The environmental specialist with the Alaska Department of Environmental Conservation told us preliminary indications showed that the two buffers were meeting expectations in being able to protect riparian areas. The regional office monitoring coordinator also told us that the 1994 buffer monitoring plans for each of the area offices included the types of information identified as contributing to the evaluation of buffers’ effectiveness in the eight-site project. Currently, each of the three areas is also participating in a multiyear, forestwide study of the stability and effectiveness of stream buffers. According to the regional monitoring coordinator, the interim results of the study will be available in the spring of 1995. The Forest Service has taken steps to improve both monitoring the width of buffers and evaluating their effectiveness. These steps should help ensure that buffers more consistently meet minimum width requirements and that their overall effectiveness is assessed more systematically. Because the buffer requirement is relatively new and because the effectiveness of buffers has been studied only to a limited degree, more time will be needed to determine how well they are working to help protect fish and wildlife habitat in timber harvest areas. If the boundary of a timber harvest unit is changed after the environmental impact statement (EIS) for the area has already been prepared, the Forest Service’s policy requires that forest supervisors determine and document whether the changes are environmentally significant enough to require additional environmental study. Forest supervisors were not, in all cases, documenting the environmental significance of the harvest units’ boundary changes or the need for additional analysis beyond what had been described in the existing EIS. This was particularly the case for KPC’s harvest units. We examined 41 instances in which boundary changes had occurred in areas harvested by KPC and found that in 39 instances the documentation was not adequate. In 17 instances, there was no documentation at all, and in 22 instances the documentation had not been reviewed according to the Forest Service’s policy.
We also examined 19 instances in which boundary changes had occurred in areas harvested by APC and found that adequate documentation was present in 18 of them. As a result, the Forest Service had no assurance that the environmental consequences of the boundary changes were analyzed. During our review, in October 1993 the current forest supervisor responsible for KPC’s harvest units sent instructions to district rangers detailing a process for assessing boundary changes and specifically stated that he would document the environmental significance of any changes and the need for any additional environmental analysis. Under the Forest Service’s policy and in compliance with the National Environmental Policy Act, the Forest Service is required to assess the environmental impacts of proposed timber harvests and prepare an EIS. Among other things, an EIS documents the location and design of the planned timber harvest units within the area covered by the timber offering and identifies the volume of timber to be cut. For a number of reasons, the boundaries of timber harvest units analyzed in the EIS may subsequently be revised. At the time the EIS is developed, precise information about the volume of economically harvestable timber, unique habitat for endangered species, or other specific characteristics of the land may not be known with complete accuracy. For example, more detailed on-site review could show that the planned boundaries contain less harvestable timber than originally projected or that additional eagle nesting areas or streams requiring buffer protection might be found. To deal with such circumstances and still provide the needed volume of harvestable timber, boundary adjustments may be needed. However, by this time the EIS may have been developed, made available for comment, and approved. The Forest Service’s policy contains several requirements for assessing and documenting the environmental effects of boundary changes made after environmental review has already been completed. The EIS specifies that for any proposed action (such as a boundary change) that deviates from a planned activity, the forest supervisor is to document the environmental significance of the proposed action. In doing so, if the forest supervisor determines that the impacts of the change do not deviate significantly from the impacts discussed in the EIS, the timber sale can proceed without further environmental study. However, if the forest supervisor determines that the change is significant, a supplemental EIS must be prepared. Contrary to the Forest Service’s policy, forest supervisors had not in all cases documented the environmental significance of changes to harvest unit boundaries or the need for additional environmental analysis—particularly for KPC’s harvest units. This situation occurred primarily because the forest supervisor inappropriately delegated his authority to district rangers to determine if boundary changes were significant and did not require the district rangers to provide documentation if they determined that the change was not significant. The Forest Service’s policy does not allow this authority to be delegated to district rangers and in all cases requires documentation of the environmental significance. We reviewed the files for 60 harvest units—19 for APC and 41 for KPC—that had boundary changes after the EIS had been prepared. These units represented about 33 percent of APC’s units and 18 percent of KPC’s units in which harvests had occurred outside the original boundaries.
Adequate documentation was present in 18 of the 19 files for APC’s units but in only 2 of the 41 files for KPC’s units. More specifically, for KPC’s units: 16 units had no documentation at all of the environmental significance of the boundary changes; 1 unit had adequate documentation of the environmental significance of one boundary change but no documentation for a second boundary change; and 22 units had documentation prepared by someone other than the forest supervisor—such as a district ranger—with no indication that the forest supervisor had reviewed the results. Guidance from the region places the responsibility for such determinations with the forest supervisor. Documentation of environmental impacts is important because it clearly demonstrates that the impacts were considered. However, the lack of documentation goes beyond simply being out of compliance with the Forest Service’s policy. When no documentation was present in the file, the Forest Service had no assurance that the environmental significance of the boundary changes had actually been analyzed. While the absence of a forest supervisor’s review of documentation may seem of less concern than the absence of documentation altogether, the absence of review has been a concern that the Forest Service has tried to correct for some time. In a November 1990 review, personnel in the Alaska Region noted that the forest supervisor responsible for KPC’s harvest units at that time had inappropriately delegated to others the authority to make determinations about the environmental significance of boundary changes. Contrary to the Forest Service’s policy, the delegation of authority did not require documentation if it was determined that the boundary change was not significant. The Alaska Region personnel recommended that the delegation of authority be withdrawn. When those personnel followed up in February 1992, they noted that the practice had apparently stopped since the forest supervisor had verbally withdrawn his delegation of authority. However, 9 of the 22 instances we examined in which the forest supervisor’s review was lacking occurred after February 1992. We discussed our findings with the current forest supervisor and he agreed that there was a need for better documentation of boundary changes and their significance. In October 1993, the forest supervisor sent a letter to district rangers setting forth a detailed five-step process for assessing boundary changes and specifically stating that the forest supervisor will determine the significance of any changes and the action necessary. The Forest Service needs to ensure that the problems of missing or inadequate documentation of the environmental significance of boundary changes to timber harvest units are addressed. In recent years, although the problem has been noted, progress in correcting it has been slow. Improved compliance is important in providing assurance that environmental concerns associated with timber harvesting activities under long-term contracts have been fully addressed. Accordingly, we believe the Alaska Regional Office needs to continue its oversight of forest supervisors’ compliance with the documentation requirements for changes to harvest unit boundaries that are made after EIS’s have been issued.
To ensure full consideration and disclosure of the environmental impacts of boundary changes to harvest units, we recommend that the Secretary of Agriculture direct the Chief of the Forest Service to require Alaska Regional Office officials to periodically check to ensure that forest supervisors are properly documenting the environmental significance of boundary changes to timber harvest units made after EIS’s have been issued in the Tongass National Forest. We discussed the facts and our conclusions with the Forest Service officials responsible for timber management activities at headquarters and the Alaska Regional Office. These officials generally agreed with our facts and conclusions concerning documenting changes to timber harvest units and provided some technical clarifications that we incorporated, as appropriate. | Pursuant to a congressional request, GAO reviewed the Forest Service's implementation of certain unilateral modifications to long-term contracts in Alaska and other requirements of the Tongass Timber Reform Act, focusing on whether: (1) road credits are used consistently between long-term contracts and short-term contracts; (2) buffers of standing timber have been left along designated streams as required; and (3) the Forest Service is requiring full documentation of environmental effects whenever changes are made to timber harvest area boundaries. GAO found that: (1) the Forest Service believes it treats road credits consistently across all contracts, since unused road credits are cancelled at the end of all timber sales contracts; (2) the long-term contractors' ability to carry unused road credits forward for longer periods than short-term contractors gives them an unfair competitive advantage; (3) some streamside buffers did not meet the 100-foot minimum width during the first years immediately following the act's passage, but the Forest Service has since taken steps to enforce this requirement; (4) in 1994, the Forest Service issued guidance and initiated a new monitoring program to ensure the buffers' effectiveness; (5) the Forest Service often does not document the environmental effects of timber harvest boundary changes; (6) in some instances, the forest supervisor has inappropriately delegated his documenting authority to district rangers and waived documentation where he believed boundary changes were insignificant; and (7) the forest supervisor has since withdrawn the authority delegation and established a detailed process for assessing boundary changes. |
You are an expert at summarizing long articles. Proceed to summarize the following text:
The Business Opportunity Development Reform Act of 1988 amended the Small Business Act to establish an annual governmentwide goal of awarding not less than 20 percent of prime contract dollars to small businesses. The Small Business Reauthorization Act of 1997 further amended the Small Business Act to increase this goal to not less than 23 percent. To help meet this goal, SBA annually establishes prime contract goals for various categories of small businesses for each federal agency. Although SBA is responsible for coordinating with executive branch agencies to move the federal government toward this mandated goal, agency heads are responsible for achieving the small business goals within their agencies. A 1978 report by the Senate Select Committee on Small Business noted that officials who were responsible for advocating for small business participation in federal government procurement often did not hold positions that were high enough in the agency structure to be effective. The 1978 amendment to the Small Business Act that established section 15(k)(3) addressed this issue by establishing a direct reporting relationship between the OSDBU director and the agency head or deputy head. The statute, as amended, specifies that the director would have supervisory authority over OSDBU staff, implement and execute the functions and duties under the relevant sections of the Small Business Act, and identify proposed solicitations that involved the bundling of contract requirements. (The Small Business Reauthorization Act of 1997 defines the bundling of contract requirements as the consolidation of two or more procurement requirements for goods or services previously provided or performed under separate smaller contracts into a solicitation of offers for a single contract that is likely to be unsuitable for award to a small business concern for various reasons.) Section 15(k) of the Small Business Act lists eight functions of OSDBU directors, as follows: Identifying proposed solicitations that involve significant bundling of contract requirements. Working with agency procurement officials to revise such proposed solicitations to increase the probability of participation by a small business. Facilitating the participation of small businesses as subcontractors if solicitations for bundled contracts are to be issued. Assisting small businesses in obtaining payments from an agency with which they have contracted. Helping small businesses acting as subcontractors to obtain payments from prime contractors. Making recommendations to contracting officers as to whether particular requirements should be set aside for small businesses. Maintaining supervisory authority over OSDBU personnel. Cooperating and consulting on a regular basis with SBA in carrying out OSDBU functions and duties under the Small Business Act and assigning a small business technical advisor to each office with an SBA-appointed procurement center representative. (A procurement center representative is an SBA staff member assigned to federal buying activities with major contracting programs to carry out SBA policies and programs.) For the purpose of our 2003 and 2010 surveys, we divided the procurement process into four steps: Acquisition planning involves developing an overall management strategy for the procurement process for a potential contract.
It takes place well in advance of a contract’s award date and generally involves both a close partnership between the program and procurement offices and the involvement of other key stakeholders. Solicitation development is the process of preparing requests for vendors to submit offers, or bids. Proposal evaluation occurs after potential contractors submit proposals that outline how they will fulfill the solicitation requirements. Agency personnel evaluate proposals for award and contracting officers award the contract. Monitoring, also known as surveillance, helps to determine a contractor’s progress and identify any factors that may delay performance. To carry out the functions listed in section 15(k) of the Small Business Act, OSDBU directors provide advice on small business matters and collaborate with the small business community. Some of the primary duties of OSDBU directors include advising agency leadership on small business matters; providing direction for developing and implementing policies and initiatives to help ensure that small businesses have the opportunity to compete for and receive a fair share of agency procurement; representing the agency at meetings, workgroups, and conferences related to small business; initiating and building partnerships with the small business community; providing agency acquisition and program personnel with leadership and oversight of education and training related to small business contracting; conducting reviews of small business programs; and serving as the agency liaison to SBA, including providing annual reports on agency activities, performance, and efforts to improve performance. OSDBU directors are not the only officials responsible for helping small businesses participate in federal procurement. At the agency level, the heads of procurement departments are responsible for implementing the small business programs at their agencies, including achieving program goals. Generally, staff within agency procurement departments who are assigned to work on small business issues (small business specialists) coordinate with OSDBU directors on their agencies’ small business programs. We found that 9 of the 16 agencies we reviewed were in compliance with the Small Business Act’s requirement that OSDBU directors be responsible only to and report directly to the agency or deputy agency head (see table 1). We determined that the remaining seven agencies were not in compliance. These agencies, which use various reporting structures, also were not in compliance in 2003, when we last assessed the reporting structure. The OSDBU directors at the compliant agencies cited benefits of the reporting relationship. OSDBU directors at agencies that were not complying with section 15(k)(3) differed in their views of the importance of reporting to the agency head or deputy head. Some agencies raised legal arguments that their reporting structures complied with the Small Business Act. However, none of the legal arguments that the agencies raised caused us to revise our conclusions or recommendations. For example, the Department of the Interior stated that the Assistant Secretary to whom the OSDBU director reported was the agency head for acquisition matters, in accordance with the FAR. We responded that Interior’s designation of the Assistant Secretary as its “agency head” for procurement powers did not mean that the person thereby became its agency head for purposes of section 15(k)(3). Several agencies also commented on the effectiveness of their small business programs and reporting structure.
During our interviews for this report, officials generally did not state that their agencies were complying with the requirement. Rather, they commented on how their current reporting structures were working. For example, officials at five agencies stated small business matters were not suffering as a result of the structure. At the compliant agencies, the OSDBU directors reported to the Deputy Secretary or an equivalent deputy agency head. (At NASA, the Administrator is the head of the agency.) For instance, at the Departments of the Air Force, Army, and Navy, the OSDBU directors reported to the Under Secretary for their respective departments, and the Under Secretary signed their performance appraisals. At Energy and HHS, the OSDBU directors reported to the Deputy Secretary, who also assessed their performance. In our 2003 report, we concluded that the Department of Education was not in compliance with section 15(k)(3). At that time, the OSDBU director did not report only to the Deputy Secretary but also reported to the Deputy Secretary’s Chief of Staff. Since then, the agency has changed its reporting structure to ensure that the OSDBU director is responsible only to the Deputy Secretary. The OSDBU director stated that she now had direct access to the Deputy Secretary, meeting with him for routine discussions about small business activities and related issues. In addition, the Deputy Secretary signed her performance appraisals. We also concluded in 2003 that EPA was not in compliance because the OSDBU director was not responsible only to the Administrator or Deputy Administrator. At that time, the OSDBU director told us she reported to the Deputy Chief of Staff, who also evaluated the OSDBU director’s performance. For this review, the OSDBU director told us that, while some matters were routed through the Deputy Chief of Staff, she ultimately reported to the Deputy Administrator. The Deputy Administrator signed her two most recent performance appraisals. As shown in table 1, seven agencies were not in compliance with section 15(k)(3). All of these agencies were also not in compliance in 2003. As in our prior report, we found that a variety of reporting structures were in place. OSDBU directors either reported to lower-level officials than the agency head or deputy or had delegated their OSDBU director responsibilities to officials who did not report to either the agency head or the deputy head. However, these arrangements do not meet the law’s intent and undermine the purpose of section 15(k)(3). To help ensure that the OSDBU’s responsibilities are effectively implemented, the act mandates that the OSDBU director—that is, the person actually carrying out the responsibilities—have direct access and be responsible only to the agency head or deputy. Appendix II provides details of the reporting arrangements at each agency. At the Departments of Commerce, the Interior, and Justice and the Social Security Administration (SSA), the OSDBU directors reported to officials at lower levels than the agency head or deputy head. For example, at the Department of Commerce, the organization chart showed that the OSDBU director reported to two lower-level officials—the Deputy Assistant Secretary for Administration and the Assistant Secretary for Administration. At the Department of the Interior, the OSDBU director reported to the Deputy Assistant Secretary for Budget, Finance, Performance, and Acquisition and to the Assistant Secretary, Policy, Management and Budget.
At the Justice Department, OSDBU officials told us that the current reporting structure was the same as in 2003. The OSDBU is located within the Justice Management Division, with the director under the supervision of the Deputy Assistant Attorney General for Policy, Management and Planning. SSA also had the same reporting structure that we documented in our 2003 report, with the OSDBU director reporting to the Deputy Commissioner, Office of Budget, Finance and Management, who is one of nine deputy commissioners managing programs and operations. The designated OSDBU directors at the Departments of Agriculture, State, and the Treasury delegated their responsibilities to officials who did not directly report to either the Secretaries or Deputy Secretaries. These arrangements were the same as those we determined in 2003 were not in compliance with the Small Business Act. At these agencies, an Assistant Secretary who managed the agency’s administrative functions was designated as the statutory OSDBU director. The Assistant Secretaries then delegated nearly all of their OSDBU responsibilities to lower-ranking officials who reported directly to the Assistant Secretaries. The lower-ranking officials thus became the de facto OSDBU directors. At the Department of Agriculture, for example, the designated OSDBU director was the Assistant Secretary for Administration, who reported to the Secretary and Deputy Secretary. However, the Assistant Secretary had delegated nearly all of his OSDBU responsibilities to a lower-level official who did not have direct access to the agency head or deputy head. At the Department of State, the Assistant Secretary for Administration was the designated OSDBU director. The Assistant Secretary, who reported to one of the department’s two Deputy Secretaries on small business matters, had delegated his OSDBU responsibilities to the Operations Director for the OSDBU, who reported directly to him. At Treasury, the Assistant Secretary of the Treasury for Management/Chief Financial Officer/Chief Performance Officer was the designated OSDBU director. However, the Director of the Office of Small Business Programs, an official who did not directly report to either the Secretary or the Deputy Secretary, was responsible for the day-to-day management of Treasury’s small business programs. The Director of the Office of Small Business Programs told us she spent 100 percent of her time on small business issues. OSDBU directors’ opinions varied on the importance of reporting only to the agency head or deputy head. The OSDBU directors at the nine agencies that were complying with section 15(k)(3) cited positive elements to this reporting relationship. Five of the nine OSDBU directors stated that reporting to the agency head or deputy showed top-level support for small business efforts and sent a message to the rest of the agency. For example, one OSDBU director explained that reporting directly to the agency head or deputy head helped ensure that he was viewed as equal to other senior managers. He noted that this relationship was important because it allowed him to participate in senior management meetings where decisions were made. Another OSDBU director stated that she had a strong relationship with senior management and did not hesitate to invite senior leaders to participate in small business outreach events. She added that if she did not report to the agency head or deputy, she would lose this rapport with senior leadership.
OSDBU directors at the seven agencies that were not complying with section 15(k)(3) differed on the importance of reporting to the agency head or deputy head. For example, two of these OSDBU directors thought that not reporting to the agency head or deputy was a problem. One director stated that reporting to the agency head or deputy could provide the OSDBU with more authority and enable it to collaborate more effectively with other offices. The other director noted that being too far down the reporting structure meant that she could not independently voice her opinion, especially when it differed from her supervisor’s. The OSDBU directors at the other five agencies did not see problems with the existing structure, stating that small business matters were not suffering as a result of the structure. For instance, one director stated that his agency’s structure worked well and that the agency’s small business initiatives were resulting in high marks on the SBA scorecard and effective relationships with other agency officials. This official noted that if he were to report directly to the Secretary or Deputy Secretary, small business efforts would compete against significant national foreign policy priorities. Another director stated that the OSDBU was getting all of the support it needed under the current reporting relationship. The director further explained that the office did not have a problem with resources and that he had a strong relationship with his supervisor. Additionally, he noted that any areas needing attention were communicated to higher management. Yet another director stated that his agency had a successful small business procurement program. He also cited accomplishments such as meeting small business contracting goals and increasing the number of small businesses with which the agency interacted. However, the Small Business Act requires that the OSDBU director have direct access to the agency head or deputy to help ensure that the OSDBU’s responsibilities are effectively implemented. As such, the statements made by the OSDBU directors at these five agencies do not justify their noncompliance with section 15(k)(3). SBA officials said the agency had also raised concerns about compliance with the reporting requirement during its surveillance reviews of federal agencies. These reviews are evaluations of small business contracting that assess (1) management of the small business programs, (2) compliance with regulations and published policies and procedures, (3) outreach programs focusing on small businesses, and (4) procurement documentation. When SBA finds that an agency does not have the required reporting relationship, it identifies this as a deficiency in the review report. Ongoing noncompliance with section 15(k)(3) undermines the intent of the act and may prevent some OSDBU directors from having direct access to top agency management. Given how long these agencies have not been in compliance with the requirement, at a minimum they have an obligation to explain their noncompliance to Congress and provide support for their need, if any, for greater statutory flexibility in establishing a reporting structure for their OSDBU director. Like the results of our 2003 survey, the responses of the 25 OSDBU directors we surveyed in 2010 indicated that they generally focused their procurement activities on the functions listed in section 15(k) of the Small Business Act. 
Most OSDBU directors reported they viewed five of the eight functions identified in section 15(k) as among their offices’ current duties, but the extent to which the individual OSDBUs carried out each activity varied. Directors who did not view a section 15(k) function as their responsibility generally reported that contracting, acquisition, or program staff performed it. Section 15(k) lists the functions of OSDBU directors but does not necessarily require directors to carry out these activities personally. Few OSDBU directors viewed non-15(k) procurement activities such as developing solicitations and evaluating proposals as roles of their OSDBU. For this report, we asked 25 OSDBU directors which of the functions listed in the Small Business Act they viewed as responsibilities of their offices. As shown in figure 1, at least 19 of the 25 OSDBU directors we surveyed reported they viewed five of the eight functions identified in section 15(k) of the Small Business Act as current duties of their office. These five functions included (1) having supervisory authority over OSDBU staff, (2) three functions involving contract bundling (that is, the consolidation of two or more procurement requirements for goods or services previously provided under separate smaller contracts), and (3) assisting small businesses to obtain payments from agencies. Fewer OSDBU directors (10 to 18) viewed the remaining three functions—reviewing individual acquisitions for small business set-asides, assisting small businesses to obtain payments from prime contractors, and assigning a small business technical advisor to offices with an SBA representative—as their responsibilities. The data show little change from the responses to our 2003 survey. We also asked OSDBU directors about the extent to which they carried out six of the eight section 15(k) functions, and their responses varied. Over half of those OSDBU directors who responded to the contract bundling questions reported that they carried out these functions to either a great or very great extent (see table 2). In contrast, six OSDBU directors reported having assisted small businesses to obtain payments from their agencies to a great or very great extent. Even fewer (two) reported having assisted small businesses to obtain payments from prime contractors to a great or very great extent. Of the 18 OSDBU directors who reported that reviewing or determining individual contracts that should be set aside for a small business was a function of their OSDBU, 13 stated they reviewed proposed small business set-asides for individual acquisitions in all or most cases. In their written comments, nine directors noted that they reviewed all acquisitions exceeding a certain amount for small business set-aside determinations. For instance, one of these directors explained that the agency had a regulation that prescribed policies, responsibilities, and procedures for clearing contracts over the simplified acquisition threshold ($150,000) that were not set aside or reserved for small business participation, including bundled contracts. We also asked the OSDBU directors to indicate the extent to which they cooperated and consulted with SBA in carrying out their responsibilities. Twenty-one directors reported that they cooperated and consulted with SBA to a great or very great extent. In their written comments, more than half of the directors noted they participated in SBA-sponsored activities and initiatives.
For instance, 13 reported attending or sending staff to monthly SBA Small Business Procurement Advisory Council meetings. The number of OSDBU directors surveyed who did not view a section 15(k) function as their current responsibility varied, depending on the specific function. The number ranged from 1 who did not view maintaining supervisory authority over OSDBU personnel as a function to 11 who did not view assisting small businesses to obtain payments from prime contractors as a responsibility. In their written comments and follow-up interviews, the directors who did not view a section 15(k) function as their responsibility generally stated that contracting, acquisition, or program staff performed it. Our survey results did not make clear the extent to which OSDBU directors were involved in the functions carried out by other agency staff. Appendix III provides details on the agency personnel other than OSDBU staff who carry out certain section 15(k) functions. In 2010, a smaller number of OSDBU directors than in 2003 viewed additional procurement activities such as developing solicitations, evaluating proposals, developing factors for evaluating solicitations, and monitoring small businesses as roles of the OSDBU (see fig. 2). For example, 3 directors reported that developing proposed solicitations was a role of the OSDBU in 2010, compared with 9 directors in 2003. The majority of the 22 directors who reported they did not carry out this function commented that their agencies’ contracting offices performed this role. Of these 22 directors, 6 reported that the OSDBU played a collaborative role, such as reviewing solicitation language. Additionally, in 2010, 11 directors viewed developing evaluation factors for solicitations at their agencies as a role of the OSDBU, compared with 15 in 2003. However, of the 14 directors who said they did not perform the function in 2010, 4 reported having some involvement in the process. For example, 1 director commented that the OSDBU had provided examples of solicitation evaluation criteria for agency procurements. OSDBU directors reported they collaborated with acquisition officials and conducted outreach to small businesses to promote small business contracting. For example, nearly all of the 25 OSDBU directors we surveyed indicated they were involved in acquisition planning. At the seven agencies with major contracting activity, OSDBU officials told us that top agency leaders also participated in outreach events and issued agency policy statements supporting small business efforts. Agencies were held accountable for fostering small business contracting through SBA’s small business goals, and SBA had initiated efforts to identify promising practices OSDBUs can adopt to further small business contracting. Some OSDBU directors we surveyed reported that inadequate staffing levels and limited budgetary resources were challenges to carrying out their responsibilities. OSDBU directors use a variety of methods—including internal and external collaboration, outreach to small businesses, and oversight of agency small business contracting—to facilitate small business contracting. The OSDBU directors we surveyed and spoke with said that both internal and external collaboration were important to their efforts to promote small business contracting. Twenty-three of the 25 OSDBU directors surveyed viewed involvement in acquisition planning as a role or function of their OSDBUs.
Of this number, 15 directors reported they carried out this function to a great or very great extent. For instance, OSDBU directors reported acquisition planning activities such as preparing annual forecasts of contracting opportunities, being a voting member of the agency’s senior procurement council, participating in contracting agreement and review board meetings, and reviewing and providing feedback on draft acquisition plans. Further, the OSDBU directors we interviewed at the seven agencies with major contracting activity all viewed relationships with acquisition staff as important in promoting small business contracting. For example, the OSDBU director at DLA told us that establishing relationships with acquisition management was the most important part of promoting small business contracting. In addition, the OSDBU director at HHS described small business contracting as a “three-legged stool,” with acquisition staff, program staff, and OSDBU staff working together to support it. Four of the seven OSDBU directors stressed the importance of working with acquisition officials from the start of a project rather than trying to involve them after a project was under way. For instance, the director at HHS stated that the OSDBU spent a great deal of time on early acquisition planning to help ensure that small businesses received consideration throughout the decision-making process. Officials at the Air Force OSDBU also stated that early involvement gave the office time to review acquisitions and discuss small business involvement. These officials noted that reviewing proposed actions late in the process placed the OSDBU in a defensive position. Six of the seven directors we interviewed also told us they participated in acquisition teams as advocates for small business. For instance, Energy has established a monthly Advanced Planning Acquisition Team comprising OSDBU, procurement, acquisition, and SBA officials. The purpose of the team is to review proposed strategies for new and existing acquisitions to identify prime and subcontracting opportunities for small businesses. The OSDBU directors at both Army and DLA are members of acquisition review boards for contracts of more than $500 million and $20 million, respectively. The seven OSDBU directors we interviewed also stated that small business staff in the field, often referred to as small business specialists, reviewed proposed acquisitions for opportunities. For example, the Army OSDBU director told us that the OSDBU reviewed all acquisitions of more than $500 million but that these specialists reviewed all acquisitions over $150,000 that were not set aside for small businesses and worked with acquisition staff to ensure that small businesses were considered. In addition, the OSDBU directors we interviewed collaborated with other federal agencies to maximize small business contracting. All seven participate in the Federal OSDBU Directors Interagency Council, an informal organization that meets to exchange information on initiatives and processes related to small businesses and outreach events that promote small business contracting. Among other things, the council seeks to identify best practices and share ideas and experiences among federal agencies and private industry to help leverage resources and develop solutions for promoting small business involvement. Nearly all of the OSDBU directors we surveyed saw outreach activities to small businesses as a function of their offices.
For instance, 23 of the 25 OSDBU directors viewed hosting conferences for small businesses as one of their responsibilities, and 23 had hosted such an event in the last 2 years. More specifically, these 23 agencies had hosted an average of 20 conferences in the past 24 months. For instance, one OSDBU director reported the agency had sponsored conferences of varying sizes to address contractual requirements that program offices noticed or forecasted. The same director noted the conferences were typically conducted in a networking or “business fair” format, allowing vendors to engage program and contracting officials. In addition, most of the OSDBU directors surveyed (20 of 25) saw sponsoring training programs for small businesses as one of their responsibilities, and 18 had hosted such an event in the last 2 years. For example, OSDBU directors described sponsoring training for small businesses in specific socioeconomic groups, providing one-on-one training, and offering workshops focused on specific skills such as writing proposals and teaming with larger businesses. The seven OSDBU directors we interviewed provided examples of outreach to the small business community. For example, the OSDBU director at DLA said the OSDBU worked closely with the American Legion to promote contracting to small businesses owned by service-disabled veterans. According to the NASA OSDBU director, small business staff held over 100 outreach events in fiscal year 2010. Other OSDBU officials described their efforts to disseminate information to small businesses through Web sites. For instance, Navy’s OSDBU director stated the agency was working to standardize the Web site formats used by its various units for outreach to small businesses. He explained that OSDBU staff had reviewed the Web sites to determine whether a small business could access information easily and had found retrieving information difficult because each of the sites was set up differently. The Air Force maintains a small business Web site that provides small businesses with information on the agency’s contracting opportunities. An OSDBU official stated the site was comprehensive and included the contact information for small business staff, long-range acquisition data, information on various outreach efforts, links to the Air Force’s quarterly newsletter, and articles on small business issues. OSDBUs also provide oversight of agencies’ small business contracting. The FAR requires annual assessments of the extent to which small businesses are receiving a fair share of federal procurements. We found that these assessments varied across the seven agencies we interviewed. For example, the OSDBU at Energy conducts quarterly reviews of program offices. Offices are rated using a color-coded system—green for reaching 95 percent of small business goals and yellow or red for lower percentages. The NASA OSDBU produces monthly reports on the agency’s individual space and research centers’ progress in meeting small business goals and collects data on the centers’ outreach and training efforts. The OSDBU follows up with individual centers that do not meet their goals. At Army, the OSDBU director tracks agency performance using Federal Procurement Data System—Next Generation data. She noted that if the data indicated decreases in small business contracting from prior-year trends, agency leadership would be informed.
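Energy’s color-coded quarterly rating, described above, amounts to a simple threshold rule. The minimal sketch below illustrates it; the 95 percent cutoff for green comes from the description above, but the dividing line between yellow and red was not specified in our review, so the 75 percent floor shown (along with the function and variable names) is purely an illustrative assumption.

```python
def rate_program_office(attainment: float, yellow_floor: float = 0.75) -> str:
    """Map a program office's small business goal attainment to a color rating.

    The 0.95 green threshold reflects the rating scheme described in this
    report; yellow_floor, separating yellow from red, is a hypothetical
    value used only for illustration.
    """
    if attainment < 0:
        raise ValueError("attainment cannot be negative")
    if attainment >= 0.95:
        return "green"
    if attainment >= yellow_floor:
        return "yellow"
    return "red"

# An office that achieved 96 percent of its goal rates green;
# one at 80 percent rates yellow under the assumed floor.
print(rate_program_office(0.96))  # green
print(rate_program_office(0.80))  # yellow
```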
All seven agencies we interviewed also conducted program management reviews (PMR) or similar reviews to help ensure that small businesses were being considered for contracts and that internal controls for ongoing contracts were followed. During PMRs, officials review a sample of existing contracts to determine whether proper procedures and internal controls, including those related to small business, have been followed. The OSDBU directors at four agencies (Army, DLA, HHS, and NASA) reported that OSDBU and acquisition officials jointly conducted these reviews. At the Air Force, small business staff in field offices conducted the PMRs, and at Energy, procurement staff conducted them. The Navy OSDBU had replaced PMRs with procurement performance management assessments, which review contracting offices and commands in 11 categories, including involvement of small business in acquisition planning and procurement planning, marketing strategies and approaches, number of set-asides, and inclusion of small business clauses in all contracts. OSDBU officials at the seven agencies we contacted told us that top agency leaders supported small business contracting by participating in outreach events and issuing policy statements. For instance, these OSDBU directors cited the following examples of their agency heads’ participation in outreach events:
The DLA Director took part in public forums such as conferences and emphasized small business contracting when interviewed for a magazine article.
The NASA Administrator attended the agency’s second annual small business symposium and award ceremony to personally recognize the achievements of small businesses, while the Deputy Administrator handed out awards.
In fiscal year 2010, top Army management attended five major outreach events, including the National Veterans Conference, an event that the OSDBU had hosted for 6 years.
Top management also promoted small business contracting by issuing agency policy statements or memos. For instance:
The Secretary of the Air Force issued a joint memo with the Chief of Staff in January 2009 that encouraged officials to “aggressively seek” small businesses owned by service-disabled veterans for Air Force contracts and encouraged using the OSDBU to help identify strategies and capable firms.
The Secretary of Energy issued a memo in November 2009 asking all program offices to work with the OSDBU to promote prime contracting, subcontracting, and financial assistance opportunities for small businesses, including small disadvantaged, Historically Underutilized Business Zone (HUBZone), 8(a), women-owned, and service-disabled veteran-owned small businesses.
In June 2009, the Secretary of the Navy issued a memo establishing the Department of the Navy Small Business Award to recognize individuals and teams who have made outstanding contributions in promoting competition and innovation in the Navy acquisition process.
The Deputy Secretary at HHS issued a memo in January 2009, which stated that component heads were expected to provide their full support to the small business program at every juncture within the acquisition process.
Agencies as a whole are held accountable for furthering small business contracting through SBA’s small business goaling program and scorecard. SBA produces an annual “goaling” report showing the percentage of contracting dollars awarded to small businesses.
SBA’s goaling report for fiscal year 2009 shows that 22 percent of eligible contract dollars governmentwide were awarded to small businesses, an amount that was just short of the statutory requirement of 23 percent. As shown in table 3, the percentage awarded to small businesses by the agencies we surveyed ranged from 6 percent to 56 percent of eligible contract dollars. SBA sets annual goals for small business contracting at each agency, basing these goals on the agencies’ past performance, total spending, and purchases of goods and services from small businesses. These goals are set for the agency as a whole, not just for the OSDBU. As we reported in 2009, OSDBU officials at some agencies reported challenges with the goal-setting process, including limitations in negotiating and appealing their goals. At that time, agencies told us the goal-setting process was not a negotiation and that SBA did not factor in changes to agencies’ contracting priorities in setting the goals. Two of the seven agencies that we interviewed (Energy and HHS) said that meeting small business goals was difficult because the goods and services the agency purchased were not well suited for small business contracts. For example, 85 percent of Energy’s contracts are facility management contracts that have traditionally been awarded to large businesses and universities to manage operations at sites such as Los Alamos National Laboratory. According to SBA officials, the agency has revised the goal-setting process to make it more collaborative. SBA also assessed all of the agencies we surveyed using its annual small business scorecard to ensure greater accountability. According to SBA, the scorecard fulfills its statutory requirement to report to the President and Congress on achievements by federal agencies against their annual goals. The SBA scorecard evaluates factors such as goals met, progress shown, agency small business strategies, and top-level commitment to meeting goals for 24 agencies and offices and the government as a whole. In fiscal year 2009, SBA updated the scorecard and now assigns a letter grade to each agency. Eighty percent of an agency’s grade is based on its progress in meeting its prime contracting goal, 10 percent on its progress in meeting its subcontracting goal, and 10 percent on a performance rating assigned to the agency by a panel of OSDBU directors. The agencies submit reports on small business achievements that the panel uses to determine performance ratings. The scorecard does not specifically consider the performance of the OSDBU or its director, although the OSDBU director may be involved in some of the activities evaluated. All seven of the OSDBU directors we interviewed are held accountable for promoting small business contracting primarily through internal performance standards and appraisals. Performance standards identify goals and set objectives that are used as key indicators of achievement during annual or midpoint performance appraisals. For example, the NASA OSDBU director explained that, as part of his performance appraisal process, he was reviewed against measurable and achievable goals. These included developing a small business improvement plan for the agency with specific initiatives to meet its small business goals and implementing a training course for agency staff on a standard method to evaluate proposals submitted to the agency.
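The scorecard weighting described above (80 percent prime contracting goal progress, 10 percent subcontracting goal progress, and 10 percent peer panel rating) can be expressed as a simple weighted sum. The sketch below assumes each component has already been scored on a 0 to 100 scale; SBA’s internal component scoring and letter-grade cutoffs are not detailed in this report, so the example inputs are hypothetical.

```python
def scorecard_score(prime_progress: float, sub_progress: float,
                    panel_rating: float) -> float:
    """Combine the three scorecard components using the 80/10/10 weights
    SBA adopted for its fiscal year 2009 scorecard.

    Inputs are assumed to be on a 0-100 scale; how SBA scores each
    component internally is not described in this report.
    """
    for value in (prime_progress, sub_progress, panel_rating):
        if not 0 <= value <= 100:
            raise ValueError("component scores must be between 0 and 100")
    return 0.80 * prime_progress + 0.10 * sub_progress + 0.10 * panel_rating

# Hypothetical agency: strong prime contracting progress, weaker subcontracting.
print(scorecard_score(prime_progress=95, sub_progress=70, panel_rating=85))  # 91.5
```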
To help OSDBUs improve their capabilities, SBA and others have initiated efforts to identify promising practices that OSDBUs can use to facilitate small business contracting. A White House Interagency Task Force on Federal Contracting Opportunities for Small Businesses, which was established in April 2010, identified the need for best practices. The task force issued a report that identified challenges to small businesses such as inadequate training for agency staff. Among other things, it recommended that the executive branch facilitate the identification and rapid adoption of successful practices for increasing opportunities for small businesses. To implement this recommendation, the task force suggested that SBA (1) develop a Web site to share best practices and (2) organize an event for OSDBUs to present best practices for ensuring greater small business participation and catalog and publicize the results. According to SBA officials, they have taken steps to plan these efforts. SBA had already begun to highlight on its Web site best practices for making opportunities available to small businesses and had identified three to date. In addition, during the scorecard process, the panel of OSDBU directors identified best practices at 12 agencies, although these practices have not been added to the list on SBA’s Web site. The Federal OSDBU Directors Interagency Council also seeks to identify best practices and has identified some that focus on interactions between OSDBUs and small businesses. Most of the 25 OSDBU directors surveyed indicated that inadequate staffing levels and limited budgetary resources were challenges to carrying out their responsibilities to at least some extent (see table 4). These issues were also reported as challenges in 2003. Twenty agencies reported that inadequate staffing levels were a challenge to at least some extent in 2010, compared with 17 agencies in 2003. Seventeen agencies reported that limited budgetary resources were a challenge to at least some extent in 2010 and in 2003. Six agencies reported that inadequate staffing levels were a challenge to a great or very great extent, and seven agencies reported that limited budgetary resources were a challenge to a great or very great extent. In a follow-up interview, OSDBU officials at one agency explained that while they were able to perform their mission, inadequate staffing levels had resulted in increased staff workloads, longer work days, and the need to cross-train staff. They noted that in addition to the functions listed in section 15(k) of the Small Business Act, they were also responsible for reviewing grant programs for small business opportunities. The OSDBU director at another agency indicated that because she had only one staff person, her office was unable to review the majority of procurement actions, including those involving contract bundling. She also noted that limited budgetary resources restricted the hiring of additional staff. Additionally, in follow-up interviews, four OSDBU directors noted that limited budgetary resources hindered their efforts to reach out to small businesses. Almost half of the OSDBU directors surveyed also reported that their lack of influence in the procurement process was a challenge to carrying out their responsibilities to at least some extent. Three agencies indicated that this issue represented a challenge to a great or very great extent.
In a follow-up interview, OSDBU officials at one agency told us that better coordination was needed between the OSDBU and acquisition officials on issues related to small businesses. While the OSDBU participated in acquisition planning, the officials noted that their role in acquisition decisions was not clear. Another OSDBU director stated that as part of the acquisition approval process, he could disagree with a contract recommendation. However, the director noted that his decision could also be appealed and overruled by an acquisition executive, thus limiting the OSDBU’s influence. At the seven agencies with major contracting activity, trends in staffing and funding levels varied over the past 5 years. From fiscal year 2006 to 2010, OSDBU staffing generally decreased at Energy, increased slightly at DLA, the Navy, and NASA, and stayed the same at the Army and HHS (see table 5). At the Air Force, staffing fluctuated in the interim, but was the same in fiscal year 2010 as it was in fiscal year 2006. Additionally, from fiscal year 2006 to 2010, OSDBU funding generally decreased at the Air Force, the Army, and Energy and increased at DLA, HHS, the Navy, and NASA (see table 6). The OSDBU directors at the seven agencies we interviewed noted a relationship between changes in staffing and funding levels. For example, recent decreases in the Air Force and Energy OSDBU budgets were due to decreases in staffing. Air Force officials told us that the OSDBU had streamlined operations by eliminating some redundant contractor positions and converting some contractors into civilian positions. They noted that these changes had resulted in greater efficiencies and increased quality. Energy officials stated that since the change in administrations in 2008, three political appointees working at the Energy OSDBU had left, halving the number of staff responsible for the same workload. However, they noted that hiring interns and contractors had mitigated staffing constraints and the agency anticipated hiring additional OSDBU staff in the future. Recent increases in OSDBU funding for the Navy and DLA had resulted in increases in staffing. The Navy OSDBU director explained that in addition to funding training and outreach programs, the recent increases in funding were used to create an OSDBU industry analyst position, and two additional positions were being developed. The DLA OSDBU director noted that recent budget increases had provided for an increase in staffing but that funding for other expenses such as travel had not increased, forcing her to turn down speaking engagements and outreach events for lack of funds. Six of the seven OSDBU directors we interviewed told us they would increase the breadth of the activities they currently performed if more resources were available. For example, the OSDBU directors at the Army and Air Force stated they would increase their outreach activities. Additionally, a few OSDBU directors noted that if more resources were available, they would perform additional analysis and offer more training. For example, the DLA OSDBU director stated she would analyze DLA commodity purchases and research trends, assist with more market research, provide more training to small business and other staff, and fill skill gaps within the office. With additional resources, the HHS OSDBU director stated she would increase internal training opportunities for acquisition program staff and provide online training for small businesses.
Section 15(k)(3) of the Small Business Act seeks to help ensure that small business advocates within federal agencies have direct access to the highest levels of the agency. However, 7 of the 16 federal agencies that we reviewed were not in compliance with the act. Moreover, all of the agencies that are currently not in compliance were not in compliance in 2003 and have maintained the same or a similar reporting structure, even though we recommended at that time that they change these structures. The OSDBU directors at the nine agencies that were in compliance with section 15(k)(3) cited benefits of the reporting relationship, while the OSDBU directors at the seven noncompliant agencies differed on the importance of reporting to the agency head or deputy head. Two thought that not reporting to the agency head or deputy head was a problem, while the other five asserted that the lack of a direct reporting relationship with the agency head had not adversely affected their efforts to advocate for small business contracting. However, we did not find that these arguments justified noncompliance with section 15(k)(3). Continued noncompliance with the requirement undermines the intent of the provision. If the agencies believe their reporting structures are sufficient to ensure that small business contracting receives attention from top management at federal agencies, at a minimum they have the obligation to explain their noncompliance to Congress and provide support for their views, including requesting any statutory flexibilities to permit exceptions as appropriate. One potential mechanism for making this information available to Congress would be through SBA’s annual scorecard process. The noncompliant agencies could include their rationale for their reporting structure in the annual reports that they submit to SBA, and SBA could include such information in its scorecard report to Congress. Given the ongoing requirement in the Small Business Act that OSDBU directors report to agency heads or deputy heads, we recommend that the heads of the Departments of Agriculture, Commerce, the Interior, Justice, State, and the Treasury and the Social Security Administration take steps as necessary to comply with the requirement or report to Congress on why they have not complied. Such information could be included in SBA’s annual scorecard report to Congress. Moreover, agencies that have not complied with the requirement could seek any statutory flexibilities or exceptions they believe may be appropriate. We sent a draft of this report to 26 agencies for their review and comment. Of the nine agencies that we concluded were complying with section 15(k)(3) of the Small Business Act, only the Department of Education provided written comments. In those comments, the agency noted that since our 2003 report it had taken definitive corrective steps to achieve and remain in compliance with all applicable requirements and stated that it was pleased with our recognition of its changes to the reporting structure. (See appendix IV for Education’s written comments.) None of the remaining eight agencies found to be in compliance with section 15(k)(3)—the Departments of the Air Force, the Army, Energy, and the Navy; DLA; EPA; HHS; and NASA—provided any comments on the draft report. Of the seven agencies that we concluded were not complying with section 15(k)(3), the Departments of Commerce, the Interior, Justice, State, and the Treasury and SSA provided written comments. These written comments are reproduced in appendixes V through X.
The Department of Agriculture did not comment on the draft report. Of the six agencies that provided written comments, the Departments of Commerce, Justice, State, and the Treasury disagreed with our conclusion that their reporting relationships did not comply with section 15(k)(3) of the Small Business Act. The Social Security Administration agreed to revise its reporting structure, while the Department of the Interior stated it planned to evaluate our recommendation and options to resolve the issue. None of the agencies’ comments caused us to revise our conclusions or recommendations. The six agencies’ specific comments and our responses are summarized below. The Department of Commerce stated that the agency was in compliance with section 15(k)(3) because the OSDBU director reported directly to the Deputy Secretary on all legislative and policy issues and to the Chief Financial Officer and Assistant Secretary for Administration on administrative matters such as personnel and budget. The comment letter also cited the agency’s small business achievements, such as exceeding small business goals. As noted in the draft report, the OSDBU director stated that she reported to the Deputy Assistant Secretary for Administration for all small business matters and to the Assistant Secretary for Administration for administrative matters such as budget and personnel. Agency documents, such as the organization chart and the OSDBU director’s two most recent performance appraisals, confirmed this reporting relationship. Further, the OSDBU director stated she had never met the Secretary or previous Deputy Secretary. She met the new Acting Deputy Secretary in December 2010, but did not have a reporting relationship with her. Therefore, we did not revise our conclusion or recommendation. The Department of the Interior stated that it did not have comments on the draft report, but would be evaluating our recommendation and options to resolve the issue. The letter also indicated that the agency would report back to GAO once it had finalized its plans. The Department of Justice agreed that its OSDBU was located within the Justice Management Division. However, its comment letter stated that this arrangement was in place for administrative purposes, and that the OSDBU director reported directly to the Deputy Attorney General on matters of substance. The letter stated that through this organizational structure, the Deputy Attorney General ensured that small businesses were provided the maximum practicable opportunity to participate in contracting opportunities throughout the agency. The letter also commented that the current placement of the OSDBU allowed for the efficient management and implementation of the small business contracting programs that were vital to the agency in satisfying its mission. As we noted in the draft report, agency documentation such as the OSDBU director’s position description and performance appraisals indicated that the director reported to the Deputy Assistant Attorney General for Policy, Management and Planning. Further, the OSDBU director told us during our review that he had never met the Deputy Attorney General. Therefore, we did not revise our conclusion or recommendation. The Department of State disagreed with the report’s conclusions. The letter pointed out that the Assistant Secretary of State for Administration, who is the designated OSDBU director, reports to the Deputy Secretary concerning small business activities. This information was included in the draft report. 
However, we concluded that the agency was not in compliance with section 15(k)(3) because the Assistant Secretary had delegated his OSDBU responsibilities to a lower-level official who did not report to the Secretary or Deputy Secretary. Regarding this conclusion, State commented that section 15(k)(3) permitted the delegation of functions from the Assistant Secretary to the OSDBU Operations Director and stated that while OSDBU directors were responsible for implementing and executing the specific functions and duties assigned under sections 8 and 15 of the Small Business Act, section 15(k)(3) contained no requirement that the director personally perform any specific functions. The letter further commented that executive branch authority was typically exercised through delegation, with an agency’s basic authority being vested in the agency head and subsequently redelegated. It cited the case of Fleming v. Mohawk Wrecking & Lumber Co., 331 U.S. 121 (1947) as an example in which the authority to redelegate is implied. However, as we stated in our 2003 report, the Fleming case recognizes that the delegation of authority may be withheld by implication, and we believe section 15(k)(3) does exactly that. As explained in this report, to ensure that the OSDBU responsibilities are effectively implemented, the statute mandates that the OSDBU director (i.e., the person carrying out the responsibilities) have immediate access and be responsible only to the agency head or deputy. The legislative history reveals that the reason for this requirement is that Congress believed that agency officials responsible for promoting procurements for small and disadvantaged businesses were often too far down the chain of command to be effective. The reporting requirement of section 15(k)(3) was intended to remedy this situation. The Department of State’s letter also highlighted its small and disadvantaged business goal achievements and commented that reorganizing the OSDBU so that its Operations Director reported directly to the Secretary or her Deputy would decrease efficiency. The letter concluded by stating that the OSDBU Operations Director currently had direct access whenever necessary to decision makers in the Department programs that utilize small and disadvantaged businesses. As indicated in the recommendation, agencies that believe that they should have greater flexibility should pursue this with Congress. The Department of the Treasury disagreed with the report’s conclusion and commented that the statute did not require that the OSDBU director be an official assigned to small business issues full time. The letter stated that the Assistant Secretary for Management, in the capacity of OSDBU director, had a direct reporting relationship to the Deputy Secretary of the Treasury and provided oversight and direction to the Office of Small Business Programs as well as to bureau heads and their procurement officials in executing the agency’s responsibilities under the Small Business Act. The letter noted that the Assistant Secretary exerted considerable influence over acquisition and budget officials at all levels with regard to attaining small business goals.
Finally, the letter commented that the OSDBU director’s assignment of day-to-day small business operations to subordinate officials did not establish those officials as the “de facto OSDBU director.” As previously discussed, we believe that section 15(k)(3) includes an implied prohibition against delegating the OSDBU director’s authority. As a result, we did not revise our conclusion or recommendation. The Social Security Administration’s letter did not provide comments on the draft report but indicated that the agency had reevaluated the reporting relationship in light of our draft. The letter stated that in the future the OSDBU director would be reporting to the Deputy Commissioner, the deputy agency head for SSA. Of the remaining 10 agencies that received a copy of the draft report, 9 agencies—the Departments of Homeland Security, Housing and Urban Development, Labor, Transportation, and Veterans Affairs; the General Services Administration; the Office of the Secretary of Defense; the Office of Personnel Management; and the U.S. Agency for International Development—did not provide any comments on the report. The Small Business Administration provided technical comments, which have been incorporated as appropriate. We also provided a copy of our survey results, which will be published in a separate product (GAO-11-436SP), to the 25 agencies we surveyed and SBA for their review and comment. Only one agency had a comment. The Department of State provided a comment via email stating that the survey questions focused on the daily duties of the OSDBU, but did not provide an opportunity to explain the delegation of these duties to the OSDBU Operations Director. We note that while the survey was designed to capture the OSDBU director’s activities (and not the delegation of such duties per se), there were numerous open-ended questions that allowed respondents to add explanations or qualifications to their responses. State also had the opportunity to explain the delegation of duties during our interviews with OSDBU officials. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Ranking Member of the Senate Committee on Small Business and Entrepreneurship, the Chairman and Ranking Member of the House Committee on Small Business, and other interested congressional committees. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your office have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XI.

Appendix I: Objectives, Scope, and Methodology

We reviewed Office of Small and Disadvantaged Business Utilization (OSDBU) practices for carrying out the requirements of the Small Business Act at federal agencies with major contracting activity. Specifically, we (1) assessed whether the director reported directly to the agency head or the deputy head; (2) determined the functions conducted by the OSDBUs; and (3) examined the actions taken by the OSDBUs and other officials at the various agencies to further small business contracting opportunities and the effects of funding and staff levels on these efforts.
To determine which federal agencies engage in major contracting activity, we reviewed fiscal year 2009 data from the Federal Procurement Data System—Next Generation. (These were the most recent data available at the time of our review.) This dataset includes information on 20 defense agencies and 62 civilian agencies. Using these data, we determined that seven agencies each procured more than $15 billion in goods and services in fiscal year 2009. Table 7 shows the seven agencies with major contracting activity as well as the other agencies covered in our review. To assess whether the OSDBU director reports directly to the agency head or the deputy head as required by section 15(k)(3) of the Small Business Act, we focused on the seven agencies with major contracting activity and nine additional agencies that we reported in September 2003 were not complying with this requirement. These nine agencies were the Departments of Agriculture, Commerce, Education, the Interior, Justice, State, and the Treasury; the Environmental Protection Agency; and the Social Security Administration. We considered agencies to be in compliance if the designated OSDBU directors exercised the OSDBU responsibilities and reported directly to and were responsible only to the agency head or the agency head’s deputy. To determine compliance, we reviewed organization charts to identify where the OSDBU was located in relation to the agency head or deputy head; OSDBU directors’ performance appraisals for the previous 2 years to identify the agency official who evaluated the OSDBU director’s performance; the most recent position description of the OSDBU director to identify the OSDBU director’s supervisor; and various other agency documents, such as reports and memoranda discussing the agency’s small business programs. We also interviewed the designated OSDBU directors at each agency to identify the official(s) they had reported to during the past year and asked them to provide information characterizing the reporting relationship, such as the extent to which small business issues were discussed. In addition, we reviewed and analyzed section 15(k)(3). To obtain information on the functions conducted by the OSDBU, actions taken by the OSDBU to further small business contracting opportunities, and the effects of funding and staff levels on these efforts, we surveyed the OSDBU directors at 25 federal agencies using a Web-based survey. The survey was similar to one we administered in 2003 and asked the OSDBU directors about their roles and functions in three areas: participation in the agency procurement process, facilitation of small business participation in agency contracting, and interaction with the Small Business Administration (SBA). The survey questions covered the OSDBU functions listed in section 15(k) of the Small Business Act as well as additional functions the OSDBUs might perform. In addition, the survey asked OSDBU directors about challenges—including limited budgetary resources and lack of adequate staffing levels—they faced in carrying out their responsibilities. We selected 25 agencies to include in our survey of OSDBU directors. These agencies included all 20 civilian agencies that procured more than $800 million in goods and services in fiscal year 2009, as well as the Department of Defense (DOD)—Office of the Secretary; the Departments of the Air Force, Army, and Navy; and the Defense Logistics Agency (DLA). The 20 agencies were responsible for more than 98 percent of civilian agency obligations in fiscal year 2009.
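The threshold-based selection described above reduces to a filtering step over per-agency obligation totals. The sketch below illustrates it; the agency names and dollar figures are hypothetical placeholders standing in for totals derived from a Federal Procurement Data System—Next Generation extract, and only the two thresholds ($15 billion and $800 million) come from this report.

```python
# Hypothetical fiscal year 2009 per-agency obligation totals, in dollars,
# standing in for totals derived from an FPDS-NG extract.
fy2009_obligations = {
    "Agency A": 97_000_000_000,
    "Agency B": 25_000_000_000,
    "Agency C": 3_000_000_000,
    "Agency D": 500_000_000,
}

MAJOR_CONTRACTING_THRESHOLD = 15_000_000_000  # "major contracting activity"
SURVEY_THRESHOLD = 800_000_000                # civilian survey population

major_contracting = [agency for agency, total in fy2009_obligations.items()
                     if total > MAJOR_CONTRACTING_THRESHOLD]
survey_population = [agency for agency, total in fy2009_obligations.items()
                     if total > SURVEY_THRESHOLD]

print(major_contracting)  # ['Agency A', 'Agency B']
print(survey_population)  # ['Agency A', 'Agency B', 'Agency C']
```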
We selected the Air Force, Army, Navy, and DLA because they were the four components within DOD that procured the most goods and services in fiscal year 2009; together, they were responsible for more than 90 percent of DOD’s obligations. The 25 agencies we selected included all 24 agencies we surveyed in 2003 and the Department of Homeland Security. To obtain data comparable with the 2003 survey of OSDBU directors, our survey instrument listed the same questions and response choices as the 2003 survey. Updates to the 2003 survey were limited to making minor word changes, reordering several questions, and deleting several questions that were no longer relevant. We obtained input from GAO experts on survey design. We also pretested the survey instrument with two OSDBU directors to help ensure that the questions were still applicable and would be correctly interpreted by respondents. Agency officials, including the OSDBU directors, were notified about the survey before it was launched on November 1, 2010. OSDBU directors were asked to complete the survey by November 22, 2010, and by December 29, 2010, we had a 100 percent response rate. From January to March 2011, we conducted follow-up with 22 OSDBU directors who answered that one or more of the functions listed in section 15(k) of the Small Business Act was not a function of their office or who did not provide responses to these questions. The purpose of the follow-up was to determine which office, if not the OSDBU, carried out these functions at their agency or to collect answers from OSDBU directors who did not provide them initially. To do this, we conducted interviews with 11 OSDBU directors and corresponded via e-mail with 11 others. Based on this follow-up, we changed 16 of the original survey answers related to whether the OSDBU director viewed a section 15(k) responsibility as an OSDBU role. Answers were changed only if at least one of the following criteria was satisfied: the OSDBU directors explicitly stated that they wished to change their answer and provided an explanation for the change; the director misunderstood the question; or the director provided a response to an initially unanswered question. Additionally, we asked 11 of the 22 OSDBU directors with whom we were conducting follow-up for further explanation of challenges they had identified as affecting their office to a great or very great extent. Two of the respondents requested that we change their responses to several challenges due to initially misunderstanding the question. We agreed, and these adjustments are reported in our findings. Also, as necessary, we followed up with OSDBU directors to clarify their responses to our open-ended questions. While the OSDBU directors at 25 agencies were asked to participate in the survey and the survey results are therefore not subject to sampling errors, not all respondents answered every question. Nonresponse, including item nonresponse, and the practical difficulties of conducting any survey may introduce error in survey results. We took steps to minimize such errors by conducting follow-up discussions with respondents who failed to answer specific questions, and by checking and verifying survey responses and analysis. The survey contained closed-ended questions that we asked OSDBU directors to answer by selecting from a finite number of response categories.
For example, some questions asked OSDBU directors to select “Yes—an OSDBU role or function” or “No—not an OSDBU role or function” based on whether their office performed a certain function. Other questions asked OSDBU directors to identify the extent to which they performed a certain function or the extent to which a certain factor was a challenge in carrying out the responsibilities of their office. Our analysis involved reviewing the frequency of responses to a given question using aggregate survey data. In the report, there are instances in which we identify all of the responses and other instances in which we identify the most common response. This report does not contain all the results from the survey; the survey and a more complete tabulation of the results are provided in a supplement to this report (GAO-11-436SP). The survey also contained open-ended questions that asked OSDBU directors to provide a narrative response. Most of these open-ended questions provided respondents the opportunity to explain answers provided to closed-ended questions. For example, some closed-ended questions asked the respondents if a certain activity was a function of their office, and the subsequent open-ended question asked them to elaborate on which office carries out this role or function if they had responded “No—not an OSDBU role or function” to the prior question. We used these open-ended responses to provide context to closed-ended questions, and some of these narrative responses were included in our findings. To examine the actions taken by the OSDBU and other agency officials to further small business contracting opportunities and the effects of funding and staff levels, we interviewed agency officials and reviewed documents at the seven agencies with major contracting activity. Also, we reviewed agency documentation, such as policy statements issued by agency leadership on OSDBU practices or small business efforts, small business manuals or strategic plans, and budget and staffing documentation. We interviewed the OSDBU directors at the seven agencies on the actions they and other agency officials had taken to further small business contracting; methods used by SBA and the agency to hold officials accountable for furthering small business contracting; and challenges that affected these efforts, such as funding and staffing levels. Furthermore, we reviewed documentation and data related to the SBA small business goaling program and spoke with SBA officials about this program, and we reviewed OSDBU council documentation and spoke with council leadership. We conducted our work from June 2010 to May 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Reporting Structures at Agencies Not in Compliance with Section 15(k)(3)

As discussed in the body of our report, seven agencies were not in compliance with section 15(k)(3) of the Small Business Act, which requires that the director of the Office of Small and Disadvantaged Business Utilization (OSDBU) be responsible only to and report directly to the agency head or deputy head. We found that a variety of reporting structures were in place.
OSDBU directors either reported to officials at lower levels than the agency head or deputy or had delegated their OSDBU director responsibilities to officials who did not report to either the agency head or the deputy head. At the Departments of Commerce, the Interior, and Justice and the Social Security Administration (SSA), the OSDBU directors reported to officials at lower levels than the agency head or deputy head. A document outlining the organization structure of the Department of Commerce’s Office of the Chief Financial Officer and Assistant Secretary for Administration stated that the OSDBU director reported to the Deputy Secretary on matters of policy and to the Assistant Secretary and Deputy Assistant Secretary for Administration on administrative matters. However, the OSDBU director at Commerce stated that she reported to the Deputy Assistant Secretary for Administration for all small business matters and to the Assistant Secretary for Administration for administrative matters such as budget and personnel. The organization chart also showed a direct link from the OSDBU to the Deputy Assistant Secretary for Administration and the Assistant Secretary for Administration (see fig. 3). In fiscal year 2009, the Assistant Secretary for Administration evaluated the OSDBU director’s performance, while the Deputy Assistant Secretary for Administration signed her performance appraisal in fiscal year 2010. The OSDBU director stated that she had never met the Secretary or previous Deputy Secretary and only met the new Acting Deputy Secretary in December 2010. The OSDBU director at the Department of the Interior reported to the Deputy Assistant Secretary for Budget, Finance, Performance, and Acquisition and to the Assistant Secretary, Policy, Management and Budget (see fig. 4). The organization chart had a notation that the OSDBU director reported to the Secretary and received administrative support from the Deputy Assistant Secretary for Budget, Finance, Performance, and Acquisition, who in turn reported to the Assistant Secretary, Policy, Management and Budget. However, the OSDBU director told us he had never met with the Secretary or Deputy Secretary and they had not provided direct input into his performance appraisals. Instead, the OSDBU director told us he met frequently with the Deputy Assistant Secretary for Budget, Finance, Performance, and Acquisition on administrative and small business matters. The director’s position description indicated that the director reports to the Deputy Assistant Secretary. His two most recent performance appraisals were signed by the Deputy Assistant Secretary and the Assistant Secretary for Policy, Management and Budget, respectively. As shown in figure 5, the OSDBU director at Justice reported to the Deputy Assistant Attorney General for Policy, Management and Planning. OSDBU officials told us the current reporting structure was the same structure that was in place in 2003. The organization chart showed that the OSDBU was located within the Justice Management Division, with the director under the supervision of the Deputy Assistant Attorney General for Policy, Management and Planning. The Justice Management Division was headed by the Assistant Attorney General for Administration, who reported to the Deputy Attorney General.
While the organization chart showed that the OSDBU was located within the Justice Management Division for administrative purposes, the OSDBU director’s position description listed his immediate supervisor as the Deputy Assistant Attorney General for Policy, Management and Planning. The Deputy Assistant Attorney General signed the OSDBU director’s two most recent performance appraisals. The OSDBU director at SSA reported to the Deputy Commissioner, Office of Budget, Finance and Management, who was one of nine deputy commissioners managing various programs and operations (see fig. 6). Both the organization chart and the OSDBU director’s position description confirmed this reporting relationship. The OSDBU director told us that he reported to this Deputy Commissioner for small business matters. Additionally, the director’s most recent performance appraisal was signed by the Deputy Commissioner, Office of Budget, Finance and Management. The director confirmed that this same structure was in place in 2003. The OSDBU director also told us he had never met either the Commissioner or Deputy Commissioner of SSA. As we found in our 2003 report, the designated OSDBU directors at the Departments of Agriculture, State, and the Treasury delegated their responsibilities to officials who did not directly report to either the Secretaries or Deputy Secretaries. At these agencies, the Assistant Secretary who managed the agency’s administrative functions was designated as the statutory OSDBU director. The Assistant Secretaries then delegated nearly all of their OSDBU responsibilities to lower-ranking officials who reported directly to the Assistant Secretaries. The lower-ranking officials thus became the de facto OSDBU directors. The designated OSDBU director at the Department of Agriculture was the Assistant Secretary for Administration, who reported to the Secretary and Deputy Secretary. However, the Assistant Secretary had delegated nearly all of his OSDBU responsibilities to a lower-level official (see fig. 7). This structure was the same one that we determined in 2003 was not in compliance with the Small Business Act. The delegated OSDBU director told us that he did not report to the Secretary or Deputy Secretary on matters involving policy, budget, and personnel. The agency’s organization chart confirmed that the delegated OSDBU director reported to the Assistant Secretary for Administration. The Assistant Secretary for Administration and the Secretary signed the director’s most recent performance appraisal. Other evidence showed that the delegated OSDBU director carried out the day-to-day implementation of the agency’s OSDBU. The delegated director told us he handled the day-to-day duties and functions of Agriculture’s OSDBU and that he spent 100 percent of his time on OSDBU duties and functions. Moreover, his position description indicated that he was the official responsible for carrying out the duties and functions listed under section 15(k). The position description stated, among other things, that the delegated director was responsible for (1) establishing short- and long-range program objectives, time schedules, and courses of action for the accomplishment of small business goals; (2) formulating, recommending, and implementing broad policies and procedures that provide the structural framework for all OSDBU functions; and (3) keeping abreast of all OSDBU activities and initiating any corrective actions deemed necessary.
In contrast, OSDBU officials stated that the Assistant Secretary for Administration spent time on OSDBU activities on an as-needed basis but estimated that this time averaged about 3 to 4 hours per week.

The Assistant Secretary for Administration was the designated OSDBU director at the State Department. The Assistant Secretary, who reported to one of the department's two Deputy Secretaries on small business matters, had delegated his OSDBU responsibilities to the Operations Director for the OSDBU (see fig. 8). In fiscal year 2010, the Operations Director's performance appraisal was signed by the Acting Assistant Secretary for Administration. The position description for the Operations Director indicated that he carried out the functions of the OSDBU director. For example, it showed that his duties included (1) providing overall direction for policies and programs governing the agency's procurement and financial assistance actions in accordance with the Small Business Act and (2) developing small business goals.

The Assistant Secretary of the Treasury for Management/Chief Financial Officer/Chief Performance Officer was the designated OSDBU director. He stated that he was responsible for meeting the agency's small business goals and interacted with the Secretary and Deputy Secretary regularly, including providing updates on small business matters. However, the Director of the Office of Small Business Programs, an official who did not directly report to either the Secretary or the Deputy Secretary, was responsible for the day-to-day management of Treasury's small business programs. According to Treasury, the Director of the Office of Small Business Programs reported to the Director of the Office of Minority and Women Inclusion, who in turn reported to the Assistant Secretary (see fig. 9). The Director of the Office of Small Business Programs stated that she spent 100 percent of her time on small business matters, which included all of the functions described in section 15(k) of the Small Business Act. Her position description confirmed this statement, indicating that her responsibilities included (1) planning, developing, issuing, and providing overall direction for policies and programs governing Treasury procurement and financial assistance actions in accordance with the Small Business Act and (2) directing Treasury's annual goal-setting process.

The number of OSDBU directors surveyed who did not view a section 15(k) function as their current responsibility varied, depending on the specific function. The number ranged from 1 director who did not view maintaining supervisory authority over OSDBU personnel as a function to 11 who did not view assisting small businesses to obtain payments from prime contractors as a responsibility. In their written comments and follow-up interviews, the directors who did not view a section 15(k) function as their responsibility generally stated that contracting, acquisition, or program staff performed it.

The OSDBU director at SSA reported that maintaining supervisory authority over OSDBU personnel was not a function of his office because he did not have staff. The OSDBU director at the Office of Personnel Management (OPM) reported that attempting to identify proposed solicitations that involved bundling of contract requirements was not a function of his office. He commented that the contracting office within his agency performed this function.
The OSDBU director at SSA reported that working with agency acquisition officials to revise procurement strategies for bundled contract requirements was not a function of his office. He commented that no office carried out this role. Rather, he noted that when contract bundling was identified, the acquisition official prepared a bundling justification for the head of the procuring activity to sign. In a follow-up interview, he clarified that nothing in the agency's policies required coordination with the OSDBU on contract bundling or gave the OSDBU the opportunity to revise procurement strategies.

Five OSDBU directors reported that facilitating small businesses' participation as subcontractors to bundled contracts was not a function of their office. At the Department of Agriculture and OPM, the OSDBU directors commented that their agencies had not bundled any contracts. The Agriculture OSDBU director also stated that his office evaluates proposed contract actions to ensure that there are no bundled contracts. The OSDBU director at the Office of the Secretary of Defense reported that this function was generally performed by the contracting offices at the Department's individual components, such as the Army, Navy, Air Force, and Defense Logistics Agency (DLA). In a follow-up interview, the OSDBU director at SSA stated that his role was limited by agency policy to reviewing subcontracting plans to ensure that certain clauses required by the Federal Acquisition Regulation were included. He noted that he would need additional resources to advocate for increased small business participation in subcontracting. The OSDBU director at the Department of Commerce explained that the OSDBU did not have the staff to review subcontracting plans.

Six OSDBU directors reported that assisting small businesses to obtain payments from their agencies was not a function of their office. All six directors (the Departments of the Air Force, Education, and the Interior; the Environmental Protection Agency (EPA); the Office of the Secretary of Defense; and SSA) reported that payment issues were addressed by agency officials in the contracting, acquisition, or program offices.

Seven OSDBU directors reported that determining a small business set-aside for an individual contract was not a function of their OSDBU. Five of these directors (the Departments of the Army, Education, and Housing and Urban Development (HUD); the Office of the Secretary of Defense; and OPM) commented that their agency's contracting or program offices performed this function. The OSDBU directors at the Departments of Transportation and Energy commented that they reviewed acquisitions over a certain threshold level ($150,000 at Transportation and $3 million at Energy).

Ten OSDBU directors reported that assigning a small business technical advisor to each office with an SBA procurement center representative was not a function of their office. In follow-up communication, the Acting OSDBU director at the Department of Veterans Affairs (VA) explained that his office had not assigned a small business technical advisor to each office with a procurement center representative but noted that OSDBU staff performed duties similar to those of a technical advisor. The Acting OSDBU director at the Department of Energy explained that because the agency had already implemented several levels of review by various technical and procurement staff, it had delayed hiring and assigning a technical advisor to contracting offices.
However, he stated that the office was reassessing its review processes and resources to determine when such a hire would be feasible. The remaining eight OSDBU directors, at the Departments of the Air Force, the Army, Commerce, the Interior, Justice, the Navy, and Transportation and at DLA, reported that the contracting offices within their agencies assigned small business technical advisors. For instance, Air Force officials commented that small business technical advisors were assigned only to field sites where they could assist in identifying specific opportunities for small businesses.

Eleven OSDBU directors reported that assisting small businesses to obtain payments from prime contractors was not one of their functions. Seven of these directors (the Departments of Agriculture, the Air Force, and Education; EPA; HUD; the Office of the Secretary of Defense; and SSA) commented that contracting, acquisition, or program officials carried out this function at their agencies. Two of the seven directors clarified in follow-up interviews that they were not privy to subcontractor information. The OSDBU director at OPM commented that because the payment of invoices by a prime contractor to its subcontractors is part of a contractual arrangement to which the government is not a party, this function should not be performed by anyone at the agency. The OSDBU director at the Department of Transportation commented that the office provides counseling on progress payments and prompt payment guidance to small businesses. The OSDBU directors at Interior and the U.S. Agency for International Development commented that if a small business were to contact the OSDBU for payment assistance, the OSDBU would facilitate communication with the contracting officer responsible for payments.

In addition to the contact named above, Paige Smith (Assistant Director), Farah Angersola, Tania Calhoun, Emily Chalmers, Janet Fong, Colleen Moffatt, Marc Molino, Kelly Rubin, Rebecca Shea, Andrew Stavisky, and William Woods made key contributions to this report.

| Section 15(k) of the Small Business Act requires that all federal agencies with procurement powers establish an Office of Small and Disadvantaged Business Utilization (OSDBU) to advocate for small businesses. Section 15(k)(3) requires that OSDBU directors be responsible only to, and report directly to, agency heads or deputy agency heads. GAO was asked to assess agencies' compliance with this reporting structure and to identify the functions OSDBUs performed. GAO reviewed compliance with section 15(k)(3) at 16 agencies: the 7 agencies that each procured more than $15 billion in goods and services in 2009 and 9 agencies that it had previously reported were not complying with this requirement. GAO also surveyed the OSDBU directors at 25 agencies that together represented more than 98 percent of civilian obligations and 90 percent of DOD obligations in 2009.

Nine of the 16 federal agencies that GAO reviewed were in compliance with section 15(k)(3) of the Small Business Act, which requires OSDBU directors to be responsible only to and report directly to the agency or deputy agency head. The remaining seven agencies were not in compliance with the provision; their OSDBU directors reported to lower-level officials or had delegated OSDBU responsibilities to officials who did not meet the reporting requirement. These agencies were also not in compliance when GAO last examined them in 2003.
During GAO's current review, directors who reported to agency heads cited benefits to the relationship, while those who did not had mixed views. GAO concluded that the views expressed by the directors at noncompliant agencies did not justify noncompliance and that these agencies should either comply or justify to Congress their need, if any, for statutory flexibility. Ongoing noncompliance with section 15(k)(3) undermines the intent of the act and may prevent some OSDBU directors from having direct access to top agency management.

Consistent with its 2004 report, GAO's current work found that the 25 OSDBU directors surveyed focused their procurement activities on certain functions listed in section 15(k). At least 19 directors listed the five functions related to contract bundling, maintaining supervisory authority over staff, and helping small businesses obtain payments from their agencies as among their duties. Fewer directors viewed the remaining three functions, such as reviewing acquisitions for small business set-asides and assisting small businesses to obtain payments from prime contractors, as duties. Directors who did not view these functions as their responsibility generally noted that contracting or program staff performed them. Whether OSDBU directors who do not perform certain functions listed in section 15(k) are complying with the statute is not clear.

GAO recommends that agencies not in compliance with section 15(k)(3) take steps to comply with this statutory requirement or report to Congress on why they have not complied, including any requests for statutory reporting flexibility as appropriate. SSA agreed with the recommendation, and Interior agreed to reevaluate its reporting structure. Commerce, Justice, State, and the Treasury disagreed, believing they were in compliance. GAO maintains its position on these agencies' compliance status, as discussed further in the report. Agriculture did not comment.